key: cord-252708-88s32x0v authors: hawkins, devan title: differential occupational risk for covid-19 and other infection exposure according to race and ethnicity date: 2020-06-15 journal: am j ind med doi: 10.1002/ajim.23145 sha: doc_id: 252708 cord_uid: 88s32x0v background: there are racial and ethnic disparities in the risk of contracting covid-19. this study sought to assess how occupational segregation according to race and ethnicity may contribute to the risk of covid-19. methods: data about employment in 2019 by industry, occupation, and race and ethnicity were obtained from the bureau of labor statistics current population survey. these data were combined with information about whether industries were likely or possibly essential during the covid-19 pandemic and about the frequency of exposure to infections and close proximity to others by occupation. the percentage of workers employed in essential industries and in occupations with a high risk of infection and close proximity to others was calculated by race and ethnicity. results: people of color were more likely to be employed in essential industries and in occupations with more exposure to infections and close proximity to others. black workers in particular faced an elevated risk for all of these factors. conclusion: occupational segregation into high-risk industries and occupations likely contributes to differential risk with respect to covid-19. providing adequate protection to workers may help to reduce these disparities.
americans are at an elevated risk of contracting the disease, being hospitalized, and dying from it. 1,2 different explanations have been offered to account for these disparities, including people of color being more likely to live in densely populated areas and, due to structural factors like discrimination and racism, being more likely to be socioeconomically disadvantaged and to have comorbid health conditions that contribute to the risk of covid-19. 3, 4 discrimination within the healthcare system may also contribute to worse outcomes when covid-19 is contracted. occupational exposures are an important factor to consider in explaining these racial and ethnic health disparities. it has already been established that some occupations and industries are at an elevated risk for covid-19, especially those in healthcare and other essential industries. 5, 6 some of these differences may be related to characteristics of the occupation worked, including exposure to infections and close proximity to others. 7, 8 due to occupational segregation, people of color are often employed in occupations that tend to be at higher risk for occupational injuries, illnesses, and fatalities.
9 to assess how this occupational segregation may contribute to racial and ethnic disparities for covid-19, this study sought to determine whether there were racial and ethnic disparities in employment in essential industries and in occupations with a higher risk of exposure to infections and close proximity to others. data showing employment by industry and occupation according to race and ethnicity were obtained from the bureau of labor statistics (bls) current population survey (cps) for the year 2019. 10 the only races and ethnicities included in these data were white, black, asian, and hispanic. the hispanic category included all those indicating that they were hispanic regardless of race, and the individual race categories included those who indicated that they were hispanic. the brookings institution performed an analysis in which it characterized industries according to whether they were either likely or possibly part of the essential workforce according to guidelines published by the department of homeland security. 11 we matched these data with the employment data from the bls cps and calculated the percentage of workers likely or possibly employed in essential industries according to race and ethnicity. we also provide data about employment in select essential industries. based on previous analyses, 7, 8 we obtained data about the occupational risk for infections and close proximity to others from the occupational information network (o*net). 12 the data for exposure to infections are based on a survey sent to workers that asks, "how often does this job require exposure to disease/infections?" the data for proximity to others are based on another question that asks, "to what extent does this job require the worker to perform job tasks in close physical proximity to other people?" based on the responses to these questions, occupations are given a score between 0 and 100 that corresponds to their frequency of exposure to infections/proximity to others.
for this analysis, high-risk occupations for infections were categorized as those with a score of 51 or higher, and higher risk for proximity to others was categorized as a score of 76 or higher. we combined these occupational scores with the employment data from the bls cps and calculated the percentage of workers with a high risk of exposure to infections and proximity to others according to race and ethnicity. we also provide data about select occupations with high exposure to infections and close proximity to others. finally, we categorized some occupations as high risk for both exposure to infections and proximity to others if they were in the high-risk group for both variables. again, we calculated the percentage of workers who fell into this high-risk category by race and ethnicity and provide data about select high-risk occupations. this project was considered exempt from review by the mcphs university institutional review board because it was conducted with previously collected, deidentified data. employment in likely and possibly essential industries can be seen in table s1. black and asian workers were also more likely to be employed in occupations with a high risk of infections. both black and asian workers were more likely to be employed as respiratory therapists. asian workers were more likely to be employed as registered nurses, and black workers were more likely to be employed as licensed practical and vocational nurses. black workers were most likely to be employed in occupations frequently requiring close proximity to others. with respect to some of the occupations that require the most frequent proximity to others, white and asian workers were most likely to be employed as physical therapists. black, hispanic, and asian workers were most likely to be employed as personal care aides, and black and hispanic workers were most likely to be employed as medical assistants.
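the threshold classification and percentage calculation described above can be sketched in a few lines of code. this is an illustrative sketch with made-up occupation scores and employment counts, not the study's bls cps or o*net data; only the cutoffs (51 for exposure to infections, 76 for proximity to others) come from the text.

```python
# Cutoffs from the analysis described above (O*NET scores run 0-100).
INFECTION_CUTOFF = 51   # high risk for exposure to infections
PROXIMITY_CUTOFF = 76   # high risk for close proximity to others

# occupation -> (infection score, proximity score); hypothetical values
onet_scores = {
    "registered nurse": (88, 92),
    "bus driver": (62, 80),
    "software developer": (10, 40),
}

# group -> {occupation: employed count}; hypothetical employment counts
employment = {
    "group_a": {"registered nurse": 50, "bus driver": 30, "software developer": 20},
    "group_b": {"registered nurse": 20, "bus driver": 10, "software developer": 70},
}

def high_risk(occupation):
    """High risk for BOTH exposure to infections and proximity to others."""
    infection, proximity = onet_scores[occupation]
    return infection >= INFECTION_CUTOFF and proximity >= PROXIMITY_CUTOFF

def pct_high_risk(counts):
    """Percentage of a group's workers employed in high-risk occupations."""
    total = sum(counts.values())
    at_risk = sum(n for occ, n in counts.items() if high_risk(occ))
    return round(100 * at_risk / total, 1)

shares = {group: pct_high_risk(counts) for group, counts in employment.items()}
print(shares)  # {'group_a': 80.0, 'group_b': 30.0}
```

with real data, comparing these per-group shares is what reveals the occupational-segregation pattern the study reports.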
black and asian workers were most likely to be employed in occupations with both frequent exposure to infections and proximity to others. black workers were more likely to be employed in two occupations that fell into this category: bus drivers and flight attendants. employment in all occupations with data available according to the risk of infection and proximity to others can be seen in table s2. protecting frontline workers is essential in the current crisis because these workers are particularly vulnerable to the disease. such protection may also help to reduce racial and ethnic disparities in the burden of covid-19. these protections should include personal protective equipment to limit exposure to the virus, as well as protections if a worker becomes sick, including paid sick leave and workers' compensation benefits. the author declares that there is no conflict of interest. john d. meyer declares that he has no conflict of interest in the review and publication decision regarding this article. devan hawkins conceived of this study, acquired data, and drafted the paper. he approves this version of the manuscript and agrees to be accountable for all aspects of the work. this project was considered exempt from review by the mcphs university institutional review board because it was conducted with previously collected, deidentified data.
devan hawkins http://orcid.org/0000-0002-7823-8153
hospitalization rates and characteristics of patients hospitalized with laboratory-confirmed coronavirus disease 2019 - covid-net, 14 states
age-adjusted rates of lab-confirmed covid-19 non-hospitalized cases, estimated non-fatal hospitalized cases, and total persons known to have died (lab-confirmed and probable) per 100,000 by race/ethnicity group
covid-19 and racial disparities
disparities in the population at risk of severe illness from covid-19 by race/ethnicity and income
risk factors of healthcare workers with coronavirus disease 2019: a retrospective cohort study in a designated hospital of wuhan in china
italian workers at risk during the covid-19 epidemic
estimating the burden of united states workers exposed to infection or disease: a key factor in containing risk of covid-19 infection
the workers who face the greatest coronavirus risk. the new york times
workers are people too: societal aspects of occupational health disparities - an ecosocial perspective
labor force statistics from the current population survey
how to protect essential workers during covid-19
supporting information: additional supporting information may be found online in the supporting information section. how to cite this article: hawkins d. differential occupational risk for covid-19 and other infection exposure according to race and ethnicity
key: cord-024378-po1bu4v3 authors: chakraborty, sweta title: how risk perceptions, not evidence, have driven harmful policies on covid-19 date: 2020-04-20 journal: nan doi: 10.1017/err.2020.37 sha: doc_id: 24378 cord_uid: covid-19 hits all of the cognitive triggers for how the lay public misjudges risk. robust findings from the field of risk perception have identified unique characteristics of a risk that allow for greater attribution of frequency and probability than is likely to be aligned with the base-rate statistics of the risk. covid-19 embodies these features.
it is unfamiliar, invisible, dreaded, potentially endemic, involuntary, disproportionately impacts vulnerable populations such as the elderly, and has the potential for widespread catastrophe. when risks with such characteristics emerge, it is imperative for there to be trust between those in governance and communication and the lay public in order to quell public fears. this is not the environment in which covid-19 has emerged, potentially resulting in even greater perceptions of risk. novel risks receive significant media attention, especially compared to other disease states that are known (eg cardiovascular disease, cancer or alzheimer's disease). 11 this was true of h1n1, 12, 13 and has anecdotally so far proven true of covid-19. risks are amplified or attenuated through social amplification stations, which can range from individuals to the news media. amplification happens in two stages: in the initial transfer of information about the risk; and in the response mechanisms in society. 14 it is through these amplification stations that public perceptions of risks are shaped. 15, 16 these amplifications are exceptionally poignant in cases where first-hand knowledge is not tenable, such as with covid-19, and the public is therefore reliant on the media to help ascertain the risk. 17, 18 research shows that media coverage of a public health risk such as covid-19 can introduce particular risk characteristics that influence public perceptions and therefore become a factor in itself in how the risk is viewed. 19, 20 in addition to the extent of media coverage is the way a public health risk is framed in the media. as mentioned above, a new, unfamiliar disease will be prescribed far higher dread than a more familiar disease (eg lou gehrig's disease), even if the more familiar disease is actually deadlier. 21 h1n1 was also referred to in south korean media as shin-jong or new flu. 22 before covid-19 was named, it was widely referred to in the media as the novel coronavirus.
this media attention to a "new" or "novel" infectious disease frames diseases as unfamiliar and potentially catastrophic, which triggers cognitive over-attributions of frequency and probability. this, along with the social amplification of risk, amplifies risk perceptions and can result in an inaccurate overemphasis of primary public health impacts. given heightened public awareness of the primary public health impacts associated with the novel coronavirus, media coverage has acted as a feedback loop, reinforcing the generated public awareness of these impacts. mass media have showcased epidemiological, medical and public health perspectives on the impacts of covid-19, primarily the lives lost, at a serious detriment to understanding the big picture. observationally, there has been rare inclusion of risk or behavioural science expertise in the media. analysis of mass global media and social media coverage in the coming months and years will surely verify this observation. even several months into the covid-19 outbreak, a comprehensive cost-benefit analysis of the various policies and combinations of policies put in place around the world has yet to be produced. policies have been based on historical data, models and a disproportionate emphasis on mitigating against primary public health impacts. practitioners in risk analysis know all too well the dangers of risk analysis and policy-making in silos, and yet there has been no mainstream thorough cost-benefit analysis on covid-19 in the context of a complex, globally interconnected risk landscape. the global risk analysis community collectively holds a plethora of knowledge and data, as well as knowledge of where data are lacking, on the primary risks related to infectious disease (eg deaths caused by the disease), as well as secondary and tertiary impacts (eg mental health impacts, lost productivity).
yet, the risk and behavioural science community has hardly been included in real-time analysis of covid-19 and its impacts. policies designed after the emergence of an outbreak carry inherent risk stemming from analysis of data that are fluid and rapidly evolving. these risks can and should be minimised by ensuring that policies across various outcome scenarios are well thought out and ready for implementation long before a crisis hits. the need for proactive preparedness for an inevitable infectious disease outbreak has been consistently maintained by the infectious disease community. this lack of preparedness has resulted in disjointed policies reacting to public perceptions of risk. specifically, a proactive risk communication plan ahead of an outbreak would have allowed for clear, consistent communication that would have quelled public fears and presumably allowed evidence-based containment and mitigation policies to take hold. 23 because of a variety of factors (eg resource restrictions, varying country priorities, general complacency when there is not an outbreak), not only are evidence-based policies not dictating nation-state responses within and beyond political borders, they are being replaced with fear-based measures. consistent, clear and credible messaging helps to quell public fears. fischhoff et al found in a survey of the us public's understanding of ebola following the 2014 outbreak in west africa that the public is less likely to horribly misjudge risk when information is effectively and accurately communicated. people also have clear preferences about how they like to receive information and what sources are viewed as trustworthy. 24 while risk tolerance varies across cultures around the globe, the public generally demands that governments ensure low exposure to risks, especially if they are new or unfamiliar. knowledge of this expectation is why proactive preparedness for anticipated risks is so critical.
it has become painfully evident that this has not been the case for covid-19. the disjointed communication response following the outbreak has most definitely perpetuated distrust in the usa and around the world. what the risk perception and communication community has urged since the development of the us centers for disease control and prevention (cdc) crisis communication lifecycle 25 (honest, accurate information, ideally researched and tested, from trusted spokespeople) has clearly been ignored at any meaningful level. the consequences of such poor preparedness and policies have real-world implications. governance decisions made in reaction to public fears err on quelling short-term hysteria at the expense of worse overall outcomes. the secondary and tertiary impacts stemming from covid-19 will go well beyond the primary public health impacts. reactive policies such as prolonged quarantines and isolations may very well increase the odds of negative outcomes. for example, brooks et al found negative psychological effects of severe social distancing measures, including post-traumatic stress symptoms, confusion and anger. they recommended that policymakers minimise such measures and communicate consistently throughout in order to reduce harm. 26 the ripple effects of the policies put in place to mitigate against the primary public health impacts of covid-19 may very well produce a worse overall outcome picture. the role of the media in contributing to public perceptions of heightened risk, and the reaction of policy-makers to govern based on public fears and not the base-rate statistics of the disease (however fluid), will present several research opportunities in the future across multiple disciplines. this need not have been the case. it is evident that existing risk communication research has not been consistently consulted in managing the covid-19 outbreak, nor has a comprehensive risk-benefit analysis been conducted to prevent worse overall outcomes.
these measures might have offset the power of the media in shaping risk perceptions, which might in turn have prevented potentially harmful policies and the misallocation of precious resources in battling this global disaster. hopefully, the takeaways from covid-19 will prove helpful for the next inevitable disease outbreak.
crisis and emergency risk communication as an integrative model
media and social amplification of risk: bse and h1n1 cases in south korea (2013) 22 disaster prevention & management 148
attention cycles and the h1n1 pandemic: a cross-national study of u.s. and korean newspaper coverage (2012) 22 asian journal of communication 214
14 re kasperson et al
communication and health beliefs: mass and interpersonal influences on perceptions of risk to self and others
the influence of mass media and interpersonal communication on societal and personal risk judgments
communicating about emerging infectious disease: the importance of research
public perceptions of everyday food hazards: a psychometric study, risk analysis 487
20 oh et al, supra
again, shin-jong flu?
key: cord-018907-c84t1bo5 authors: bin-hussain, ibrahim title: infections in the immunocompromised host date: 2012 journal: textbook of clinical pediatrics doi: 10.1007/978-3-642-02202-9_68 sha: doc_id: 18907 cord_uid: c84t1bo5 abstract: nan
primary immunodeficiency disorders including combined t-cell and b-cell immunodeficiencies, antibody deficiency, disease of immune dysregulation, congenital defects of phagocyte number or function or both, defects in innate immunity, autoimmunity disorders, complement deficiencies, and cytokine defects. secondary immunodeficiency disorders include human immunodeficiency virus (hiv) and acquired immune deficiency syndrome (aids) -both of which lead to altered cellular immunity -dysgammaglobulinemia, defective phagocytic function or neutropenia. cancer leading to neutropenia, lymphopenia, humoral deficiencies and altered physical integrity especially with the use of chemotherapeutic agents leading to disruption barrier integrity with mucositis leading to easy access of microorganisms, solid organ transplant leading to deficiencies in cellular and phagocytic immunity, malnutrition which leads to impaired immunity, and complement activity. fever is the main manifestation and occasionally the only sign of infection in immunocompromised children. when approaching a patient with immunodeficiency in the context of infection, one needs to look at the net state of immunosuppression. the net state of immunosuppression can be evaluated by the host defense defects caused by the primary disease, dose and duration of the immunosuppressive therapy (the longer duration of immunosuppressive therapy, the higher risk of infection), presence of neutropenia, and anatomical and functional integrity because defect in the skin or mucosa can lead to easy access for the microorganisms, metabolic factors, and infection with immunomodulating viruses (hiv, hbv, hcv, cmv, ebv, and hhv-6). risk of infections can be classified as high, intermediate, and low. high risk includes hematologic malignancies, aids, hsct, splenectomized patient, and congenital immunodeficiency especially severe combined immune deficiency (scid). 
intermediate risk includes solid tumors, hiv/aids, and solid organ transplantation. low-risk patients include patients with corticosteroid therapy, local defects, and diabetes. the pathogens in immunocompromised patients can be predicted based on the immune defect. for example, an anatomical disruption in the oral cavity can lead to infections caused by alpha-hemolytic streptococci, anaerobes, candida species, and herpes simplex virus (hsv). patients with urinary catheters will be at risk for infection caused by gram-negative bacteria including pseudomonas spp., enterococci, and possibly candida. if there is a skin defect, including a central venous catheter (cvc), the patient will be at risk of infection with staphylococcus species (both coagulase-negative staphylococci and staphylococcus aureus), bacillus species, atypical mycobacteria, and gram-negative organisms. a defect in phagocytic function, either quantitative or qualitative, predisposes to invasive diseases such as invasive pneumonia caused by bacterial pathogens, both gram-positive (staphylococci, streptococci, and nocardia species) and gram-negative bacilli (escherichia coli, klebsiella pneumoniae, p. aeruginosa, and other enterobacteriaceae), and fungal pathogens like candida species and aspergillus species. patients with defective cell-mediated immunity are at risk of infections caused by intracellular pathogens (i.e., viral, fungal, mycobacterial, and intracellular bacterial). intracellular pathogens include legionella species, salmonella species, mycobacteria, and listeria species; histoplasma capsulatum, coccidioides immitis, cryptococcus neoformans, candida species, and pneumocystis jiroveci; cytomegalovirus, varicella-zoster virus, epstein-barr virus, and live viral vaccines (measles, mumps, rubella, and polio); and protozoa, including toxoplasma gondii, strongyloides stercoralis, cryptosporidia, microsporidia, and isospora species. patients with immunoglobulin deficiency are at risk of sinopulmonary infection caused by s.
pneumoniae, haemophilus influenzae, and cns infection from viral infections, especially enterovirus, leading to chronic meningoencephalitis, as well as gastrointestinal infection due to giardiasis. patients with complement deficiency are at risk of diseases caused by s. pneumoniae, h. influenzae, and neisseria species. splenectomized patients are at risk of invasive diseases (e.g., sepsis, meningitis) caused by encapsulated organisms including s. pneumoniae, h. influenzae, and neisseria meningitidis. in evaluating patients with immunodeficiency, one can predict the pathogen based on the primary immune defect, the organs involved, and the clinical presentation of the patient. for instance, staphylococcus aureus, burkholderia cepacia, serratia marcescens, pseudomonas, and aspergillus infection should be considered for a patient with chronic granulomatous disease (cgd) presenting with soft tissue infection, lymphadenitis, liver abscess, osteomyelitis, pneumonia, or sepsis. in centers dealing with immunocompromised patients, the microbiology laboratory as well as the radiology service need to be well equipped and trained for diagnosing these patients. patients with fever should be worked up with a complete blood count with differential, renal and hepatic profiles, blood culture from the central line (if present), and peripheral culture. chest x-rays are not done routinely unless the patient has respiratory symptoms. other investigations need to be guided by the presentation of the patient. patients with diarrhea should have stool checked for bacterial culture, ova and parasites, viral culture, rotavirus, and electron microscopy for viral studies, in addition to microspora, cryptosporidium, and isospora. in addition to a chest x-ray, patients with respiratory symptoms require a nasopharyngeal aspirate for rapid viral testing and pcr multiplex, a newly developed laboratory procedure that can screen for multiple viruses and other respiratory pathogens in the same setting.
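the defect-to-pathogen mappings summarized above lend themselves to a simple lookup table. the sketch below is a condensed, non-exhaustive illustration of that idea; the category names and pathogen lists are abbreviated from the text, not a clinical reference.

```python
# Toy lookup (illustrative only): immune defect -> representative pathogens
# to consider, condensed from the categories described in the text.
PATHOGENS_BY_DEFECT = {
    "phagocytic": ["staphylococci", "gram-negative bacilli", "candida", "aspergillus"],
    "cell-mediated": ["intracellular bacteria", "mycobacteria", "viruses", "pneumocystis jiroveci"],
    "immunoglobulin": ["s. pneumoniae", "h. influenzae", "enteroviruses", "giardia"],
    "complement": ["s. pneumoniae", "h. influenzae", "neisseria species"],
    "asplenia": ["s. pneumoniae", "h. influenzae", "n. meningitidis"],
}

def likely_pathogens(defects):
    """Ordered, de-duplicated union of pathogens across a patient's defects."""
    found = []
    for defect in defects:
        for pathogen in PATHOGENS_BY_DEFECT.get(defect, []):
            if pathogen not in found:
                found.append(pathogen)
    return found

# Example: a splenectomized patient with complement deficiency.
print(likely_pathogens(["complement", "asplenia"]))
# -> ['s. pneumoniae', 'h. influenzae', 'neisseria species', 'n. meningitidis']
```

in practice this is only the first step; the text goes on to stress that the organs involved and the clinical presentation refine the prediction further.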
patients with skin lesions should have a skin biopsy taken from the lesion, which will be sent for culture (bacterial, fungal, and mycobacterial) in addition to histopathology with gram stain and special stains for fungi as well as acid-fast stain (afb stain). there are several objectives in managing infections in immunocompromised patients. the first and foremost objective is to ensure patients' survival and prevent infectious morbidity. other objectives are to decrease days of hospitalization, decrease exposure to multidrug-resistant organisms, and decrease the number of days of antibiotic use to minimize selection of resistant organisms. modification of antimicrobial therapy in immunocompromised patients is the rule rather than the exception. timely modification of antibiotic therapy is very important to control breakthrough infection. there are several questions to be addressed in choosing effective antimicrobial therapy when evaluating patients. in addition to history and physical examination, it is important to determine: which arm or arms of the immune system are affected? what is the clinical syndrome/site of infection (to predict the likely pathogens)? what clinical specimen(s) should be obtained (for empiric/definitive therapy)? and which antimicrobial agents have predictable activity against the pathogens? with these in mind, one can predict the pathogen and choose the right antimicrobial agents. patients with wiskott-aldrich syndrome are at risk of bacterial pneumonia as well as sepsis with gram-positive organisms including mrsa. in this situation, medication should include agents active against gram-negative pathogens plus anti-staphylococcal agents, for example, cefotaxime or ceftriaxone plus nafcillin; if mrsa or penicillin-resistant s. pneumoniae is suspected, one can use vancomycin. the pathogen in immunocompromised patients can be predicted by the system involved during the presentation.
for example, the presentation and etiological agents of pneumonia in immunocompromised patients are different from those in immunocompetent persons. in evaluating pneumonia in immunocompromised patients, one needs to know that pulmonary complications are present in up to 60% of immunocompromised patients and that mortality is up to 80% in those who require mechanical ventilation. the initial evaluation needs rapid assessment of the vital signs including oxygen saturation, a complete blood count with differential, renal profile, blood culture, and imaging of the lung with either chest x-ray or ct scan. the organism can be predicted based on the primary immune defect. in the history, the defect in the immune system, the presence or absence of neutropenia, history of antimicrobial exposure, the presence of potential pulmonary pathogens in previous cultures, and the presence of indwelling catheters should be looked at. the pattern and distribution of radiological abnormalities can predict the pathogen, as can the time and rate of progression and the time to resolution of pulmonary abnormalities. for definitive diagnosis, invasive procedures may be needed, including bronchoalveolar lavage (bal), transbronchial biopsy, needle biopsy, thoracoscopic biopsy, and open lung biopsy. in obtaining a biopsy from these patients, it is very important to send it for histopathology with special staining for viruses, bacteria, fungi, pneumocystis, and mycobacterial pathogens, and also for viral, fungal, bacterial, and mycobacterial culture. other laboratory tests that will help in diagnosing pneumonia are nasal washings or swabs for direct fluorescent antibody, pcr for respiratory viruses and atypical pneumonia, culture and staining, cmv antigenemia or cmv viral load testing, the aspergillus galactomannan assay, and 1,3-beta-d-glucan.
the radiological findings in immunocompromised patients can be focal (lobar or segmental infiltrate), diffuse interstitial infiltrate, or nodular (with or without cavitation). a focal infiltrate can be due to gram-positive or gram-negative bacteria, legionella, mycobacteria, or fungal infection. the noninfectious etiologies include infarction, radiation, and drug-related bronchiolitis obliterans organizing pneumonia (boop). diffuse interstitial infiltrate is caused by viral infection, pneumocystis jiroveci, less likely mycobacteria, disseminated fungal infection, and atypical pneumonia including chlamydia, legionella, and mycoplasma. noninfectious etiologies causing diffuse interstitial infiltrate include edema, acute respiratory distress syndrome (ards), drug reactions, and radiation. for nodular infiltrate with or without cavitation, the infectious etiologies include aspergillus and other mycoses, nocardia, gram-positive or gram-negative bacteria, anaerobes, and mycobacterium tuberculosis, as well as noninfectious etiologies including disease progression, such as metastasis, and drug toxicity. the management of immunocompromised patients with pulmonary infiltrate will depend on the patient's presentation. if the patient is acutely ill, it is very important to begin empiric therapy to cover the likely pathogens based on the presentation of the patient and the primary immune defect, with a simultaneous comprehensive evaluation. subsequently, therapy should be adjusted based on cultures and clinical response.
in providing empirical antibiotic therapy for a patient with pulmonary infiltrate and a defect in cell-mediated immunity, one needs to consider pneumocystis jiroveci, nocardia, legionella, and mycoplasma, in addition to aerobic gram-positive cocci and gram-negative bacilli. it is therefore advised to use trimethoprim-sulfamethoxazole, a macrolide (erythromycin or clarithromycin), and agents active against gram-positive and gram-negative organisms, for example, a third-generation cephalosporin with or without an aminoglycoside, with anti-gram-positive coverage (either nafcillin or vancomycin) based on the local incidence of methicillin-resistant staphylococcus aureus (mrsa) and penicillin-resistant streptococcus pneumoniae. fever is defined in the context of febrile neutropenia as a single oral temperature of more than 38.3°c, or more than 38.0°c for at least 1 h, that is not related to the administration of pyrexial agents, including blood, blood products, ivig, and pyrogenic drugs, especially ara-c. neutropenia is defined as an absolute neutrophil count (anc) of less than 500/mm³, or less than 1,000/mm³ with a predicted decline to less than 500/mm³ within 48 h. the most important risk factor is the presence of neutropenia, as well as the degree and duration of neutropenia. the lower the neutrophil count, the higher the risk of infection. the longer the duration of neutropenia, the higher the risk of infection. usually, neutropenia is considered high risk if lasting 7 days or more and low risk if less than 7 days. other risk factors include associated medical comorbidity, primary disease, and status (remission or relapse). low-risk patients are clinically defined by neutropenia anticipated to last less than 7 days, being clinically stable, and having no medical comorbid conditions. about 50% of neutropenic patients who become febrile have established or occult infections, and about 25% of patients with an anc less than 100 cells/mm³ have bacteremia.
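the numeric definitions above (the fever thresholds, the anc cutoffs, and the 7-day duration rule) can be expressed as simple predicates. this is a minimal illustrative sketch of the stated criteria, not clinical software; the function names are my own.

```python
# Illustrative encoding of the febrile-neutropenia definitions in the text.

def is_fever(temps_c, sustained_hours=0.0):
    """Single oral temp > 38.3 C, or > 38.0 C sustained for at least 1 h."""
    if any(t > 38.3 for t in temps_c):
        return True
    return sustained_hours >= 1.0 and all(t > 38.0 for t in temps_c)

def is_neutropenic(anc, predicted_decline_below_500=False):
    """ANC < 500/mm3, or < 1,000/mm3 with a predicted fall below 500 in 48 h."""
    return anc < 500 or (anc < 1000 and predicted_decline_below_500)

def neutropenia_risk(duration_days):
    """Duration-based stratification used above (7 days or more = high risk)."""
    return "high" if duration_days >= 7 else "low"

# Example: a single reading of 38.5 C with ANC 300/mm3 meets both criteria,
# and anticipated 10-day neutropenia stratifies as high risk.
print(is_fever([38.5]), is_neutropenic(300), neutropenia_risk(10))
# -> True True high
```

note that the text also excludes fever attributable to pyrexial agents (blood products, ivig, ara-c); that clinical judgment is outside what a threshold check can capture.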
The risk varies depending on the underlying disease; for example, patients after allogeneic bone marrow transplantation are at higher risk than those after autologous transplantation, while AML carries the highest risk of all. The lowest risk is in patients with cyclic neutropenia. In evaluating a patient with fever and neutropenia, it is important to keep in mind that signs and symptoms can be muted or subtle. Profoundly neutropenic patients can sometimes have life-threatening infections and yet be afebrile, especially when presenting with abdominal pain. A careful and comprehensive physical examination is critical and should be repeated at least daily, because these patients are dynamic and their condition can change rapidly. Other important points in the history include the nature of chemotherapeutic agents, steroids, or other immunosuppressive agents, because these predict the degree of immunosuppression and the duration and severity of neutropenia. A history of antibiotic prophylaxis is also important, because an antibiotic used as prophylaxis should be avoided in treating these patients. Reviewing recently documented infections with their susceptibilities can help in determining empiric therapy; for example, if the patient had a previous infection with a multidrug-resistant pathogen, empiric therapy can be chosen to cover that pathogen. If the patient had a recent surgical procedure, the skin barrier has been broken and the patient is at risk for certain pathogens, including gram-positive cocci (coagulase-negative staphylococci and Staphylococcus aureus). Allergy history is an important factor in selecting empirical therapy, as medications to which the patient is allergic must be avoided. A detailed and thorough physical examination is important, with focus on sites that can serve as portals of entry for pathogens, including the periodontium, pharynx, lower esophagus, lung, skin, perineum, bone marrow aspiration site, and catheter entry and exit sites.
After the history and thorough physical examination, blood cultures from central and peripheral lines should be done in order to identify the source of infection. For example, if the blood culture is positive from the central line but negative from the peripheral line, the likely source is the central line; if both are positive, the differential time to positivity can help determine the source. Routine surveillance cultures are not indicated, as they are not cost effective and have low predictive value. Other cultures should be guided by the sites of infection: a patient with respiratory symptoms needs a nasopharyngeal aspirate for viral studies, multiplex PCR, and atypical pneumonia testing; for a patient with gastrointestinal symptoms such as diarrhea, stool should be sent for viral studies, culture and sensitivity, and ova and parasites. A chest x-ray should not be done routinely in all patients with fever and neutropenia because it has a low yield in patients without respiratory symptoms; it is done only in children who have respiratory symptoms, and if negative, a chest CT scan should be considered to better evaluate a patient not responding to therapy. Most patients with fever and neutropenia have no identifiable site of infection and no positive culture results. Bloodstream infection is documented in about 20% of patients with fever and neutropenia. Disruption of the skin or soft tissue, including a vascular access or catheter insertion site, can be a point of entry. Centers that care for cancer patients should monitor their own infection rates, pathogens, and resistance patterns; these local data help in selecting the appropriate empirical antimicrobial therapy (Table 68.1). There is no ideal regimen, because the choice depends on variables including the risk status of the patient, the local microflora and their sensitivity patterns, toxicity, indication, preference, and cost.
Prompt initiation of broad-spectrum therapy when neutropenic patients become febrile is the key to successful management. In the 1960s the mortality rate was initially up to 80%, but with the introduction of empiric therapy against gram-negative organisms the mortality rate is now close to 5%. There is no single ideal regimen: the choice should be determined by the isolates and their susceptibilities at each individual center, since one cannot extrapolate the likely pathogens from other centers, and the same center may even see different pathogens and susceptibility patterns in adult versus pediatric populations with febrile neutropenia (Table 68.2). Monotherapy and combination therapy have equal efficacy. Monotherapy needs to have antipseudomonal activity, for example an antipseudomonal penicillin with or without a beta-lactamase inhibitor, a carbapenem, or a third- or fourth-generation antipseudomonal cephalosporin. Combination therapy pairs an antipseudomonal beta-lactam with an aminoglycoside. Because both approaches have equal efficacy, the local data should guide whether empiric therapy is monotherapy or combination therapy. It is worth stressing that vancomycin should not be used routinely for empiric therapy in febrile neutropenia; there are specific indications for vancomycin. These include hemodynamic instability or other evidence of severe sepsis, radiographically documented pneumonia, a blood culture positive for gram-positive bacteria before final identification and susceptibility testing is available, clinically suspected catheter-related infections (e.
g., chills or rigors with infusion through the catheter, or cellulitis around the catheter entry/exit site), skin or soft-tissue infection at any site, colonization with methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococcus, or penicillin-resistant Streptococcus pneumoniae, and severe mucositis if fluoroquinolone prophylaxis has been given and ceftazidime is employed as empirical therapy. If a patient is started empirically on vancomycin, the need for its continuation should be reassessed on a daily basis: vancomycin is overused in more than 90% of cases, and overuse selects for resistant organisms and the emergence of vancomycin-resistant enterococci. Factors influencing antimicrobial selection include the types of bacterial isolates found in the institution, antibiotic susceptibility patterns, drug allergies, presence of organ dysfunction, the chemotherapeutic regimen, whether the patient was receiving prophylactic antibiotics, and the patient's condition at diagnosis, for example the signs and symptoms present at the initial evaluation and the presence of documented sites requiring additional therapy. Center-specific factors include local resistance patterns, effects on microbial ecology, and a high prevalence of vancomycin-resistant enterococci (VRE) or extended-spectrum beta-lactamase (ESBL)-producing organisms. Patient-specific factors include recent antibiotic use such as current prophylaxis, drug allergies, and underlying organ dysfunction; the signs and symptoms present at the initial evaluation also shape the empiric regimen. In recent years there has been growing interest in outpatient therapy for patients with fever and neutropenia. The advantages of ambulatory management of febrile patients with neutropenia, especially those at low risk, include lower cost (particularly with oral outpatient therapy), fewer superinfections caused by multidrug-resistant nosocomial pathogens, improved quality of life for the patient, greater convenience for family or other caregivers, and more efficient utilization of valuable and expensive resources. (Table footnote: not recommended for routine use. *Other antimicrobials (aminoglycosides, fluoroquinolones, and/or vancomycin) may be added to the initial regimen for a complicated presentation or if resistance is suspected or proven.) The disadvantages include the potential risk of developing serious complications such as septic shock at home, the risk of noncompliance particularly with oral therapy, a false sense of security or inadequate monitoring for response to therapy or toxicity, and the need to develop a team and infrastructure capable of treating substantial numbers of low-risk patients. There are several requirements for a successful outpatient treatment program for patients with febrile neutropenia: institutional infrastructure and support; a dedicated and experienced team of healthcare providers; availability of institution-specific epidemiological, susceptibility, and resistance data; a microbiologically appropriate treatment regimen; frequent follow-up monitoring of outpatients; adequate transportation and communication capabilities; and access to the management team 24 h a day, 7 days a week. Certain clinical events or manifestations require modifying the initial antimicrobial therapy. For example, if a patient has breakthrough bacteremia and a gram-positive organism is isolated, add vancomycin, especially if there is a risk of MRSA or penicillin-resistant pneumococcus. If a gram-negative organism is isolated, consider resistant gram-negatives and change or broaden the regimen (e.g., carbapenems if local data show better sensitivity to carbapenems than to cephalosporins or other beta-lactam antibiotics). If the patient has a catheter-associated soft tissue infection, vancomycin should be added.
Patients with severe oral mucositis or necrotizing gingivitis are at risk from anaerobic bacteria as well as viruses: add an agent active against beta-lactamase-producing anaerobic bacteria (clindamycin or metronidazole), and acyclovir should be considered. If the patient has diffuse pneumonia, continue the broad-spectrum anti-gram-negative coverage and add trimethoprim-sulfamethoxazole and a macrolide to the therapy. A rising neutrophil count in a patient who develops new infiltrates while on antibiotics can be related to recovery from neutropenia; if the patient is stable, observe. If the neutrophil count is not rising, antifungal therapy should be considered, as the patient is at risk for fungal infection. In addition to other evaluations, aspergillus galactomannan and beta-D-glucan (Fungitell) testing should be done along with a chest CT scan; depending on the CT findings, bronchoalveolar lavage or lung biopsy should be considered. A patient with prolonged fever and neutropenia needs continued observation if recovery of neutropenia is not imminent. Antifungal therapy can include regular amphotericin B, a lipid formulation of amphotericin B (liposomal amphotericin B (AmBisome) or amphotericin B lipid complex (ABLC)), caspofungin, or voriconazole, depending on the availability of medications and the epidemiology of the institution.
key: cord-266526-8csl9md0
authors: li, shuai; xu, yifang; cai, jiannan; hu, da; he, qiang
title: integrated environment-occupant-pathogen information modeling to assess and communicate room-level outbreak risks of infectious diseases
date: 2020-10-24
journal: build environ
doi: 10.1016/j.buildenv.2020.107394
doc_id: 266526
cord_uid: 8csl9md0

Microbial pathogen transmission within built environments is a main public health concern. The pandemic of coronavirus disease 2019 (COVID-19) adds to the urgency of developing effective means to reduce pathogen transmission in mass-gathering public buildings such as schools, hospitals, and airports.
To inform occupants and guide facility managers to prevent and respond to infectious disease outbreaks, this study proposed a framework to assess room-level outbreak risks in buildings by modeling built environment characteristics, occupancy information, and pathogen transmission. Building information modeling (BIM) is exploited to automatically retrieve building parameters and possible occupant interactions that are relevant to pathogen transmission. The extracted information is fed into an environment pathogen transmission model to derive the basic reproduction numbers for different pathogens, which serve as proxies of outbreak potentials in rooms. A web-based system is developed to provide timely information regarding outbreak risks to occupants and facility managers. The efficacy of the proposed method was demonstrated by a case study, in which building characteristics, occupancy schedules, pathogen parameters, as well as hygiene and cleaning practices are considered for outbreak risk assessment. This study contributes to the body of knowledge by computationally integrating building, occupant, and pathogen information modeling for infectious disease outbreak assessment, and communicating actionable information for built environment management. This study aims to develop a framework for room-level outbreak risk assessment based on integrated building-occupancy-pathogen modeling to mitigate the spread of infectious disease in buildings. The rationale is twofold. First, buildings are highly heterogeneous, with a variety of compartments of distinctive functionalities and characteristics, providing diverse habitats for humans and various pathogens [17, 18]. Modeling pathogen transmission and exposure within a building at the room level will provide useful information at an unprecedented resolution to implement appropriate disease control strategies.
Second, the spread of infectious diseases can be mitigated if occupants and facility managers have adequate and timely information regarding the outbreak risks within their buildings. Communicating actionable information to occupants and facility managers through an easily accessible interface will help occupants follow hygiene and social distancing practices, and help facility managers schedule disinfection for rooms with high outbreak risks. To address these knowledge gaps, a novel environment-occupant-pathogen modeling framework and a web-based information visualization system are developed to assess the outbreak risks and mitigate the spread of infectious diseases in buildings (Fig. 1). First, to assess the outbreak risks, the fomite-based pathogen transmission model proposed in [24] is adopted in this study. The limitation of that model is that environmental parameters and occupant characteristics are not automatically extracted and incorporated, hindering the computation of spatially-varying environmental infection risks in buildings. To overcome this limitation, BIM is exploited to automatically retrieve venue-specific parameters, including building characteristics and occupancy information, that are relevant to pathogen transmission and exposure. Then, the extracted building and occupant parameters are used together with pathogen-specific parameters in a human-building-pathogen transmission model to compute the basic reproduction number R0 for each room in a building. R0 is used as a proxy to assess the outbreak risks of different infectious diseases. Second, a web-based system is developed to enable information visualization and communication in an interactive manner, providing guidance for occupants and facility managers.
This study innovatively establishes the computational links among building, occupant, and pathogen modeling to predict outbreak risks. Risk prediction for spatially and functionally distributed rooms in a building provides useful information for end-users to combat and respond to the spread of infectious diseases, including the seasonal flu and COVID-19. The developed method and system add a health dimension, transforming current building management toward a user-centric and bio-informed paradigm. In this study, a computational tool is developed based on Dynamo [29] to extract the geometry and properties of each room in a building, and to compute the corresponding venue-specific parameters. Fig. 4 shows the workflow of the information retrieval process. After the room parameters are extracted, the total furniture area in each room is calculated by summing up the surface areas of all furniture inside the room. In the epidemiology literature, R0 is one of the most widely used indicators of transmission intensity to characterize the outbreak potential of an infectious disease in a population. Commonly, R0 > 1 means the epidemic begins to spread in the population, R0 < 1 means the disease will gradually disappear, and R0 = 1 means the disease will stay alive and reach a balance in the population. As R0 increases, the outbreak risk increases and more severe control measures and policies are needed [37]. In this study, we categorize the level of outbreak risk into low, mild, moderate, and severe based on the range of R0.
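The furniture-area step described above can be sketched as a small data model. The room and furniture fields here are hypothetical stand-ins for the properties a BIM query (e.g., via Dynamo) would return; this is an illustrative sketch, not the authors' tool.

```python
from dataclasses import dataclass, field

@dataclass
class Furniture:
    name: str
    surface_area_m2: float  # exposed surface area of the item

@dataclass
class Room:
    room_id: str
    floor_area_m2: float
    occupancy: int
    furniture: list = field(default_factory=list)

    def total_furniture_area(self) -> float:
        # venue-specific parameter: sum the surface area of all
        # furniture inside the room, as described in the workflow
        return sum(f.surface_area_m2 for f in self.furniture)

# a hypothetical classroom with 30 desks and a lectern
classroom = Room("C-101", floor_area_m2=80.0, occupancy=30,
                 furniture=[Furniture("desk", 1.2)] * 30 + [Furniture("lectern", 0.8)])
area = classroom.total_furniture_area()  # approximately 36.8 m^2
```

In the full pipeline, values like `total_furniture_area()` and `occupancy` would be fed, together with pathogen-specific parameters, into the fomite transmission model to compute a per-room R0.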
Specifically, the risk is low when R0 < 1; mild when 1 ≤ R0 < 1.5, because there is a fair chance that transmission will fade out when R0 is not much larger than 1 [38]; moderate when 1.5 ≤ R0 < 2, indicating an epidemic can occur and is likely to do so [39, 40]; and severe when R0 ≥ 2, in which case immediate actions should be taken by facility managers, such as cleaning the surfaces, to reduce the risk. To better communicate the infection risk to occupants and facility managers, a web-based system was developed to visualize the outbreak risk of different pathogens in each room within a building. Fig. 5 illustrates the architecture of the web-based system, which consists of four modules: data management, model derivative, web application, and user. Three add-in functions were developed to help users visualize the interior layout of the building and color-coded rooms with their corresponding risk levels, as well as search for room-specific disease outbreak risk information. The first add-in function is "vertical explode", which is used to view each level of the building. This function helps the user visualize the interior and room layout; facility users can also use it to see the outbreak risk of rooms on each floor and take appropriate precautions. For facility managers, the "vertical explode" function provides a holistic view of the risk distribution on each level so they can take informed actions, such as limiting the number of occupants and implementing cleaning and disinfection protocols, to control the spread of the disease. This function is integrated with the web-based system, and buttons were created to activate and deactivate it. The second function is "room filtering", which is used to highlight rooms at different risk levels for a specific pathogen.
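The R0 banding above maps directly to a lookup. A minimal sketch follows; note the text leaves R0 exactly equal to 2 unassigned, so the severe band is taken here as R0 ≥ 2.

```python
def classify_risk(r0: float) -> str:
    """Map a room's basic reproduction number to the four risk
    levels used in the study: low, mild, moderate, severe."""
    if r0 < 1.0:
        return "low"
    elif r0 < 1.5:
        return "mild"
    elif r0 < 2.0:
        return "moderate"
    return "severe"

print([classify_risk(x) for x in (0.6, 1.2, 1.7, 2.4)])
# ['low', 'mild', 'moderate', 'severe']
```

A classification like this is what the web system would use to decide which highlight color and which recommendation to show for each room.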
The user first selects one of the three pathogens from a dropdown menu: SARS-CoV-2, influenza, or norovirus. The user can then set a risk threshold to highlight rooms with R0 greater than a specified value. In addition, different highlighting colors represent the different infection risk levels: low, mild, moderate, and severe risks are represented by green, blue, celery, and red, respectively. The third function is "room query", which enables the user to search for a specific room and retrieve the infection risk for the three pathogens. The "room query" function is displayed as a search box in the web-based system, so users can easily find the potential risk of a specific room. Finally, end users can access the web-based information communication system and obtain information about the outbreak risk in each room of the building through various channels, including laptops, smartphones, and tablets. A hypothetical case study is used as an example to demonstrate the efficacy of the proposed framework and the newly developed web-based system. The building information model of a six-floor school building with 221,000 square feet is used; the building contains classrooms and faculty and graduate-assistant offices. The room types considered in the case study are offices and classrooms: five offices and five classrooms were selected. The venue-specific parameters of the rooms are extracted and listed in Table 3, and the computed R0 values of the three diseases are listed in Table 4. From Table 4, the R0 values vary across rooms and diseases. R0 values in offices are smaller than those in classrooms, which stems from the smaller occupancy and lower rate of fomite touching in offices compared to classrooms.
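The "room filtering" behavior described above amounts to a threshold filter plus a level-to-color map. A sketch follows; the room data are invented for illustration, and the color names follow the text.

```python
# color names follow the study's scheme (green/blue/celery/red)
RISK_COLORS = {"low": "green", "mild": "blue", "moderate": "celery", "severe": "red"}

def filter_rooms(r0_by_room: dict, pathogen: str, threshold: float) -> dict:
    """Return {room: (r0, highlight_color)} for rooms whose R0 for the
    chosen pathogen exceeds the user-set threshold."""
    out = {}
    for room, per_pathogen in r0_by_room.items():
        r0 = per_pathogen[pathogen]
        if r0 > threshold:
            # same banding as the study, with severe taken as R0 >= 2
            level = ("low" if r0 < 1 else "mild" if r0 < 1.5
                     else "moderate" if r0 < 2 else "severe")
            out[room] = (r0, RISK_COLORS[level])
    return out

rooms = {
    "office-1":    {"sars-cov-2": 0.4, "influenza": 0.2, "norovirus": 0.9},
    "classroom-4": {"sars-cov-2": 1.8, "influenza": 0.6, "norovirus": 2.6},
}
print(filter_rooms(rooms, "sars-cov-2", 1.0))
# {'classroom-4': (1.8, 'celery')}
```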
For influenza, the R0 values in all rooms are less than 1, indicating that influenza is unlikely to break out in the building through fomite-mediated transmission. This could be partially explained by its relatively short infectious period, high inactivation rate on hands, low hand-to-fomite pathogen transmission efficiency, and relatively low infectiousness for the same amount of pathogen. For COVID-19, the R0 values in all rooms are higher than those of influenza, and the risk in classroom 4 reaches a moderate level, indicating that COVID-19 has the potential to break out in the classroom. COVID-19 has a relatively high outbreak risk in most cases because it has a high shedding rate, small surface inactivation rate, and high transfer efficiency from fomites to hands. For norovirus, the R0 values are high in most classrooms, likely because of its high infectivity, long infection period, and high hand-to-fomite transmission efficiency compared to the other two diseases. This finding also aligns with the trend obtained in [24]. These results show that the outbreak risk of an infectious disease is influenced by both venue-specific and pathogen-specific parameters, which highlights the significance of integrating BIM and the pathogen transmission model in assessing spatially-varying disease outbreak risk. A sensitivity analysis was further conducted to evaluate the influence of the rate of fomite touching and the shedding rate of SARS-CoV-2 on R0, based on the estimated ranges of the two parameters (listed in Table 2). Fig. 6 illustrates the changes in R0 with an increasing fomite-touching rate for all three diseases in both classrooms and offices: the disease outbreak risk increases as the touching rate increases, and the R0 values for norovirus and COVID-19 in classrooms 1, 2, and 4 may exceed 1 at higher touching rates.
On the other hand, the infection risk in offices, and that of influenza in classrooms, remains low even if occupants touch objects in the rooms more frequently. It is therefore particularly important to educate students in classrooms with relatively high occupancy not to touch common areas frequently. Fig. 7 illustrates the changes in R0 of COVID-19 with varying shedding rates: the shedding rate has a significant impact on the outbreak risk of COVID-19 in classrooms 1, 2, and 4. Therefore, for classrooms with relatively large occupancy, control strategies should be adopted to reduce pathogen shedding from occupants, such as using face masks and covering the mouth when coughing. Different surface-cleaning frequencies can be applied to different rooms to reduce the risks to an acceptably low level. Cleaning surfaces five times per day decreases R0 by over 50% compared to no surface cleaning. Considering the ongoing outbreak of COVID-19, classrooms with high occupancy (e.g., classroom 4) should be given particular attention for surface cleaning; cleaning surfaces at least two times per day is needed to achieve a low risk level. For norovirus, classrooms with relatively large occupancy (e.g., classrooms 1, 2, and 4) require more frequent surface cleaning to reduce the outbreak risk to the low level. Other complementary strategies, such as increased hand washing and occupancy limits, should be adopted to maintain a low level of outbreak risk. As shown in Fig. 10, the room filtering and room query functions help the user easily locate high-risk rooms and query risk information for a specific room. Specifically, Fig. 10(a) shows an exemplary output of the room filtering function, highlighting rooms with R0 greater than 1 for COVID-19, and Fig. 10(b) displays an example of the room query function in the web system.
The pathogen risk information for influenza, norovirus, and COVID-19 is retrieved with corresponding recommendations. With the web-based information communication system, facility managers can take important measures to control the spread of diseases, such as designing appropriate cleaning and disinfection strategies, promoting hand hygiene, reducing maximum occupancy, and adapting facility usage schedules based on the risk distribution across rooms within the building. For instance, deep cleaning and disinfection are required for rooms with severe outbreak risk. In addition, facility managers can post signs in these high-risk areas to remind occupants to take essential precautions such as social distancing and hand hygiene. The web-based system will also keep facility users, including teachers, students, and other staff, aware of up-to-date outbreak risk information within the building, so they can take informed actions to avoid further spread of disease; for example, facility users can avoid entering rooms with high outbreak risk.

4. Discussion

The results and insights derived from the analysis have important implications for adaptive built environment management to prevent infectious disease outbreaks and respond to ongoing pandemics. Due to varying building characteristics, occupancy levels, and pathogen parameters, microbial burdens and outbreak risks differ significantly even within the same building, highlighting the need for spatially-adaptive management of the built environment. The proposed method automates the batch process for simulation and prediction of outbreak risks for different pathogens at the room level, and visualizes the risks for adaptive management. Predicting outbreak risks at room resolution enables a paradigm of spatially-adaptive management of the built environment.
With these new streams of risk information, customizable interventions can be designed. For instance, consistent with practice during the COVID-19 pandemic, reducing the accessible surfaces in rooms and restricting room occupancy are effective strategies to reduce outbreak risks. The spatially-varying risk information can also guide facility managers to pay close attention to high-risk areas by adopting more frequent disinfection practices. A BIM-based information system is developed to extract the information necessary for modeling infection within buildings, and to visualize the derived information in an easy-to-understand and convenient way through web pages. Such information-driven interventions can alleviate pathogenic burdens in buildings and prevent the spread of infectious diseases. Providing information to end-users is critically important for changing behaviors: human behavior plays an important role in the transmission of pathogens such as SARS-CoV-2, and changing behaviors is critical to preventing transmission. Providing timely and contextual information is a promising way to motivate behavioral change. With room-level outbreak risk information, users can be motivated or persuaded by the visualized risks to practice appropriate behaviors such as wearing a mask, social distancing, and hand washing. Facility managers can use the information to conduct knowledge-based management, such as limiting room occupancy, managing crowd traffic, and rearranging room layouts. This study has some limitations that deserve future research. First, the model does not consider factors such as sunlight exposure, humidity, and airflow that may impact the persistence and transmission of pathogens in built environments.
This is mainly because the quantitative impacts of these factors on pathogen persistence and transmission are largely ambiguous, if not unknown. If these impacts can be quantified and the environmental parameters can be monitored and modeled in BIM, the proposed framework can be extended to incorporate them. Second, the computation of R0 considers only fomite-mediated transmission, not airborne or close-contact transmission. Microbial pathogens may have different transmission routes, including airborne, close-contact, and fomite-based transmission; this study focused on fomite-based transmission to illustrate the modeling approach for assessing outbreak risks and to demonstrate the efficacy of the developed information system in guiding infection control practices and building operations. To fully assess exposure risks and outbreak potentials, all important routes need to be considered. In addition, the outbreak potentials of a variety of pathogens could be combined into an aggregate index, which may be more intuitive for occupants and facility managers who are not public health experts. Third, the system relies mainly on static models and does not make full use of dynamic, real-time data regarding built environments and occupant behaviors such as presence and interactions with objects. In future studies, Internet of Things sensors can be installed in buildings and algorithms developed to retrieve dynamic data for integration with the models, enabling more accurate and robust risk estimation. Fourth, the web-based system can be further improved by connecting it with smart devices, such as robots for automated cleaning and disinfection, and smartphones for precision notifications.
This study creates and tests a computational framework and tools to explore the connections among built environment, occupant behavior, and pathogen transmission. Using BIM-based simulations, building-occupant characteristics, such as occupancy and accessible surface area, are extracted as venue-specific parameters. The fomite-mediated transmission model is used to predict contamination risks in the built environment by calculating a room-by-room basic reproduction number R0, based on which the level of infection risk in each room is characterized as low, mild, moderate, or severe. A web-based system is then created to communicate the infection risk and outbreak potential information within buildings to occupants and facility managers. The case study demonstrated the efficacy of the proposed methods and developed systems. Practically, the method and system can be used in a variety of built environments, especially schools, hospitals, and airports, where transmission of infectious pathogens is of particular concern. The outbreak risks predicted at room resolution can inform facility managers in determining room disinfection and cleaning frequency, schedules, and standards. In addition, appropriate operational interventions, including access control, occupancy limits, social distancing, and room arrangement (e.g., reducing the number of tables and chairs), can be designed based on the derived information. Occupants can access this information via the web pages to plan their visits and time spent in the facilities, and to practice appropriate personal hygiene and cleaning based on the information.
key: cord-011325-r42hzazp authors: 
stowe, julia; andrews, nick; miller, elizabeth title: do vaccines trigger neurological diseases? epidemiological evaluation of vaccination and neurological diseases using examples of multiple sclerosis, guillain–barré syndrome and narcolepsy date: 2019-10-01 journal: cns drugs doi: 10.1007/s40263-019-00670-y sha: doc_id: 11325 cord_uid: r42hzazp this article evaluates the epidemiological evidence for a relationship between vaccination and neurological disease, specifically multiple sclerosis, guillain–barré syndrome and narcolepsy. the statistical methods used to test vaccine safety hypotheses are described and the merits of different study designs evaluated; these include the cohort, case-control, case-coverage and the self-controlled case-series methods. for multiple sclerosis, the evidence does not support the hypothesized relationship with hepatitis b vaccine. for guillain–barré syndrome, the evidence suggests a small elevated risk after influenza vaccines, though considerably lower than after natural influenza infection, with no elevated risk after human papilloma virus vaccine. for narcolepsy, there is strong evidence of a causal association with one adjuvanted vaccine used in the 2009/10 influenza pandemic. rapid investigation of vaccine safety concerns, however biologically implausible, is essential to maintain public and professional confidence in vaccination programmes. vaccination is one of the most effective public health interventions, successfully controlling many serious infectious diseases and saving millions of lives globally each year [1] . however, as with any medical treatment or drug, vaccination can never be entirely risk free in terms of unwanted side effects. an important feature of vaccination is that unlike most therapeutic drugs, vaccines are given prophylactically to healthy individuals, often young children. 
when an event occurs shortly after vaccination in an otherwise healthy individual without an obvious cause, it is tempting to attribute its occurrence to the preceding vaccination. however, inferring a causal association with a vaccine purely from a temporal association is often incorrect, as unrelated events will occur by chance irrespective of vaccination. it can be hard to disentangle these temporal associations when there is a strong perception that a temporal association is necessarily evidence of a causal one, the onset of the condition is insidious, and its timing relies on patient or parental recall [2] . even if based only on a temporal sequence of events, it is important that such safety concerns are rapidly investigated with robust epidemiological studies, to allow mitigation procedures to be put in place if an association is confirmed or, if unfounded, to provide the necessary evidence to sustain public confidence in the vaccination programme, without which coverage drops and disease control is lost. in this article, which focusses on the evaluation of the relationship between vaccination and neurological diseases, the statistical approaches to causality assessment are first discussed and their relative merits evaluated, followed by an overview of a selection of vaccine safety studies involving neurological disease with differing conclusions; some of the included studies have shown a small elevated risk, others none, two lack the evidence to draw any definitive conclusion, and one provides robust evidence of a causal association. to establish whether a signal is associated with the vaccine and to quantify the risk, a formal epidemiological study is usually needed. this requires a pre-specified protocol detailing the population under study, the period after vaccination for which an elevated risk is suspected, and the methods for case identification and statistical analysis. 
most importantly, the ascertainment of the condition of interest must be unbiased with respect to vaccination history [3] . the following statistical methods have been used most commonly to address vaccine safety questions and to control for the inherent biases in the population and data under study. although these methods aim to address confounding, it can be difficult to fully control for this in an observational study. an assessment of the likelihood of residual confounding/bias and its potential extent is an important consideration when weighing up the strength of a study and drawing a conclusion with regard to causality. in a cohort study, the risk of developing the condition is compared in the vaccinated and unvaccinated individuals in the study population. cohort studies need to be very large to detect rare vaccine adverse events, and this often makes them impractical for a prospective study. retrospective cohort designs can use routinely collected data and cases identified by clinical coding, but this study design may be disadvantaged by the need to collect a large number of confounding variables. factors such as underlying illnesses, sociodemographic characteristics, and propensity to consult may differ between unvaccinated and vaccinated individuals and would therefore need to be adjusted for in the analysis, as they can independently determine the likelihood of the adverse event under study. the advantage is that an entire population is studied and relative and absolute incidence estimates can be reported. in addition, once the cohort is defined, several outcomes can be assessed within the same study design. when studying a vaccine that is given as part of a national schedule and high coverage is achieved, the small unvaccinated group may differ from the vaccinated group in ways that are difficult to capture and control for in an adjusted analysis. 
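the cohort comparison described above reduces to simple incidence arithmetic; the counts below are hypothetical, chosen only to show that both relative and absolute estimates are available from a cohort design.

```python
def cohort_risks(cases_vacc, n_vacc, cases_unvacc, n_unvacc):
    """Return (relative risk, absolute risk difference) for a cohort
    study comparing vaccinated and unvaccinated groups."""
    risk_v = cases_vacc / n_vacc
    risk_u = cases_unvacc / n_unvacc
    return risk_v / risk_u, risk_v - risk_u

# hypothetical counts: 30 events among 1,000,000 vaccinated people,
# 20 events among 1,000,000 unvaccinated people
rr, rd = cohort_risks(30, 1_000_000, 20, 1_000_000)
print(round(rr, 2))            # relative risk of 1.5
print(round(rd * 100_000, 2))  # absolute excess of 1 case per 100,000
```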
additionally, care must be taken to ensure unvaccinated cases are indeed unvaccinated and the data are not simply missing. missing data can occur when regional vaccine datasets are used and the transfer and sharing of data are not comprehensive. cohort studies are feasible for vaccine safety studies when data from a whole country or region can be used. an example of this is in denmark, where danish residents contribute to a large linked dataset consisting of demographic factors linked to health information, including potential confounding variables [4] . the self-controlled case-series method (sccs) was designed for rapid unbiased assessment in vaccine safety studies using available disease surveillance data that may not be amenable to cohort analysis. the method only requires information on the timing of cases during a defined observation period and their vaccination status [5] . the cases act as their own controls, as the incidence of the event in pre-defined risk periods following vaccination is compared to the incidence outside the risk period, generating a relative incidence (ri) measure (fig. 1) . a significant advantage of the method is that confounding factors that do not vary over the observation period, such as co-morbidities or sociodemographic status, are automatically controlled for. adjustment for time-varying confounders such as age is also possible by dividing up the observation period further into age categories. it has been demonstrated that the power of the sccs method is nearly as good as that of a cohort study when uptake is high and risk intervals are short, and it is superior to that of a case-control study [6] . the self-controlled case-series method has been used by public health england to address many pertinent vaccine safety concerns [7-10] . 
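in its simplest unadjusted form, the relative incidence compares the event rate inside the risk window with the rate outside it; a real sccs analysis fits a conditional poisson model with age adjustment, but the crude estimator below (with hypothetical pooled counts) shows the core idea.

```python
def crude_relative_incidence(events_risk, days_risk, events_baseline, days_baseline):
    """Crude SCCS relative incidence: event rate in the post-vaccination
    risk windows divided by the rate in the remaining observation time."""
    return (events_risk / days_risk) / (events_baseline / days_baseline)

# hypothetical pooled data over 100 cases: 12 onsets in 42-day risk
# windows totalling 4,200 days, 24 onsets in 32,300 days of baseline time
ri = crude_relative_incidence(12, 4_200, 24, 32_300)
print(round(ri, 2))
```

because each case contributes both risk and baseline time, fixed characteristics of the individual cancel out of this ratio, which is the self-matching property described above.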
this design has been chosen both because of its simplicity and ability to control for individual-level confounding, and also because a national cohort of cases cannot be easily defined using the national hospital data, as no national immunisation register is available. unlike a cohort study, the sccs method does not provide absolute risk estimates. however, if the number of doses given to the population from which the cases are derived is known and if ascertainment is complete, then absolute risks can be estimated and the cases attributable to vaccination estimated from the magnitude of the ri. a case-control study requires smaller numbers than a cohort study, but the same confounding and bias can occur, and it also has the added difficulty of selecting the correct controls for comparison. for vaccinations given within a short age range in the first and second year of life, or during a short calendar period to target ages, close matching of the controls on date of birth is required. prior vaccination status is then compared between cases and controls using the date of onset in cases as a reference date. to obtain enough power to assess the required risk, multiple controls per case are often needed, and defining appropriate criteria for the selection of controls can be problematic. while it is important to ensure that controls are similar to cases on characteristics such as age and geographical location that can independently affect vaccination status, overmatching is a risk if too many extraneous variables are included in the matching, resulting in loss of efficiency and potentially introducing bias. a case-control study does not provide absolute risk estimates; rather, it measures the odds of vaccination in cases compared to controls. 
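the odds comparison at the heart of a case-control study is a single ratio; the 2x2 counts below are hypothetical.

```python
def case_control_odds_ratio(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Odds ratio: odds of prior vaccination in cases divided by the
    odds of prior vaccination in matched controls."""
    return (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)

# hypothetical: 40 of 100 cases vaccinated vs 25 of 100 matched controls
odds_ratio = case_control_odds_ratio(40, 60, 25, 75)
print(round(odds_ratio, 2))  # 2.0
```

for a rare outcome, this odds ratio approximates the relative risk that a cohort study would estimate directly.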
however, as with the sccs method, if the number of doses given to the population from which the cases are derived is known and if ascertainment is complete, then absolute risks can be estimated and the cases attributable to vaccination estimated from the magnitude of the odds ratio. the case-control design has been used where controls can be selected from the same population as cases and can be readily matched on the relevant variables. as a case-control approach is more efficient than the cohort approach, it is often used on large databases that could be used for a cohort analysis. examples include the vaccine safety datalink in the usa, which accesses complete patient records from health maintenance organisations, or studies using hospital admission databases linked to national immunisation registers such as the australian childhood immunisation register [11] [12] [13] . the case-coverage design has recently been used in vaccine safety studies [14, 15] . it is similar to the screening method, which until recently has been primarily used for vaccine effectiveness assessment [16] , although it is more limited in terms of adjustment for possible confounders than the sccs method. each case is matched to a population coverage estimate, and this is then used to see if the number of cases vaccinated is greater than expected. the method uses logistic regression on the odds of vaccination with an offset for the log-odds of the matched population coverage; thus it is similar to a case-control study with thousands of controls per individual. this design has been used by public health england to assess the association between the as03 adjuvanted h1n1 pandemic vaccine pandemrix™ and narcolepsy. 
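the case-coverage comparison can be illustrated without the full logistic-regression machinery: the odds of vaccination among cases are compared against the odds implied by the matched population coverage estimate. the case counts and coverage below are hypothetical.

```python
def case_coverage_odds_ratio(vacc_cases, total_cases, coverage):
    """Odds of vaccination among cases divided by the odds implied by
    the matched population coverage estimate. This single-stratum ratio
    is what the logistic model with a log-odds(coverage) offset
    estimates when pooled across matched strata."""
    case_odds = vacc_cases / (total_cases - vacc_cases)
    population_odds = coverage / (1.0 - coverage)
    return case_odds / population_odds

# hypothetical: 11 of 15 cases vaccinated where matched population
# coverage is only 20%
print(round(case_coverage_odds_ratio(11, 15, 0.20), 2))  # 11.0
```

an odds ratio well above 1 indicates more vaccinated cases than the coverage data would predict, which is the signal the design is built to detect.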
because pandemrix™ was rolled out over a short period of time in the winter season of 2009/10, targeting children of different ages according to whether they had certain co-morbid conditions, it was necessary to have detailed information on dates of vaccination and dates of birth to estimate the population coverage for each narcolepsy case by age and time period. this was available from a representative subset of general practices in england, which also provided information on co-morbidities, the only other variable considered as a potential confounder [14] . in the first study assessing the risk of narcolepsy in children [14] , both the sccs method and the case-coverage design were used. the results from the sccs method were unclear, because the method requires the incidence in a pre-specified risk period after vaccination to be compared with the baseline incidence. as the duration of the risk period had not been defined at the time, the chosen post-vaccination interval proved too short, with the result that four patients with symptom onset more than 6 months after vaccination were included in the baseline period. the choice of study design to answer a vaccine safety question will depend on the hypothesis to be tested, the available data sources and the extent to which confounding variables are likely to bias the results. the sccs method has now become the gold standard design in vaccine safety studies, owing to the benefits highlighted above, but for each study question the methods should be adapted and potential biases considered in the context of the population under study, the dataset being utilised and the hypothesis being tested. it will inevitably be a trade-off between the ideal and the practical, and the best designs will vary according to setting. when many studies are performed to answer the same question, the key to demonstrating causality is consistency in the results from well-designed studies [17] . 
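as noted above, when the number of doses given to the source population is known and ascertainment is complete, the magnitude of the ri yields absolute and vaccine-attributable risks. a sketch of that arithmetic, with entirely hypothetical inputs:

```python
def attributable_risk_per_100k_doses(vacc_cases, relative_incidence, doses):
    """Cases attributable to vaccination per 100,000 doses, using the
    attributable fraction among vaccinated cases, (RI - 1) / RI."""
    attributable_cases = vacc_cases * (relative_incidence - 1.0) / relative_incidence
    return attributable_cases * 100_000 / doses

# hypothetical inputs: 20 vaccinated cases, relative incidence of 14,
# 1,000,000 doses distributed to the source population
print(round(attributable_risk_per_100k_doses(20, 14, 1_000_000), 2))
```

the same calculation applies with an odds ratio in place of the ri for the case-control and case-coverage designs, provided the outcome is rare.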
neurological conditions have a long history of causal associations with vaccination being inferred from temporally related onsets. an example is the damage done to the uk whole-cell pertussis vaccination programme in the late 1970s, when neurological damage was wrongly attributed to the vaccine based on case reports of infants with onset of encephalopathy shortly after vaccination. these reports of permanent brain damage following vaccination attracted intense and sustained professional and media interest, causing vaccination rates to fall from 79% in 1973 to 31% in 1978. following this, three national epidemics of pertussis occurred, with an estimated 5000 hospital admissions, 200 cases of pneumonia, 83 cases of convulsions and 38 deaths [18, 19] . neurological vaccine safety concerns can be broadly assigned to being either biologically plausible or unsubstantiated and unexpected. concerns in the biologically plausible group often reflect a direct effect of a component of the vaccine. for example, in the case of a live attenuated vaccine, the adverse reaction could mimic, at a lower frequency, what the non-attenuated wild virus would do. this is demonstrated in the rare risk of acute flaccid paralysis following the oral polio vaccine after a reversion to virulence, or the risk of aseptic meningitis after the attenuated urabe mumps strain in the measles-mumps-rubella vaccine due to retention of some neurovirulent characteristics [20, 21] . the unsubstantiated and unexpected group usually arises because the timing of the vaccine coincides with the diagnosis of the condition and has no immediate biologically plausible explanation. examples of this are measles-mumps-rubella and autism [22] , gait disturbance and measles-mumps-rubella [23] , and thiomersal and developmental delay [24] . although a signal may not have a clear biological basis for its causation, it still needs to be fully investigated using robust epidemiological methods. 
neurological diseases for which a causal association with vaccination has been suspected have some common features. first, they are often serious conditions that are rare; second, their aetiology and pathophysiology are poorly understood; and third, immune stimulation is thought to play a role in the pathogenesis of the condition. because vaccines provoke an immune response, albeit targeted to a specific antigen, it can be tempting to invoke a superficially plausible causal pathway when adverse events with a suspected immune aetiology arise shortly after vaccination. universal hepatitis b vaccination was recommended by the world health organization in the early 1990s to protect against the hepatitis b virus, which can cause chronic liver damage and cancer. following this recommendation, france carried out a mass vaccine campaign in 1994. shortly after, cases of multiple sclerosis (ms) with onset or relapse after vaccination were reported, leading to the hypothesis that the vaccine could cause an acute autoimmune reaction in susceptible persons soon after administration. with a lack of adequate background rates of ms in the vaccinated population to put the reported cases into perspective, mistrust in the vaccine soon grew and the vaccine programme was subsequently suspended. a systematic review and meta-analysis by mouchet et al. published in 2018 that included 13 studies with a control group found no evidence of an increased risk. the overall adjusted risk ratio for ms was 1.19 (95% confidence interval [ci] 0.93-1.52) and for central demyelination was 1.25 (95% ci 0.97-1.62) [25] . within the systematic review, there was one study that found a significant association using a primary care database from england [26] . 
this study was unable to adjust for all risk factors, and additionally no routine hepatitis b vaccination programme was in place at the time, with most of the vaccine delivered via occupational health departments whose records may not be routinely transferred to primary care databases. france continues to have suboptimal vaccine coverage [27, 28] and has the lowest level of confidence in vaccine safety in europe [29, 30] . this demonstrates the need to have robust methods in place to rapidly respond to such scares, because once confidence is lost in a vaccine it is difficult to restore and may generate a more general lack of confidence in vaccine safety. guillain-barré syndrome (gbs) is the most common cause of acute neuromuscular paralysis in the developed world, resulting in muscle weakness and sometimes paralysis, which can lead to respiratory failure and death in up to 13% of cases [31] . the strongest evidence of a causal link with a vaccine was obtained during the 1976 us swine influenza vaccine programme in military personnel, which was found to be associated with a risk of one case per 100,000 and resulted in the suspension of the vaccine programme [32] . since then, gbs has been a potential vaccine-associated adverse event of interest, particularly for vaccines given in adolescence, an age at which autoimmune diseases are often diagnosed. a meta-analysis found an overall elevated relative risk of gbs after influenza vaccines (95% ci 1.20-1.66). the overall relative risk for gbs after seasonal vaccine was marginally increased at 1.22 (95% ci 1.01-1.48), with a somewhat larger relative risk of 1.84 (1.36-2.50) for the 2009 h1n1 pandemic vaccine, but this was not significantly higher than the relative risk for seasonal vaccine [33] . the authors did not find any statistically significant differences by geographical region nor between adjuvanted and unadjuvanted vaccines. 
an earlier meta-analysis of studies using the sccs method also found a small elevated risk of gbs after the monovalent h1n1 pandemic vaccine, with an ri of 2.42 (95% ci 1.58-3.72) in the 42 days following vaccination [34] . similarly, salmon et al. found an ri of 2.35 (95% ci 1.42-4.01) in a large study in the usa [35] . in contrast, a strong association between gbs and a preceding influenza-like illness was shown in a study in england using primary care data and the sccs method. no association was seen with influenza vaccine in the 0-90 days after administration (ri 0.76 [95% ci 0.41-1.40]), but a significantly increased risk was found in the 90 days after influenza-like illness (ri 7.35 [95% ci 4.36-12.38]) [36] . these studies show that a small overall risk of gbs after influenza vaccine probably does exist, with a slightly larger risk after the 2009 monovalent pandemic vaccine. the mechanism may be multi-factored, with the risk varying with the vaccine used, co-circulation of other infections and the inherent susceptibility to developing gbs. however, the small risk that exists does not outweigh the risk of developing gbs after influenza itself. human papilloma virus vaccine is given at an age when autoimmune disorders are often diagnosed. following a french study reporting a signal for gbs after human papilloma virus vaccination, a study was conducted in england identifying gbs cases in a national hospital discharge database (hospital episode statistics) [37] . primary care practitioners were then contacted for the vaccination history and asked to confirm the gbs diagnosis, provide an onset date and send supporting documentation. in a self-controlled case-series analysis of 101 cases with a record of human papilloma virus vaccination, episodes in the 0- to 91-day risk period after any dose showed no significant increased risk, ri 1.04 (95% ci 0.47-2.28). 
the analysis was also stratified by manufacturer (of either the quadrivalent or bivalent product); there was no difference in the ri between products and no significant increased risk for either manufacturer. the pandemic influenza vaccine pandemrix™ was the most widely used vaccine in europe during the 2009 pandemic. it was a monovalent h1n1 pdm 09 vaccine containing as03, a powerful oil-in-water adjuvant. uptake of the vaccine varied between countries, with high coverage of 75% in children in finland [38] and lower coverage in england, where children in a risk group eligible for the seasonal influenza vaccine and later all children under 5 years of age were targeted, with uptake being 37% and 24%, respectively. in england, pandemrix™ was also used in the 2010/11 influenza season because of a shortage of seasonal influenza vaccine. in august 2010, concerns were raised in finland and sweden, where vaccine coverage was high, about a possible association between narcolepsy and pandemrix™ when a large increase in cases of narcolepsy in vaccinated individuals was reported by sleep centres [38, 39] . a subsequent cohort study in finland reported a 13-fold increased risk of narcolepsy following pandemrix™ in children aged 4-19 years, the majority of whom had onset within 3 months of vaccination and almost all within 6 months [38, 40] . narcolepsy was a totally unexpected adverse event, and the early reports were met with initial scepticism in the global vaccine community. the world health organization global advisory committee on vaccine safety issued a statement in april 2011 stating "no excess of narcolepsy has been reported from several other european states where pandemrix was used" and "it seems likely that some as yet unidentified additional factor was operating in sweden and finland". 
however, it was unlikely that narcolepsy would be identified by passive surveillance systems in other countries where pandemrix™ coverage was low, given the low background incidence of the condition and the complexity and frequent delays in diagnosis. to assess the risk identified in finland, the health protection agency (now public health england) performed a study in sleep centres in england, where the majority of children with sleep disorders are seen. this study identified a 14-fold increased risk in those vaccinated with pandemrix™ [14] , with the attributable risk estimated to be 1.9 per 100,000 doses. this demonstrated that even in a country where vaccine coverage was low, the association could be demonstrated using robust epidemiological methods. the study of the relationship between narcolepsy and pandemrix™ has been an epidemiological challenge in terms of identifying the cases and their vaccine histories in a non-biased manner. not only can the diagnosis be lengthy and complex, but admitted patient care databases, which are widely used for non-biased ascertainment of cases in vaccine safety studies, are incomplete: patients experiencing narcolepsy may not be admitted, and if they are admitted, the admission date is not an accurate reflection of the onset of the narcolepsy symptoms, leading to misclassification bias. an important consideration when selecting cases is awareness of the hypothesised association. this awareness may lead to increased reporting of cases known to be vaccinated, and has two aspects: public awareness and professional awareness. first, heightened public awareness may lead to vaccinated individuals presenting to healthcare institutions and being diagnosed earlier than unvaccinated cases, leading to ascertainment bias. if a condition has an insidious onset, making the recall of the first symptom difficult to determine, media attention may lead to a differential recall of the symptom-onset date in vaccinated cases. 
using source documents created prior to any media attention in the country of study can address this potential recall bias. professional awareness is likely to occur even if media attention is low, as health professionals in the specialty will be aware of current topics of interest through professional bodies and the literature. differential misclassification bias will occur if cases known to have been vaccinated are more likely to be assigned a diagnosis of narcolepsy than unvaccinated cases. in the study from england, public awareness of the association was assessed by analysing google searches for "narcolepsy" in the period of interest, which found little activity in the uk compared with sweden (fig. 2) [14] . even with these practical challenges, a consistent strong association has now been demonstrated in countries that used pandemrix™, but no association has been seen with other pandemic or seasonal vaccines [17] . as with all vaccine safety studies, but particularly in the case of narcolepsy and pandemrix™ where the association was completely unexpected, the key to demonstrating causality was consistency of results from well-designed studies in different settings. the answer to the question of whether vaccination can cause neurological disease is multifaceted. the evidence does not support an association between ms and the hepatitis b vaccine, while for gbs and influenza vaccines the evidence suggests a small increased risk, though it is much smaller than the risk from a natural influenza virus infection. the now established association between narcolepsy and pandemrix™ should act as a lesson for the vaccine safety community that sometimes unexpected but serious conditions can arise and need to be investigated rapidly, however biologically implausible. 
the neurological vaccine safety issues outlined here demonstrate that rapid assessments of safety signals are needed to ensure that public confidence is maintained in national immunisation programmes. the confirmation of a signal and estimation of the magnitude of vaccine-attributable risk will require consistent results from a number of well-designed epidemiological studies, preferably conducted in different settings. as the experience with narcolepsy has shown, not all vaccine safety concerns can be anticipated on the basis of biologically plausible and thus predictable effects. as new vaccines are introduced, the basis of discussions on vaccine safety should be the acceptance that vaccination can carry a small risk, but that this risk needs to be balanced against the enormous individual and public health benefits. funding: public health england, national infection service, immunisation and countermeasures division has provided vaccine manufacturers with post-marketing surveillance reports, which the marketing authorisation holders are required to submit to the uk licensing authority in compliance with their risk management strategy. a cost recovery charge is made for these reports. world health organization. 
the power of vaccines: still not fully utilized
recall bias, mmr, and autism
vaccine safety surveillance
postlicensure epidemiology of childhood vaccination: the danish experience
control without separate controls: evaluation of vaccine safety using case-only methods
statistical assessment of the association between vaccination and rare adverse events post-licensure
autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association
guillain-barre syndrome and h1n1 (2009) pandemic influenza vaccination using an as03 adjuvanted vaccine in the united kingdom: self-controlled case series
idiopathic thrombocytopenic purpura and mmr vaccine
the risk of intussusception following monovalent rotavirus vaccination in england: a self-controlled case-series evaluation
population-based study of rotavirus vaccination and intussusception
intussusception risk and disease prevention associated with rotavirus vaccines in australia's national immunization program
mmr vaccine and idiopathic thrombocytopaenic purpura
risk of narcolepsy in children and young people receiving as03 adjuvanted pandemic a/h1n1 2009 influenza vaccine: retrospective analysis
risk of narcolepsy after as03 adjuvanted pandemic a/h1n1 2009 influenza vaccine in adults: a case-coverage study in england
estimation of vaccine effectiveness using the screening method
incidence of narcolepsy after h1n1 influenza and vaccinations: systematic review and meta-analysis
pertussis immunisation and control in england and wales, 1957 to 2012: a historical review
the pertussis vaccine controversy in great britain
risk of aseptic meningitis after measles, mumps, and rubella vaccine in uk children
risks of convulsion and aseptic meningitis following measles-mumps-rubella vaccination in the united kingdom
autism and mmr vaccination in north london; no causal relationship
no evidence of an association between mmr vaccine and gait disturbance
thiomersal exposure in infants and developmental disorders: a retrospective cohort study in the united kingdom does not support a causal association
hepatitis b vaccination and the putative risk of central demyelinating diseases: a systematic review and meta-analysis
recombinant hepatitis b vaccine and the risk of multiple sclerosis: a prospective study
estimates of national immunization coverage
european centre for disease prevention and control. measles vaccination coverage (second dose)
the state of vaccine confidence 2016: global insights through a 67-country survey
vaccine hesitancy among general practitioners and its determinants during controversies: a national cross-sectional survey in france
guillain-barre syndrome
guillain-barre syndrome following vaccination in the national influenza immunization program
guillain-barre syndrome and influenza vaccines: a meta-analysis
international collaboration to assess the risk of guillain barre syndrome following influenza a (h1n1) 2009 monovalent vaccines
association between guillain-barre syndrome and influenza a (h1n1) 2009 monovalent inactivated vaccines in the usa: a meta-analysis
investigation of the temporal association of guillain-barre syndrome with influenza vaccine and influenza-like illness using the united kingdom general practice research database
no increased risk of guillain-barre syndrome after human papilloma virus vaccine: a self-controlled case-series study in england
increased incidence and clinical picture of childhood narcolepsy following the 2009 h1n1 pandemic vaccination campaign in finland
risks of neurological and immune-related diseases, including narcolepsy, after vaccination with pandemrix: a population- and registry-based cohort study with over 2 years of follow-up
as03 adjuvanted ah1n1 vaccine associated with an abrupt increase in the incidence of childhood narcolepsy in finland
key: cord-024982-4f6m3kfc authors: che huei, lin; ya-wen, lin; chiu ming, yang; li chen, hung; jong yi, wang; ming hung, lin title: occupational health 
and safety hazards faced by healthcare professionals in taiwan: a systematic review of risk factors and control strategies date: 2020-05-18 journal: sage open med doi: 10.1177/2050312120918999 sha: doc_id: 24982 cord_uid: 4f6m3kfc background: healthcare professionals in taiwan are exposed to a myriad of occupational health and safety hazards, including physical, biological, chemical, ergonomic, and psychosocial hazards. healthcare professionals working in hospitals and healthcare facilities are more likely to be subjected to these hazards than their counterparts working in other areas. objectives: this review aims to assess the current research literature regarding these hazards with a view to informing policy makers and practitioners about the risks of exposure and to offering evidence-based recommendations on how to eliminate or reduce such risks. methods: using the preferred reporting items for systematic reviews and meta-analyses review strategy, we conducted a systematic review of studies related to occupational health and safety conducted between january 2000 and january 2019 using medline (ovid), pubmed, pmc, toxline, cinahl, plos one, and access pharmacy databases. results: the search identified 490 studies addressing occupational health and safety hazards; of these, 30 articles were included in this systematic review. these articles reported a variety of exposures faced by healthcare professionals. this review also revealed a number of strategies that can be adopted to control, eliminate, or reduce hazards to healthcare professionals in taiwan. conclusion: hospitals and healthcare facilities have many unique occupational health and safety hazards that can potentially affect the health and performance of healthcare professionals.
the impact of such hazards on healthcare professionals poses a serious public health issue in taiwan; therefore, controlling, eliminating, or reducing exposure can contribute to a stronger healthcare workforce with great potential to improve patient care and the healthcare system in taiwan. eliminating or reducing hazards can best be achieved through engineering measures, administrative policy, and the use of personal protective equipment. implications: this review has research, policy, and practice implications and provides future students and researchers with information on systematic review methodologies based on the preferred reporting items for systematic reviews and meta-analyses strategy. it also identifies occupational health and safety risks and provides insights and strategies to address them. according to the world health organization (who), 1 an estimated 59 million people work in healthcare facilities globally, accounting for roughly 12% of the working population. the who 2 also reports that all healthcare workers, including healthcare professionals, are exposed to occupational hazards. the international labour organization (ilo) 3 reported that millions of healthcare workers suffer from work-related diseases and accidents, and many succumb to occupational hazards. scholars and practitioners in the fields of healthcare and occupational health and safety (ohs) are striving to raise awareness of the risk factors and the importance of workplace health and safety among this population. 1, 3, 4 schulte et al. 5 defined an occupational hazard as the short-term and long-term dangers or risks associated with unhealthy workplace environments. tullar et al. 6 and joseph and joseph 7 stated that the healthcare workers at greatest risk are doctors, healthcare professionals, nurses, laboratory technicians, and medical waste handlers.
occupational hazards pose health and safety risks and have a negative impact on the economy, accounting for roughly a 4% loss in global annual gross domestic product (i.e. $2.8 trillion annually). 3 the who, 2 ilo, 3 and nelson et al. 8 noted a lack of universally applicable data on the impact of occupational hazards. ohs hazards, and their negative impacts on the health and well-being of healthcare professionals, are an issue of growing concern in the asia and pacific region, particularly in taiwan; however, research in this area has been somewhat limited. according to the taiwanese ministry of health and welfare (mohw), 9 182,019 health and medical personnel work at health care organizations in taiwan, including 33,516 healthcare professionals and 15,016 pharmacist assistants. these healthcare professionals serve a taiwanese population of 23,590,744 in 22,384 medical care institutions (490 hospitals and 21,894 clinics). 10 of the 490 hospitals, 81 are public and 409 are privately owned; of the 21,894 clinics, 440 are public and 21,454 are privately owned. 10 taiwanese healthcare professionals face a variety of ohs hazards, which increase the incidence of work-related disease, the country's burden of disease, the total number of accidents, the incidence of job-related health problems, and the number of cases involving incapacitation or disablement. 9 this study reviewed previous work on ohs hazards, as well as their risk factors and control strategies, with a focus on healthcare professionals in taiwan. cochrane 11 identified eight steps of a systematic review, which are adopted in this study. this study employed the preferred reporting items for systematic reviews and meta-analyses (prisma) protocol to organize the flow of information through the various steps of the review.
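as a quick arithmetic check on the ilo figures quoted above, a 4% loss equal to $2.8 trillion per year implies a global annual gdp of about $70 trillion; a minimal sketch:

```python
# Sanity-check the quoted ILO figures: a 4% loss of global annual GDP
# is stated to equal roughly $2.8 trillion per year.
loss_fraction = 0.04
stated_loss_usd = 2.8e12  # $2.8 trillion

# Global annual GDP implied by those two numbers together
implied_gdp = stated_loss_usd / loss_fraction
print(f"implied global GDP: ${implied_gdp / 1e12:.0f} trillion")  # → $70 trillion
```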
we used the following key words in our literature search: occupational health and safety, risk factors, healthcare professionals, control strategies, and taiwan. to ensure specificity and exclude irrelevant studies, we employed boolean logic (and, or, not) when combining terms into search strings. 12 the operator and was used to reduce the search yield for two key terms (e.g. "healthcare professionals (p) and occupational health and safety"). the operator or was used to increase the search yield (e.g. "healthcare professionals and occupational health and safety or risk factors"); note that in this example, the two search terms are synonyms. the operator "not" was used to exclude specific terms or term combinations. 13 this search obtained a large number of initial articles (n = 490); however, the application of inclusion and exclusion criteria considerably reduced the number of articles for inclusion in the review (n = 30). the 30 articles focused on ohs, occupational hazards, and healthcare professionals in taiwan. figure 1 presents a flow diagram depicting the application of eligibility criteria, the process of identification and screening, and the reasons for inclusion and exclusion. in documenting and assessing individual publications, we collected key information from the relevant studies to populate an evidence table (see appendix c) and conducted a critical appraisal of the included studies. 12 the study population included adult pharmacy workers (male and female). data were extracted only from studies whose samples were deemed adequate, as justified by the authors of those studies. a critical appraisal of all studies was performed to assess their quality in terms of validity and reliability, based on performance bias, information bias, selection bias, and detection bias. cochrane 11 and khan et al. 16 reported that biases tend to exaggerate or underestimate the "true" outcome of exposure to an occupational hazard.
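the boolean combinations described above can be sketched as small helpers that assemble search strings; the helper names and the excluded term are illustrative assumptions, not part of any cited database api:

```python
def and_terms(*terms):
    """Narrow the search: all terms must match."""
    return " AND ".join(f'"{t}"' for t in terms)

def or_terms(*terms):
    """Broaden the search: any term may match (useful for synonyms)."""
    return " OR ".join(f'"{t}"' for t in terms)

def not_term(query, excluded):
    """Exclude a specific term from an existing query."""
    return f'{query} NOT "{excluded}"'

# Queries mirroring the combinations described in the text
q_and = and_terms("healthcare professionals", "occupational health and safety")
q_or = f'{q_and} OR "risk factors"'
q_not = not_term(q_and, "veterinary")  # "veterinary" is an invented example term
print(q_and)  # → "healthcare professionals" AND "occupational health and safety"
```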
our ultimate objective was to compare (without any form of bias) groups that were exposed to occupational hazards with those that were not, in terms of risk factors and outcomes. 16 for the sake of validity and reliability, all of the studies selected for inclusion were prospective in nature and included data pertaining to exposure and outcomes, while controlling for confounding factors. we also looked for studies with high internal reliability (consistency across items within a test) and high external reliability (consistency in agreement between users/raters). 12 in our final analysis, we considered whether the research had been conducted in an appropriate manner (internal validity). 13 we also considered the generalizability of the results, that is, whether the results were pertinent to other situations (external validity). data synthesis: the final step involved the synthesis of evidence from the included studies; that is, the evidence was organized into homogeneous categories under which the results were summarized. the evidence was also graded (i.e. assessed in terms of quality) and integrated (i.e. weighted across categories to address the multidisciplinary nature of ohs research). 12 in this review, the synthesis, grading, integration, interpretation, and summary of the evidence were presented in narrative form, owing to difficulties in textual and statistical pooling. after completing our systematic review, we employed the prisma reporting scheme, which is endorsed for ohs studies by hempel et al. 12 briefly, the prisma structure is laid out in the following format: topic, summary/abstract, introduction, methods, results, conclusion, and recommendations. 12 a meta-analysis was not conducted. the ilo categorizes the ohs hazards that affect healthcare professionals as biological, chemical, physical, ergonomic, and psychosocial.
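the screening step that reduced 490 initial records to 30 included articles can be illustrated as a filter over per-study metadata; the field names and criteria below are simplified assumptions drawn from the stated eligibility window, not the authors' actual screening instrument:

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    year: int
    peer_reviewed: bool
    taiwan_focus: bool

def include(study: Study) -> bool:
    """Illustrative inclusion criteria: the review's date window
    (2000-2019), peer review or authoritative status, and a focus
    on healthcare professionals in Taiwan."""
    return (2000 <= study.year <= 2019
            and study.peer_reviewed
            and study.taiwan_focus)

# Invented example records; only the first passes all three criteria
records = [
    Study("nsi risk in taiwan", 2015, True, True),
    Study("eu steel workers stress", 2012, True, False),
    Study("tb in taiwanese nurses", 1998, True, True),
]
included = [s for s in records if include(s)]
print(len(included))  # → 1
```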
17 from the 30 studies in this review, we identified the ohs hazards, injuries, and diseases affecting healthcare professionals working in hospitals and healthcare facilities. this section presents the biological hazards identified in the review as the most commonly encountered in hospitals and healthcare facilities in taiwan. according to the who, the managers and administrators of hospital and healthcare facilities, in our case those in taiwan, should carefully assess the potential for exposure to biohazards and put effective biohazard control plans in place. the following summarizes the identified biological hazards, their risk factors, and control strategies (table 1):

- hazards and risk factors: infection from human immunodeficiency virus (hiv), hepatitis b virus (hbv), and hepatitis c virus (hcv), 14 through needle-stick injuries (nsi) and accidents with other sharp objects. occupational exposure resulting in hiv, hbv surface antigen-positive, or hcv transmission is largely due to inoculation of pathogens into cutaneous abrasions, lesions, scratches, or burns, as well as mucocutaneous exposure involving inoculation or accidental splashes onto non-intact mucosal surfaces of the nose, mucous membranes, mouth, or eyes. 21-24
- engineering controls: immunization and vaccines; 18 biological safety cabinets, needleless systems or safety-engineered needles, suitable ventilation, and an appropriate medical waste management system. 15
- administrative controls: written and documented infection control plans; decontamination procedures; enforcement of these systems; training of hospital staff in the implementation of occupational health and safety measures; 20 immunization programs; detection and follow-up of infections; periodic screening; codes of practice; staff orientation; and designing all work systems with the aim of minimizing the risk of exposure.
- personal protective equipment (ppe): devices for the protection of the eyes (e.g. face shields, goggles), respiratory system (e.g. surgical masks), and skin (e.g. latex gloves, protective aprons, gowns), 20 selected on the basis of risk assessments and careful training.

the review also established some of the most commonly faced chemical hazards present in hospitals and healthcare facilities, as well as the documented control strategies, which are summarized in table 2. physical hazards, which are defined as environmental risk factors that can harm the body without contact, were found to account for a substantial proportion of risks among healthcare professionals in taiwan. 4, 42-44 the physical hazards, risk factors, and control strategies are summarized in table 3. the review established that healthcare professionals are exposed to musculoskeletal disorders and injuries, such as low back pain, due to the nature of their work, such as lifting patients. 44 table 4 summarizes the risk factors and control strategies for this hazard. psychosocial hazards have attracted considerable attention in the research community, as well as among policy makers and practitioners in healthcare. 53-55 this study found that in taiwan, psychosocial hazards have prompted a larger number of studies than physical, chemical, and biological hazards combined. the who 56 reported that psychosocial hazards are closely linked to work-related stress, workplace violence (e.g. violent patients), and other workplace stressors. table 5 provides a summary of the risk factors and control strategies of psychosocial hazards. this review provides detailed information regarding the ohs hazards that affect healthcare professionals working in hospitals and healthcare facilities in taiwan.
the review summarizes the risk factors for these hazards, as well as the strategies to control, eliminate, or reduce them. from the reviewed studies, it was clear that ohs hazards can result in a range of injuries, illnesses, and other harms. a wide range of ohs hazards were identified, including biological hazards, 14 chemical hazards, 65 ergonomic hazards, psychosocial hazards, and physical hazards. 59, 62 the review has shown that healthcare professionals are at a significantly elevated risk of occupation-related hazards. 56 injuries and illness prevent healthcare workers from discharging their duties effectively, which can have a negative impact on the overall healthcare system in taiwan. physical hazards, such as falls, noise, and mechanical hazards, can have long-term physiological effects, such as hearing impairment; there is therefore a need to introduce control strategies such as engineering noise control measures. good ppe should be provided so that healthcare professionals can protect themselves from physical harm in the workplace. according to our findings, it is evident that healthcare professionals are exposed to chemical hazards, some of which can be carcinogenic. there is also a risk of occupational dermatitis. it is therefore important that healthcare professionals be screened for cancer on a regular basis. workers can also be trained in skin care and provided with safety equipment and other useful interventions, such as sunscreen cream. such efforts can help in early detection, prevention, and intervention. as part of their routine work, healthcare professionals can be affected by biological hazards through contact with patients and visitors. the review demonstrates how important it is to manage bloodborne and airborne biological pathogens in the healthcare workforce.
20 there should be administrative guidance and training on how healthcare professionals can deal with biological hazards, and these professionals should be encouraged to report work-related incidents as soon as they occur, or are suspected to have occurred, to aid early intervention. ergonomic hazards among healthcare professionals tend to arise from lifting patients and hospital equipment. this requires careful prevention, assessment, and intervention, as the impact of ergonomic hazards on the musculoskeletal system of affected healthcare professionals cannot be ignored. 34 hospital administrators need to alleviate frequent job pressures by providing the necessary safe and ergonomic equipment and hiring an adequate number of personnel. professionals can work in properly planned shifts and teams to reduce fatigue; they should be trained in the correct techniques for lifting patients and equipment; and policies should be enforced to ensure compliance. the findings on psychosocial hazards show that healthcare professionals can be affected by mental and psychological hazards, such as stress, and it is evident that healthcare professionals who suffer from stress are likely to suffer from fatigue and exhaustion. healthcare professionals are trained to show less emotion, and thus find it difficult to seek medical intervention. there is a need for counseling and stress management for healthcare professionals, and workers should be trained to manage stress. the workplace should be designed in such a manner as to prevent intrusion, harassment, and violence against healthcare professionals. overall, hospital administrations and healthcare professionals should focus on evidence-based strategies (engineering, administrative, and ppe) to manage ohs hazards. the increasing prevalence of occupational hazards and work-related diseases among healthcare professionals in taiwan is a concern.
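the control categories named above are conventionally ordered by the "hierarchy of controls" discussed later in the review (most to least effective: elimination, substitution, engineering, administrative, ppe). a minimal sketch encoding that ordering so proposed interventions can be compared; the function name is illustrative, not from any cited source:

```python
from enum import IntEnum

class Control(IntEnum):
    """Hierarchy of controls: lower value = closer to the hazard
    source, hence generally more effective (per the ordering
    discussed in the review)."""
    ELIMINATION = 1
    SUBSTITUTION = 2
    ENGINEERING = 3
    ADMINISTRATIVE = 4
    PPE = 5

def more_effective(a: Control, b: Control) -> Control:
    """Prefer the intervention applied closer to the hazard source."""
    return a if a < b else b

# PPE is the last line of defense; engineering controls rank higher
print(more_effective(Control.PPE, Control.ENGINEERING).name)  # → ENGINEERING
```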
66 risk factors include exposure to hazards and a failure to follow hierarchical control strategies. health care workers and administrators must work together to eliminate or minimize these hazards through the introduction of, and strict adherence to, engineering, administrative, and personal protective equipment (ppe) controls.

the chemical hazards summarized in table 2 can be outlined as follows:

- routes of exposure: the main routes of exposure to chemical hazards include ingestion, injection, skin contact or absorption, and inhalation. 34, 35 contamination and exposure are both affected by the duration and frequency of exposure, the quantity of drugs undergoing preparation, and the use of ppe. 23
- health effects: adverse effects can be attributed to compounds deemed carcinogenic (cancer causing), mutagenic (promoting mutations), teratogenic (causing birth defects), or toxic to various organs. 36 alcohol hand sanitizers commonly used by healthcare professionals are flammable and harmful to the skin. there have also been reports on the dangers of detergents used to clean surfaces, which can cause irritation and promote allergies of the skin, eyes, and respiratory tract, 35, 37 and some detergents can react with other products commonly stocked in healthcare facilities to produce toxic vapors. 31, 35, 38 low-concentration disinfectants, such as quaternary ammonium salts, alcohols, hydrogen peroxide, iodophors, and phenolic and chlorine compounds, can have toxic effects and irritate the skin, eyes, and respiratory system. 23 the inhalation of powdered medications and vapors exposes healthcare professionals to the risk of poisoning and allergic reactions. 39, 40
- engineering controls: isolating and segregating hospital or healthcare facility areas and equipment; providing exhaust hoods for local ventilation when compounding and mixing drugs; providing biological safety cabinets to safeguard chemicals; and providing containers to prevent needle-stick injuries. flammable chemicals should be stored away from sources of ignition, and dangerous chemicals should be substituted with less harmful ones. 36

the physical hazards summarized in table 3 can be outlined as follows:

- cuts, burns, hearing loss, motion sickness, and muscle cramps. 47 engineering controls: minimize the use of sharp tools, use machine guarding, use quality sockets, and close water faucets when not in use. 48 administrative controls: promote and practice safe work procedures, such as when using electrical equipment (e.g. cords); 18 educating workers about cleaning equipment and cleaning up broken glass is also recommended. 49 ppe: appropriate footwear, gloves, eye and nose protection, and protective clothing. 18
- tripping, slipping, cuts, and falls: risk factors include poor housekeeping, poor layout, and slippery tiled floors, 50 as well as open power cables, live wires, broken glassware, lancets, knives, scissors, and scalpels. 47 effects include bruised skin, cuts, broken bones, and muscular injuries. 50 engineering controls: proper lighting, the construction of safe stairwells, and regular building maintenance (e.g. floors and workspaces). 44 ppe: appropriate footwear, gloves, eye and nose protection, and protective clothing. 18
- exposure to microwave radiation, and ionizing and non-ionizing radiation: 50 risks are imposed by radiation from x-ray machines and other diagnostic imaging systems, and by the radionuclides used in nuclear medicine and radiation therapy; workers also face risks from non-ionizing radiation, lasers, ultraviolet rays, and magnetic resonance imaging. 51 the risk increases when using heat sealers and poorly maintained or insulated radio-diagnostic equipment. 48 effects include tissue damage, risk of cancer, and abnormal cell mutation (e.g. abnormal leukocytes). 48, 51 engineering controls: reducing the time of exposure, increasing the distance from x-ray machines, and increasing the amount of shielding. 20 ppe: appropriate footwear, gloves, eye and nose protection, and protective clothing. 18

perceptions of workers can greatly affect their implementation of risk-mitigation strategies. 20 selection bias is a concern here: although we selected published and peer-reviewed articles, as well as unpublished but authoritative gray literature, other unverifiable but potentially valuable reports were no doubt excluded. 67 our reliance on observational studies (to the exclusion of intervention studies) and the heterogeneity of the included articles (in terms of methodology) posed a risk of bias and limited standardization. 68 this study found relatively little research focusing on hospital workers in taiwan; thus, further empirical studies focusing on this group of caregivers are required and recommended. 68 researchers should focus on the health status, work performance, and workplace retention of healthcare professionals, including the prevalence of morbidity and mortality. 67 the insights in this review provide a valuable reference for policy makers in establishing goals to deal with workplace hazards. 68 hazard control strategies must be based on objective assessments of existing risks and the most appropriate measures to deal with them. 20 this systematic review confirmed a positive correlation between ohs hazards (biological, physical, chemical, and psychosocial) and work-related injuries, occupational health problems, and work-related diseases. the burden of disease and the attributable fraction of work-related diseases and occupational injuries have been shown to cause considerable social and economic losses for employees, families, companies, countries, and societies at large. 8 generally, the burden of disease is assessed using disability-adjusted life years. the burden of disease is measured as the impact of morbidity and premature mortality within a given area.
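the burden-of-disease measure mentioned above (disability-adjusted life years) is conventionally computed as years of life lost (yll) plus years lived with disability (yld); a minimal, undiscounted sketch with invented illustrative inputs:

```python
def daly(deaths, years_lost_per_death, cases, disability_weight, duration_years):
    """Undiscounted DALY = YLL + YLD, where
    YLL = deaths x standard years of life lost per death, and
    YLD = incident cases x disability weight x average duration."""
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * duration_years
    return yll + yld

# Invented numbers for a hypothetical occupational disease:
# 10 deaths losing 20 years each, plus 500 cases at weight 0.2 for 5 years
burden = daly(deaths=10, years_lost_per_death=20,
              cases=500, disability_weight=0.2, duration_years=5)
print(burden)  # → 700.0 (200 YLL + 500 YLD)
```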
2, 69 scholars and professionals agree that reducing, substituting, or eliminating ohs hazards in healthcare facilities is important for healthcare workers, helps to ensure patient safety, and enhances the overall quality of healthcare. 7 many researchers have used the "hierarchy of controls," which is based on the assumption that interventions are most effective when implemented at the source and least effective when applied at the worker level. 20 gorman et al. listed control interventions from most to least effective as follows: elimination, substitution, engineering, administrative, and ppe. researchers have also emphasized the importance of eliminating hazards or substituting hazardous materials with less hazardous ones. 20, 70 taimela et al. 71 argued that administrative controls, such as training and ensuring adequate staffing, are crucial to eliminating or minimizing occupational hazards. engineering controls, such as redesigning work spaces, ensuring adequate ventilation, and introducing automated systems for repetitive tasks, were emphasized by liberati et al. 72 ppe, such as gloves, clothing, and eye wear, is considered the least effective control and has the most profound consequences in the event of failure, because failure exposes the individual directly to the hazard. 20 nonetheless, many researchers and professionals agree that all such controls should be applied collectively in order to minimize the effects of hazards. 20, 70-72

the ergonomic hazards summarized in table 4 can be outlined as follows:

- risk factors: musculoskeletal disorders (msds) due to repetitive actions, less-than-optimal computer equipment, and poorly engineered workspaces in which healthcare professionals are forced to overreach and/or sit while maintaining an awkward posture; 43 healthcare professionals are also tasked with lifting and transferring equipment, tools, and instruments. physical fitness level and demographic background were shown to affect the risk of developing msds. 52 workplace and job-related demands, poor administrative and team support, and a negative attitude toward job tasks were all strongly correlated with msds. 47
- health effects: ergonomic hazards can lead to chronic pain in the arms, back, or neck. frequently, they lead to msds, such as carpal tunnel syndrome, which tends to reduce work performance and productivity and can have a serious detrimental effect on health-related quality of life; 50 also strained movement due to localized pain, stiffness, sleep disturbances, twitching muscles, burning sensations, and feelings of overworked muscles. 47
- engineering controls: redesign workstations with appropriate chairs and computer equipment; 43 workstations should be configurable for medical personnel with a wide range of body shapes and sizes. it is also recommended that lifting and handling equipment, such as trolleys, be installed in areas requiring heavy lifting, and that automation be adopted when resources and practicability allow. 46

the psychosocial hazards summarized in table 5 can be outlined as follows:

- risk factors: 59 healthcare professionals face violence during robberies and the theft of addictive prescription pain killers, such as oxycontin and vicodin. 54 we also identified organizational culture and structure, interpersonal relationships at work, job content and satisfaction, home-work balance, and the changing nature of work as important psychosocial risk factors among healthcare professionals. 54, 57, 60
- health effects: work-related stressors have a detrimental impact on workers' health and safety in terms of mental disorders, musculoskeletal disorders, chronic degenerative disorders, metabolic syndrome, diabetes, and cardiovascular diseases. 61 psychological hazards at work were associated with heart disease, depression, physical health problems, and psychological strain. 54 low back pain was the most common work-related ailment among healthcare workers in taiwan. 53 employees who experience job insecurity and/or workplace injustice were more likely to suffer from burnout. 54 job demands and the level of control experienced by the worker were significantly associated with fatigue; exposure to workplace violence affects psychological stress, sleep quality, and subjective health status among healthcare professionals. 59
- engineering controls: creation of isolation areas for agitated patients and design of office layouts that prevent healthcare professionals from coming into direct contact with customers/patients or being trapped; 57 spaces should be well lit and separated so that client-care provider contact is controlled and access is allowed only when absolutely necessary; proper working communication devices and video surveillance, as well as panic buttons and alarm systems. 62
- administrative controls: management policies should make unequivocal declarations of non-violence/anti-abuse. 63 management can encourage workers to participate in the design of forward-rotating (day-evening-night) shifts and work schedules that impose gradual shift changes and ease adaptation to non-regular work shifts, to ensure that all concerned get adequate sleep. 61 educate healthcare professionals about the risks associated with shift work. 20 well-trained security personnel should be hired to deal with unruly customers. 59 training in conflict management and problem-solving could also help workers to prevent or de-escalate violence. 60, 64 nametags should be used by employees, and reporting and response procedures should be enhanced.

declarations: the manuscript has not previously been published and is not under consideration by another journal. the author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. ethical approval was not sought for this study because it is a systematic review and all the literature reviewed has been published. the author(s) received no financial support for the research, authorship, and/or publication of this article.
lin ming hung https://orcid.org/0000-0002-7798-826x supplemental material for this article is available online.
references:
occupational health: health workers
occupational health: data and statistics
international labour standards on occupational safety and health
workplace safety and health: healthcare workers
interaction of occupational and personal risk factors in workforce health and safety
occupational safety and health interventions to reduce musculoskeletal symptoms in the health care sector
the health of the healthcare workers
the global burden of selected occupational diseases and injury risks: methodology and summary
national development council (ndc)
what is a systematic review
systematic reviews for occupational safety and health questions: resources for evidence synthesis
a search strategy for occupational health intervention studies
risk and management of blood-borne infections in health care workers
the occupational safety of health professionals working at community and family health centers
five steps to conducting a systematic review
sleep disorder in taiwanese nurses: a random sample survey
safety culture in a pharmacy setting using a pharmacy survey on patient safety culture: a cross-sectional study in china
science of safety topic coverage in experiential education in us and taiwan colleges and schools of pharmacy
controlling health hazards to hospital workers
perception and prevalence of work-related health hazards among health care workers in public health facilities in southern india
the prevalence of occupational health-related problems in dentistry: a review of the literature
workplace safety and health improvements through a labor/management training and collaboration
tuberculosis in healthcare workers: a matched cohort study in taiwan
health care visits as a risk factor for tuberculosis in taiwan: a population-based case-control study
estimation of the risk of bloodborne pathogens to health care workers after a needlestick injury in taiwan
epidemiological profile of tuberculosis cases reported among health care workers at the university hospital in vitoria, brazil
risk of tuberculosis among healthcare workers in an intermediate-burden country: a nationwide population study
risk of tuberculosis infection and disease associated with work in health care settings
sars in healthcare facilities
reproductive health risks associated with occupational exposures to antineoplastic drugs in health care settings: a review of the evidence
overview of emerging contaminants and associated human health effects
guidelines for safe handling of hazardous drugs: a systematic review
critical care medicine in taiwan from 1997 to 2013 under national health insurance
niosh health and safety practices survey of healthcare workers: training and awareness of employer safety procedures
potential risks of pharmacy compounding
development of taiwan's strategies for regulating nanotechnology-based pharmaceuticals harmonized with international considerations
an overview of the healthcare system in taiwan
chemical and biological work-related risks across occupations in europe: a review
n-hexane intoxication in a chinese medicine pharmaceutical plant: a case report
occupational neurotoxic diseases in taiwan
the impact of physical and ergonomic hazards on poultry abattoir processing workers: a review
musculoskeletal disorders and ergonomic hazards among iranian physicians
occupational safety and related impacts on health and the environment
prevalence of workplace violent episodes experienced by nurses in acute psychiatric settings
occupational hazards in the thai healthcare sector
prevalence of work related musculoskeletal disorders (wmsds) and ergonomic risk assessment among readymade garment workers of bangladesh: a cross sectional study
the study of the effects of ionizing and non-ionizing radiations on birth weight of newborns to exposed mothers
healthcare worker safety: a vital component of surgical capacity development in low-resource settings
comparisons of musculoskeletal disorders among ten different medical professions in taiwan: a nationwide, population-based study
occupational exposure to ionizing and non-ionizing radiation and risk of glioma
effect of systematic ergonomic hazard identification and control implementation on musculoskeletal disorder and injury risk
the impact of occupational psychological hazards and metabolic syndrome on the 8-year risk of cardiovascular diseases-a longitudinal study
employment insecurity, workplace justice and employees' burnout in taiwanese employees: a validation study
risks of treated anxiety, depression, and insomnia among nurses: a nationwide longitudinal cohort study
occupational health: occupational and work-related diseases
tackling psychosocial hazards at work
violence against health workers in family medicine centers
impact of workplace violence and compassionate behaviour in hospitals on stress, sleep quality and subjective health status among chinese nurses: a cross-sectional survey
the association between job-related psychosocial factors and prolonged fatigue among industrial employees in taiwan
psychosocial factors and workers' health and safety
psychosocial hazard analysis in a heterogeneous workforce: determinants of work stress in blue- and white-collar workers of the european steel industry
an evaluation of the policy context on psychosocial risks and mental health in the workplace in the european union: achievements, challenges, and the future
a national study on nurses' exposure to occupational violence in lebanon: prevalence, consequences and associated factors
review of the literature on determinants of chemical hazard information recall among workers and consumers
prevalence and determinants of workplace violence of health care workers in a psychiatric hospital in taiwan
a brief overview of systematic reviews and meta-analyses
maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources
the global burden of occupational disease
hazard identification, risk assessment, and control measures as an effective tool of occupational health assessment of hazardous process in an iron ore pelletizing industry
an occupational health intervention programme for workers at high risk for sickness absence. cost effectiveness analysis based on a randomised controlled trial
learning from high risk industries may not be straightforward: a qualitative study of the hierarchy of risk controls approach in healthcare

key: cord-004060-nxw5k9y1 authors: zhang, yewu; wang, xiaofeng; li, yanfei; ma, jiaqi title: spatiotemporal analysis of influenza in china, 2005–2018 date: 2019-12-23 journal: sci rep doi: 10.1038/s41598-019-56104-8 sha: doc_id: 4060 cord_uid: nxw5k9y1 influenza is a major cause of morbidity and mortality worldwide, as well as in china. knowledge of the spatial and temporal characteristics of influenza is important in evaluating and developing disease control programs. this study aims to describe an accurate spatiotemporal pattern of influenza at the prefecture level and explore the risk factors associated with influenza incidence risk in mainland china from 2005 to 2018. the incidence data of influenza were obtained from the chinese notifiable infectious disease reporting system (cnidrs). the besag york mollié (bym) model was extended to include temporal and space-time interaction terms. the parameters for this extended bayesian spatiotemporal model were estimated through integrated nested laplace approximations (inla) using the package r-inla in r. a total of 702,226 influenza cases were reported in mainland china in cnidrs from 2005–2018. the yearly reported incidence rate of influenza increased 15.6 times over the study period, from 3.51 in 2005 to 55.09 in 2018 per 100,000 population.
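as a quick check on the abstract's arithmetic, the reported ~15.6-fold rise follows directly from the two quoted rates (a trivial sketch; the rates are taken from the abstract above):

```python
# yearly reported influenza incidence, per 100,000 population (from the abstract)
rate_2005 = 3.51
rate_2018 = 55.09

fold_increase = rate_2018 / rate_2005
print(f"fold increase: {fold_increase:.2f}")  # ~15.7 (quoted as 15.6 times, truncated)
```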
the temporal term in the spatiotemporal model showed that much of the increase occurred during the last 3 years of the study period. the risk factor analysis showed that the decreased number of influenza vaccines for sale, the new update of the influenza surveillance protocol, the increase in the rate of influenza a (h1n1)pdm09 among all processed specimens from influenza-like illness (ili) patients, and the decrease in the latitude and longitude of geographic location were associated with an increase in the influenza incidence risk. after adjusting for fixed covariate effects and time random effects, the map of the spatial structured term shows that high-risk areas clustered in the central part of china and the lowest-risk areas in the east and west. large space-time variations in influenza have been found since 2009. in conclusion, an increasing trend of influenza was observed from 2005 to 2018. the insufficient flu vaccine supply, the newly emerging influenza a (h1n1)pdm09 and the expansion of influenza surveillance efforts might be the major causes of the dramatic changes in outbreak and spatio-temporal epidemic patterns. clusters of prefectures with high relative risks of influenza were identified in the central part of china. future research with more risk factors at both national and local levels is necessary to explain the changing spatiotemporal patterns of influenza in china. influenza is associated with notable mortality and morbidity worldwide, as well as in china 1-3 . the behaviour of major influenza epidemics and pandemics is complicated by dramatic genetic changes, subtype circulation, wave patterning and virus replacement 4 . influenza vaccination is the most effective means to prevent infection, severe disease and mortality 5 . the world health assembly recommends vaccinating 75% of key risk groups against influenza 6 .
although seasonal influenza vaccination was introduced in 1998, influenza vaccination is not yet included in the national immunization program (nip) in china 7 . the average national vaccination coverage was reported to be just 1.5-2.2% between 2004 and 2014 7, 8 . the overall number of flu vaccines approved for sale by china's national institute for food and drug control (nifdc) has decreased in recent years 9, 10 . the low coverage rate and the reduction in flu vaccine supply have raised much concern about an increased risk of influenza incidence in china. although new emerging influenza virus types and subtypes, such as avian influenza a h5n1 [11] [12] [13] [14] , influenza a (h1n1)pdm09 [15] [16] [17] , and influenza a h7n9 18, 19 , have been reported continuously in china, the disease burden of influenza has been dominated by a(h3n2), a(h1n1)pdm2009 influenza viruses, pre-pandemic a(h1n1) or influenza b in recent years, which account for the majority of cases 20 . the influenza a(h1n1)pdm2009 virus was first introduced to mainland china on may 9, 2009 21 , and has been one of the dominant viruses in the seasonal influenza epidemics since then 20 . the effect of newly emerging influenza a(h1n1)pdm2009 viruses on the geographic patterns and temporal trends of influenza across the whole country is still unknown. covariates associated with the reported incidence cases of influenza. the results are presented in table 2. the crude odds ratios (ors) and adjusted ors in both the univariate poisson models and the multivariate adjusted poisson model are statistically significant.
after adjusting for other covariates, a spatially unstructured random effect term (ν i ), a spatially structured conditional autoregression term (υ i ), a first-order random walk-correlated time variable (γ 1j ), and an interaction term for time and place (δ ij ) in the multivariate adjusted spatiotemporal model, the flu vaccines (per million doses), flu surveillance protocols, rate of influenza a (h1n1)pdm09, latitude and longitude still remain statistically significant. holding all other covariates at zero and adjusting for spatiotemporal variation, every one million increase in the number of influenza vaccines for sale approved by the china food and drug administration was associated with a 12.7% decrease in the influenza incidence risk (95% ci = 0.825-0.923). similarly, the new update of the influenza surveillance protocol in 2017 was related to a 65.6% increase in the influenza incidence risk (95% ci = 1.097-2.496) compared to the protocol used in 2005 to 2008. for every 10% increase in the rate of influenza a (h1n1)pdm09 among all processed specimens from ili patients, there was a 19.5% increase in the influenza incidence risk (95% ci = 1.005-1.413). every one degree increase in the latitude and longitude was associated with a 1.5% (95% ci = 0.980-0.991) and 0.2% (95% ci = 0.997-0.999) decrease in the influenza incidence risk, respectively. the spatial and temporal effects in spatiotemporal models with covariates. the spatial effects. the map of the spatially structured conditional autoregression term demonstrated areas of spatial patterning and similarity among prefectures. the spatially structured relative risk and posterior probabilities of spatially structured relative risk greater than 1.0 are presented in figs. 3 and 4, respectively. table 1. deviance information criterion (dic) for five spatiotemporal models.
abbreviations: d, posterior mean of the deviance; pd, the number of effective parameters; dic, the deviance information criterion, a measure of the trade-off between model fit and complexity. note: model terms used in the models include an intercept (α); a spatially unstructured random effect term (ν i ); a spatially structured conditional autoregression term (υ i ); uncorrelated time (γ j ); a first-order random walk-correlated time variable (γ 1j ); and an interaction term for time and place (δ ij ). θ ij represents the relative risk of area i at time j. * model 1, convolution + uncorrelated time (time iid), e.g., log(θ ij ) = α + ν i + υ i + γ j . table 2. risk analysis of covariates associated with reported cases of influenza. abbreviations: or, odds ratio; ci, confidence interval. * univariate poisson analysis models. ** multivariate adjusted poisson analysis model, which included all variables in the univariate analysis models. † multivariate adjusted spatiotemporal models, which included all variables in the univariate analysis models; an intercept (α); a spatially unstructured random effect term (ν i ); a spatially structured conditional autoregression term (υ i ); a first-order random walk-correlated time variable (γ 1j ); and an interaction term for time and place (δ ij ). ‡ total number of flu vaccines approved for sale by china's national institute for food and drug control (nifdc), rescaled to one million doses as one unit. data were collected from nifdc. # the convolutional spatial risk term, which includes both the spatially structured conditional autoregression term (υ i ) and the spatially unstructured random effect term (ν i ) at the prefecture level, identified areas at increased risk of influenza throughout the 14-year study period (fig. 5). posterior probabilities for an area's spatial risk estimate exceeding 1.0 are presented in fig. 6.
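the dic values compared in table 1 are the posterior mean deviance plus the effective number of parameters (pd). a minimal, library-free python sketch of that computation for a poisson likelihood (the function names are my own, not the paper's; in practice r-inla reports dic directly):

```python
import math

def poisson_log_lik(y, mu):
    # log-likelihood of observed counts y under poisson means mu
    return sum(-m + k * math.log(m) - math.lgamma(k + 1) for k, m in zip(y, mu))

def dic(y, mu_samples):
    """dic = dbar + pd, where dbar is the posterior mean deviance and
    pd = dbar - d(posterior mean fit) is the effective number of parameters.
    mu_samples: posterior draws of the fitted poisson means, one list per draw."""
    deviances = [-2.0 * poisson_log_lik(y, mu) for mu in mu_samples]
    d_bar = sum(deviances) / len(deviances)
    mu_bar = [sum(col) / len(col) for col in zip(*mu_samples)]  # posterior mean fit
    p_d = d_bar - (-2.0 * poisson_log_lik(y, mu_bar))
    return d_bar + p_d, p_d
```

a smaller dic indicates a better trade-off between fit and complexity, which is how the final model in table 1 was selected.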
the proportion of the total spatial heterogeneity explained by the spatially structured conditional autoregression term was 73.51%. after adjusting for fixed covariate effects and time random effects, both the map of the spatial structured term and the convolutional spatial term show that high-risk areas clustered in the central part of china and the lowest-risk areas in the east, northwest and southwest. the higher-risk prefectures were mostly distributed in guangdong, guangxi, guizhou, hunan, jiangxi, zhejiang, hubei, anhui, henan, hebei, beijing, tianjin, gansu, ningxia, and inner mongolia. the lower-risk areas in the east included some prefectures in the shandong peninsula and the prefectures of heilongjiang, liaoning, and jilin provinces in the northeast. the northwest areas are composed of prefectures in tibet, qinghai and xinjiang, while the southwest areas include chongqing and prefectures in sichuan and yunnan provinces. the temporal trend. the relative risks over the 14-year study period, holding the covariates and spatial risk constant, were calculated by exponentiating the marginal first-order random walk-correlated time term (γ 1j ) in the spatiotemporal models of influenza risk with and without covariates. ** adjusted by convolutional spatial term, space-time interaction term, and covariates. figure 3. map of the spatially structured relative risk (e^(υ i )), spatiotemporal model of influenza incidence risk with covariates, china prefectures, 2005-2018. note: the linear terms in the spatiotemporal model of influenza incidence risk with covariates included all variables in the univariate analysis models; an intercept (α); a spatially unstructured random effect term (ν i ); a spatially structured conditional autoregression term (υ i ); a first-order random walk-correlated time variable (γ 1j ); and an interaction term for time and place (δ ij ). for the spatiotemporal model without covariates, an overall increasing trend was found in the temporal trend term over the 14-year study period. the risk of influenza remained low between 2005 and 2008. a steep increase was observed in 2009. it dropped slightly back to a low level and remained stable in 2010 and 2011. a rapid increase was obvious in the last 3 years (table 3) (fig. 7). for the temporal trend term in the spatiotemporal model with covariates, the relative risks in the years from 2005 to 2016 were not significantly different from those in the spatiotemporal model without covariates. the relative risks in the model with covariates in 2017 and 2018 were significantly lower than those in the model without covariates. the lower boundary of the 95% confidence intervals in the model with covariates showed some levelling off in recent years. the differences between the spatiotemporal models with and without covariates indicated that the recent increases in influenza incidence risks could be partially explained by the fixed covariate effects. space-time interactions. the exceedance probabilities for the yearly space-time interactions are presented for the study period (fig. 8). these identify areas with residual spatial risk greater than 1.0 compared to the prefecture-wide risk after the fixed effects, unstructured, spatially structured, and time random effects are held constant. changing patterns and large variations among the yearly specific spatial distributions are shown in fig. 8. it is interesting that most of the higher-risk areas were western areas of china before 2009, and most of the higher-risk areas were eastern or northern areas of china after 2009.
based on the incidence data of influenza obtained from the chinese notifiable infectious disease reporting system, we used the bayesian spatiotemporal model in this study to assess the space-time patterns of the influenza epidemic at the prefecture level in mainland china from 2005 to 2018 and explored several factors that may be associated with the changing spatial and temporal patterns in the influenza incidence risk. several potential factors may be associated with the rapidly increasing trend of influenza in china. first, an insufficient flu vaccine supply and a low uptake rate might be associated with an increase in influenza incidence. the results of the final spatiotemporal model showed that every million-dose increase in the number of influenza vaccines approved for sale by the china food and drug administration was associated with a 12.7% decrease in the influenza incidence risk (95% ci = 0.825-0.923). the rapidly increased crude rates of influenza from 2016 to 2018 coincided with a large reduction in the numbers of vaccines approved for sale over the same period.
the reductions in vaccine supply were mostly due to vaccine scandals related to improper vaccine storage and production in 2016 and 2018, respectively 9,10,37 . previous studies reported that uptake figures of the influenza vaccine averaged 1.9% nationally and 4.3% among urban elderly aged 60 years and above in 9 cities of china during the 2008-2009 and 2011-2012 influenza seasons, respectively 7, 8, 20 . it is expected that the uptake may be even lower, as people lost their faith in the safety of domestically produced vaccines after the vaccine scandals in china 38 . our results are consistent with the study in italy, which reported an association between vaccination coverage decline and influenza incidence among italian elderly 39 . second, currently circulating influenza strains in humans include influenza a (h1n1)pdm09, influenza a (h3n2) and influenza b viruses (b/victoria and b/yamagata) 5, 40, 41 . influenza a (h1n1)pdm09 has been reported to be the predominant subtype in recent years according to ili surveillance and is more likely to be the major cause of regional and widespread outbreaks 40 . our study showed that for every 10% increase in the rate of influenza a (h1n1)pdm09 among all processed specimens from ili patients, there was a 19.5% increase in the influenza incidence risk (95% ci = 1.005-1.413). shu et al. reported that the predominant subtypes of seasonal influenza a (h1n1) and b/yamagata could circulate from the south to the north of china from 2006 to 2009 34 .
our study also found that every one degree increase in latitude and longitude was associated with a 1.5% (95% ci = 0.980-0.991) and 0.2% (95% ci = 0.997-0.999) decrease in the influenza incidence risk, respectively. this result was consistent with the role of climatic factors in influenza transmission dynamics 20, 42 . third, the greater effort in influenza surveillance and the use of new technologies may account for part of the rise in reported influenza incidence. in recent years, especially after the 2009 pandemic season, influenza surveillance has been expanded worldwide, as recommended by the world health organization (who) [43] [44] [45] 33, 34, 41 . as cnidrs includes all sentinel hospitals, these hospitals are likely to report more cases of influenza to cnidrs. in addition, more hospitals have used electronic health information systems, which may improve both the quantity and quality of data collection and exchange from hospitals to cnidrs [46] [47] [48] [49] . fourth, the reporting on influenza a (h1n1)pdm09, avian influenza a (h7n9), highly pathogenic avian influenza (hpai) h5n1 and avian h6 influenza has increased in recent years 12 . media and public health campaigns against these newly emerging viruses have caused both the government and the public to be more concerned about influenza. the improved public perception of influenza may change people's health-seeking behaviours, especially in the epidemic seasons 52, 53 .
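the percent-change effect estimates quoted in this section (a 12.7% decrease per million vaccine doses, a 19.5% increase per 10-percentage-point rise in the h1n1 rate, and so on) are the standard transformation of log-link regression coefficients, 100·(exp(β·Δ) − 1). a minimal sketch (the function name is my own):

```python
import math

def percent_change(beta, delta=1.0):
    """percent change in incidence risk for a delta-unit increase in a covariate
    with log-link coefficient beta (poisson regression): 100 * (exp(beta*delta) - 1)."""
    return 100.0 * (math.exp(beta * delta) - 1.0)

# a fitted rate ratio of 0.873 per million doses corresponds to beta = log(0.873):
print(round(percent_change(math.log(0.873)), 1))  # -12.7, i.e. a 12.7% decrease
```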
furthermore, expanded coverage of health care insurance in both urban and rural areas of china in recent years may also induce people to use more health services 54, 55 . a rapid increase in the numbers of airline and high-speed railway transports in china has been reported in recent years 56 . these factors make it easier for the influenza virus to spread on a larger scale and in a shorter time across the country 56-58 . the spatial pattern. the bym model includes both a spatial conditional autoregression component and a heterogeneous random effect component. this structure allows us to know how much of the residual disease risk is due to spatially structured variation and how much is unstructured overdispersion 22 . the spatially structured conditional autoregression term demonstrated areas of spatial patterning and similarity among prefectures. the results of spatially structured variation show a distinguished spatial pattern of risk of influenza across prefectures in china. the highest-risk areas clustered in the middle part of china, while the lowest-risk areas were distributed in the east, northwest and southwest. different patterns of influenza between the north and the south of china have been well reported 3, 16, 20, 34, 41, 59 . in china, the line following the qinling mountain range in the west and the huaihe river in the east is often used to split the mainland into the north and the south 34 . in this study, we observed clustering in both the north and the south in the middle part of china. the unique structured spatial patterns may be attributed to shared risk factors among neighbouring areas, such as similarities in climatic zone, the predominant subtype of the virus at the time of epidemics, socioeconomic background or lifestyles. one last important factor should not be ignored: some studies reported that clustering of diseases may be a consequence of spatial heterogeneity in surveillance efforts 60,61 .
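the 73.51% figure reported earlier for the share of spatial heterogeneity captured by the structured term is often estimated empirically as the variance of the structured posterior means relative to the total random-effect variance; conventions differ between papers, so the sketch below (with made-up numbers) is only illustrative:

```python
def structured_fraction(u, v):
    """share of residual spatial variation attributed to the spatially structured
    car term u versus the unstructured iid term v, computed from the empirical
    variances of the posterior means of the two random effects."""
    def var(x):
        m = sum(x) / len(x)
        return sum((xi - m) ** 2 for xi in x) / len(x)
    return var(u) / (var(u) + var(v))

# hypothetical posterior means for five prefectures
u = [0.8, 0.5, -0.2, -0.6, -0.5]     # structured (car) component
v = [0.05, -0.02, 0.01, -0.03, 0.0]  # unstructured (iid) component
print(round(structured_fraction(u, v), 3))  # close to 1: variation is mostly structured
```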
the space-time interaction. the space-time interaction is a random effect term, which is interpreted as the residual effect after the unstructured, spatially structured and time effects are modelled, and represents sporadic short-term outbreaks or clusters. the changes and circulation of virus subtypes may determine the characteristics of the space-time interaction terms. the year 2009 was the critical point according to the results of the spatiotemporal analysis. flunet (www.who.int/flunet), part of the global influenza surveillance and response system (gisrs), distinguishes four types of ili activity: sporadic, local outbreak, regional outbreak and widespread outbreak 62, 63 . since the first case of influenza a (h1n1)pdm09 was reported on may 9, 2009, in mainland china, the a (h1n1)pdm09 virus has been detected in all types of ili activity according to the data from flunet. the yearly ili activities may be partially associated with the changes and similarities in the patterns of the space-time interactions from 2005 to 2018. from the flunet data mentioned above, we found that sporadic ili activities were dominant in 2005, 2006, 2007 and 2008. correspondingly, we found more areas with high relative risk in these 4 years in the space-time term. this implies that the more sporadic the activities are, the larger the variations in the spatiotemporal distribution of the risk of influenza. in contrast, large outbreaks accounted for most ili activities in the years 2009, 2010, 2017 and 2018. few prefectures were observed to have a relative risk greater than 2 or 3 during that period. large outbreaks, especially large regional and widespread outbreaks, may reduce the differences in the incidence risk of influenza among areas and times on a large scale. strengths. this work adds to the existing research on influenza epidemiology in the following ways.
first, the study presents the spatiotemporal distribution of influenza with higher-resolution spatial data than has previously been reported in china, covering the last 14 years, which allows more opportunity for focused investigations and interventions. next, we used the exceedance probabilities instead of the observed risk estimates to identify those areas for which the increased risk was highly unlikely to be due to chance. then, this study also provided a baseline model that can be extended to include social, economic, ecological, and environmental factors, as well as intervention measures, to explore their associations with influenza. finally, the methods in this study offer practical tools for spatial analysis of other notifiable infectious diseases in cnidrs. there are some limitations to this study. cnidrs is a passive surveillance system, and accessibility to health facilities and patient visit behaviour may affect the number of cases reported. we collected both clinically diagnosed and laboratory-confirmed cases in cnidrs, so misdiagnosis and misreporting are unavoidable because it is difficult to distinguish influenza from other respiratory viruses without laboratory testing, especially in the non-epidemic seasons. this paper outlined the application of the bayesian spatiotemporal model to assess the relative disease risk of influenza at the prefecture level in mainland china. we observed an increasing incidence trend of influenza from 2005 to 2018 that was fairly steady in the first 4 years and increased rapidly in the last 3 years. clusters of prefectures with high relative risk values concerning influenza incidence were identified in the central part of china. the identification of high-risk areas is especially a priority in china because the limited resources available for disease control need to be focused on the places most in need.
we hypothesize that the insufficient flu vaccine supply, low vaccine uptake, the newly emerging influenza a (h1n1)pdm09 and the expansion of influenza surveillance efforts might be the major causes of the dramatic changes in outbreak and spatiotemporal epidemic patterns. future research with more risk factors at the national and local levels is necessary to explain the changing spatiotemporal patterns of influenza in china. model specifications for spatiotemporal analysis. the besag york mollié (bym) convolution model was used as a baseline model 22 . using the notation of banerjee et al. 65 , the bym model is as follows: y i ∼ poisson(e i θ i ), with log(θ i ) = α + υ i + ν i , where: • n is the number of areas. the y i counts of influenza cases in area i are independently identically poisson distributed. θ i is the risk for area i. e i is the number of expected cases of influenza in area i, which acts as an offset. • α quantifies the average incidence risk of influenza in all the prefectures. • ν i is a spatially unstructured random effects component that is i.i.d. normally distributed with mean zero. • υ i is a spatially structured component using an intrinsic conditional autoregressive structure (icar). the random effect for each area, ζ i , is thus the sum of a spatially structured component υ i and an unstructured component ν i ; this is termed a convolution prior 22, 66 . the bym model was extended to include a linear term for space-time interaction and a nonparametric spatiotemporal time trend. possible random effects specifications for the temporal term include a linear time trend (β j ), a random time effect (γ j ), a first-order random walk (γ 1j ), a second-order autoregression (γ 2j ), etc. 25 . four types of interactions are proposed in knorr-held (2000) 28 ; see that paper for a detailed description. in this study, we assume no spatial or temporal structure on the interaction, and therefore δ ij ∼ normal(0, τ δ ).
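the expected counts e i that serve as the offset above are conventionally computed by indirect standardization, applying the overall rate to each area's population; the crude ratio y i / e i is then the raw analogue of the smoothed relative risk θ i . the paper does not spell out its standardization, so the sketch below (with made-up numbers) assumes this usual practice:

```python
def expected_counts(cases, populations):
    """expected counts e_i by indirect standardization: the overall rate
    across all areas applied to each area's population."""
    overall_rate = sum(cases) / sum(populations)
    return [overall_rate * p for p in populations]

def crude_sir(cases, populations):
    """crude standardized incidence ratios y_i / e_i (raw relative risks)."""
    return [y / e for y, e in zip(cases, expected_counts(cases, populations))]

# three hypothetical prefectures
cases = [30, 12, 60]
populations = [100_000, 80_000, 120_000]
print([round(s, 2) for s in crude_sir(cases, populations)])  # [0.88, 0.44, 1.47]
```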
four candidate models were tested and compared. in model 4, the space-time interaction is a random effect term and is interpreted as the residual effect after the unstructured, spatially structured and time effects are modelled; it represents sporadic short-term outbreaks or clusters. model selection was based on the deviance information criterion (dic), which takes into consideration the posterior mean deviance, a bayesian measure of model fit, and the complexity of the model. a smaller dic indicates a better fit of the model 67 . the final linear model consisted of an intercept (α); a vector of national-level explanatory variables (∑ k=1..n β k x k ) for the yearly total number of lot releases of influenza vaccines by the china food and drug administration, the positive rate of influenza a (h1n1)pdm09 among the number of ili specimens processed, the percentage of influenza a (h1n1)pdm09 among all the positive influenza specimens, and protocol changes; a spatially unstructured random effect term (ν i ); a spatially structured conditional autoregression term (υ i ); a first-order random walk-correlated time variable (γ 1j ); and an interaction term for time and place (δ ij ). the prefecture-specific structured and unstructured spatial risks of influenza compared to the whole spatial risk of all prefectures are obtained by applying an exponential transformation to the components of υ i and ν i , respectively. the relative risk of space-time interaction is computed by exponentiating the term δ ij . the exceedance probabilities of spatial risk and of the risk of space-time interaction were also calculated. the exceedance probability represents the posterior probability that an area's spatial risk estimate exceeds some pre-set value and has been proposed as a bayesian approach to hotspot identification 68, 69 .
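the exceedance probabilities used here for hotspot identification are posterior tail probabilities, pr(risk > threshold). with sampled posterior output they reduce to the fraction of draws above the threshold (inla integrates the posterior marginal instead, but the sampling version below conveys the idea; the draws are made up):

```python
def exceedance_probability(rr_samples, threshold=1.0):
    """posterior probability that an area's relative risk exceeds `threshold`,
    estimated as the fraction of posterior draws above it."""
    return sum(1 for r in rr_samples if r > threshold) / len(rr_samples)

# hypothetical posterior draws of exp(spatial effect) for one prefecture
draws = [0.9, 1.1, 1.3, 1.05, 0.95, 1.2, 1.15, 1.4]
print(exceedance_probability(draws))  # 0.75
```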
all spatial models were computed using integrated nested laplace approximation (inla), which has been developed as a computationally efficient alternative to mcmc 70 . all spatial analyses were conducted within microsoft r open version 3.5 using the r-inla package (version 18.07.12). ethics approval. the authors assert that all of the procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the helsinki declaration of 1975, as revised in 2008. this article does not contain any studies of human or animal subjects performed by any of the authors. since this analysis was based on anonymous aggregated statistical data, patient informed consent and ethical committee approval were not required in china. disclaimer. the views expressed are those of the authors and do not necessarily represent the official policy of the chinese center for disease control and prevention.

references:
- the burden of influenza: a complex problem
- the substantial hospitalization burden of influenza in central china: surveillance for severe, acute respiratory infection, and influenza viruses
- estimates of global seasonal influenza-associated respiratory mortality: a modelling study
- pandemic influenza: certain uncertainties
- temporal patterns of influenza a and b in tropical and temperate countries: what are the lessons for influenza vaccination? plos one 11, e0152310
- seasonal influenza vaccine supply and target vaccinated population in china
- seasonal influenza vaccination in china: landscape of diverse regional reimbursement policy, and budget impact analysis
- chinese vaccine scandal unlikely to dent childhood immunization rates
- china pharma crackdown leads to flu vaccine shortage
- the first confirmed human case of avian influenza a (h5n1) in mainland china
- h7n9 and h5n1 avian influenza suitability models for china: accounting for new poultry and live-poultry markets distribution data. stochastic environmental research and risk assessment 31
- comparative epidemiology of human infections with avian influenza a h7n9 and h5n1 viruses in china: a population-based study of laboratory-confirmed cases
- probable limited person-to-person transmission of highly pathogenic avian influenza a (h5n1) virus in china
- geographic distribution and risk factors of the initial adult hospitalized cases of 2009 pandemic influenza a (h1n1) virus infection in mainland china
- distribution and risk factors of 2009 pandemic influenza a (h1n1) in mainland china
- transmission of pandemic influenza a (h1n1) virus in a train in china
- epidemiology of human infections with avian influenza a(h7n9) virus in china
- human infection with a novel avian-origin influenza a (h7n9) virus
- characterization of regional influenza seasonality patterns in china and implications for vaccination strategies: spatiotemporal modeling of surveillance data
- clinical features of the initial cases of 2009 pandemic influenza a (h1n1) virus infection in china
- bayesian image restoration, with two applications in spatial statistics
- bayesian analysis of space-time variation in disease risk
- geographical and environmental epidemiology: methods for small-area studies
- a primer on disease mapping and ecological regression using inla
- bayesian estimates of disease maps: how important are priors?
- diffusion and prediction of leishmaniasis in a large metropolitan area in brazil with a bayesian space-time model
- bayesian modelling of inseparable space-time variation in disease risk
- bayesian extrapolation of space-time trends in cancer registry data
- epidemiology of avian influenza a h7n9 virus in human beings across five epidemics in mainland china, 2013-17: an epidemiological study of laboratory-confirmed case series
- global epidemiology of avian influenza a h5n1 virus infection in humans, 1997-2015: a systematic review of individual case data
- emergence and control of infectious diseases in china
- comparing the similarity and difference of three influenza surveillance systems in china
- dual seasonal patterns for influenza
- clinical and epidemiologic characteristics of 3 early cases of influenza a pandemic (h1n1) 2009 virus infection, people's republic of china
- risk factors for severe illness with 2009 pandemic influenza a (h1n1) virus infection in china
- vaccine scandal and confidence crisis in china
- the effect of vaccine literacy on parental trust and intention to vaccinate after a major vaccine scandal
- association between vaccination coverage decline and influenza incidence rise among italian elderly
- the re-emergence of highly pathogenic avian influenza h7n9 viruses in humans in mainland china
- variation in influenza b virus epidemiology by lineage
- environmental predictors of seasonal influenza epidemics across temperate and tropical climates
- strategy to enhance influenza surveillance worldwide
- influenza epidemiology and influenza vaccine effectiveness during the 2014-2015 season: annual report from the global influenza hospital surveillance network
- distribution of influenza virus types by age using case-based global surveillance data from twenty-nine countries
- the primary health-care system in china
- using electronic health records data to evaluate the impact of information technology on improving health equity: evidence from china
- enabling health reform through regional health information exchange: a model study from china
- electronic recording and reporting system for tuberculosis in china: experience and opportunities
- estimated global mortality associated with the first 12 months of 2009 pandemic influenza a h1n1 virus circulation: a modelling study
- continued reassortment of avian h6 influenza viruses from southern china
- knowledge, attitudes and practices (kap) related to the pandemic (h1n1) 2009 among chinese general population: a telephone survey
- knowledge, attitudes and practices (kap) relating to avian influenza in urban and rural areas of china
- perceived challenges to achieving universal health coverage: a cross-sectional survey of social health insurance managers/administrators in china
- consolidating the social health insurance schemes in china: towards an equitable and efficient health system
- impacts of road traffic network and socioeconomic factors on the diffusion of 2009 pandemic influenza a (h1n1) in mainland china
- the roles of transportation and transportation hubs in the propagation of influenza and coronaviruses: a systematic review
- human mobility and the spatial transmission of influenza in the united states
- spatiotemporal distributions and dynamics of human infections with the a h7n9 avian influenza virus
- spatial distribution of bluetongue surveillance and cases in switzerland
- the evaluation of bias in scrapie surveillance: a review
- flunet as a tool for global monitoring of influenza on the web
- global influenza seasonality to inform country-level vaccine programs: an analysis of who flunet influenza surveillance data between
- epidemiological features of and changes in incidence of infectious diseases in china in the first decade after the sars outbreak: an observational trend study. the lancet infectious diseases
- hierarchical modeling and analysis for spatial data
- bayesian mapping of disease.
- markov chain monte carlo in practice
- bayesian measures of model complexity and fit
- cluster detection diagnostics for small area health data: with reference to evaluation of local likelihood models
- space-time bayesian small area disease risk models: development and evaluation with a focus on cluster detection
- approximate bayesian inference for latent gaussian models by using integrated nested laplace approximations
this study was supported by grants from the key joint project for data center of the national natural science. j.q. ma conceived, designed, and supervised the study. y.w. zhang, x.f. wang, and y.f. li collected and cleaned the data. y.w. zhang analysed the data and wrote the drafts of the manuscript. j.q. ma and y.w. zhang interpreted the findings. all authors read and approved the final manuscript. the authors declare no competing interests. correspondence and requests for materials should be addressed to j.m.
key: cord-026384-ejk9wjr1 authors: crilly, colin j.; haneuse, sebastien; litt, jonathan s. title: predicting the outcomes of preterm neonates beyond the neonatal intensive care unit: what are we missing? date: 2020-05-19 journal: pediatr res doi: 10.1038/s41390-020-0968-5 sha: doc_id: 26384 cord_uid: ejk9wjr1 abstract: preterm infants are a population at high risk for mortality and adverse health outcomes. with recent improvements in survival to childhood, increasing attention is being paid to risk of long-term morbidity, specifically during childhood and young-adulthood. although numerous tools for predicting the functional outcomes of preterm neonates have been developed in the past three decades, no studies have provided a comprehensive overview of these tools, along with their strengths and weaknesses. the purpose of this article is to provide an in-depth, narrative review of the current risk models available for predicting the functional outcomes of preterm neonates. a total of 32 studies describing 43 separate models were considered. we found that most studies used similar physiologic variables and standard regression techniques to develop models that primarily predict the risk of poor neurodevelopmental outcomes. with a recently expanded knowledge regarding the many factors that affect neurodevelopment and other important outcomes, as well as a better understanding of the limitations of traditional analytic methods, we argue that there is great room for improvement in creating risk prediction tools for preterm neonates. we also consider the ethical implications of utilizing these tools for clinical decision-making. impact: based on a literature review of risk prediction models for preterm neonates predicting functional outcomes, future models should aim for more consistent outcomes definitions, standardized assessment schedules and measurement tools, and consideration of risk beyond physiologic antecedents. 
our review provides a comprehensive analysis and critique of risk prediction models developed for preterm neonates, specifically predicting functional outcomes instead of mortality, to reveal areas of improvement for future studies aiming to develop risk prediction tools for this population. to our knowledge, this is the first literature review and narrative analysis of risk prediction models for preterm neonates regarding their functional outcomes. preterm infants have long been recognized as a population at high risk for mortality and adverse functional outcomes, including cerebral palsy and intellectual impairment. 1 as mortality rates for preterm neonates decline and more survive to childhood, 2,3 attention has increasingly turned towards measuring longer-term morbidities and related functional impairments during childhood and young-adulthood, as well as identifying risk factors related to these complications. 4, 5 while child-specific characteristics, such as gestational age, birth weight, and sex, are well established as predictors of adverse neurodevelopmental outcomes, 6-8 recent work has identified additional factors, including bronchopulmonary dysplasia and family socioeconomic status, that are correlated with relevant outcomes, such as poor neuromotor performance and low intelligence quotient at school age. 9 in clinical settings, the assessment of prognosis can vary widely across neonatologists, 10 making a valid and reliable predictive model for long-term outcomes a highly sought-after clinical tool. moreover, predicting outcomes is vital when making decisions regarding which therapeutic interventions to apply, when providing critical data to parents for informed decision-making, and when matching infants with outpatient services to best meet their needs. 
in addition, prediction models are useful in evaluating neonatal intensive care unit (nicu) performance and allowing for between-center comparisons with proper adjustment for the severity of cases being treated. 11 numerous prediction tools have been developed to quantify the risk of death for preterm neonates in the nicu setting, including the score for neonatal acute physiology (snap) and the clinical risk index for babies (crib). 12 the national institute of child health and human development (nichd) risk calculator, predicting survival with and without neurosensory impairment, is widely used to counsel families in the setting of threatened delivery at the edges of viability. 13 furthermore, there are numerous other models that use clinical data from the nicu stay to predict risk for poor functional outcomes in infancy and school age. 14, 15 while several studies have categorized and evaluated the risk prediction models developed and validated in recent decades for mortality, 12, 16 no studies have compared and contrasted risk prediction models for non-mortality outcomes. recently, linsell et al. 17 published a systematic review of risk factor models for neurodevelopmental outcomes in children born very preterm or very low birth weight (vlbw). however, this review focused primarily on overall trends in model development and validation rather than a detailed consideration of individual models. in this article, we conduct an in-depth, narrative review of the current risk models available for predicting the functional outcomes of preterm neonates, evaluating their relative strengths and weaknesses in variable and outcome selection, and considering how risk model development and validation can be improved in the future. towards this, we first provide an overview of the different risk models developed since 1990. 
we then frame our review of these models in terms of the outcomes predicted, the range of predictors considered, and the statistical methods used to select the variables included in the final model, as well as to assess the predictive performance of the model. finally, the ethical implications of integrating risk stratification into standard clinical care for preterm neonates are considered. we conducted a manual search for relevant literature via pubmed, entering combinations of key terms synonymous with "prediction tool," "preterm," and "functional outcome" and reading the abstracts of resulting studies (table 1). studies with abstracts that appeared related to our review were then read in full to identify prediction models that were eligible for inclusion. reference lists of included studies were also reviewed, as were articles that later cited these original studies. prediction tools were defined as multivariable risk factor analyses (>2 variables) aiming to predict the probability of developing functional outcomes beyond 6 months corrected age. models that solely investigated associations between individual risk factors and outcomes were excluded, as were models that were not evaluated for predictive ability in terms of either a validation study or an assessment of performance, discrimination, or calibration. tests used to evaluate a model's overall performance were r², adjusted r², and the brier score. the use of a receiver operating characteristic (roc) curve or a c-index evaluated a model's discrimination, and the hosmer-lemeshow test was considered to evaluate a model's calibration. 18 preterm neonates were defined as <37 weeks of completed gestational age. models with vlbw neonates <1500 g were also included, since in the past birth weight served as a substitute for measuring prematurity when gestational age could not be accurately determined.
models were excluded if they used a cohort entirely composed of infants born prior to 1 january 1990; those born after 1990 were likely to have had surfactant therapy available in the event of respiratory distress syndrome, which significantly reduced the morbidity and mortality rates among preterm neonates nationwide. 19, 20 models were also excluded if they limited their prediction to the outcome of survival, if they incorporated variables measured after initial nicu discharge, or if they included subjects who were not necessarily transferred to a nicu for further care following delivery. finally, we excluded tools that only predicted outcomes to an age of <6 months corrected age, as well as case reports, narrative reviews, and tools reported in languages other than english.

overview of risk prediction models. table 2 lists all 32 studies with risk prediction models that meet the inclusion and exclusion criteria. [13] [14] [15] from these, a total of 43 distinct models were reported.

from mortality to neurodevelopmental impairment. since 1990, several mortality prediction tools have been evaluated in regards to their ability to predict the likelihood of neurodevelopmental impairment (ndi) among neonates surviving to nicu discharge. one such model is the crib, which incorporates six physiologic variables collected within the first 12 h of the preterm infant's life: birth weight, gestational age, presence of congenital malformations, maximum base excess, and minimum and maximum fio2 requirement. 50 fowlie et al. 24 evaluated how crib models obtained at differing time periods over the first 7 days of life predicted severe disability among a group of infants born <31 weeks gestational age or vlbw. in another study, fowlie et al. 25 incorporated cranial ultrasound findings on day of life 3 along with crib scores between 48 and 72 h of life into their prediction model.
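for readers unfamiliar with scores like the crib, the sketch below shows the general shape of such an additive severity score in python. the point weights and cut-offs here are invented for illustration; they are not the published crib weights.

```python
def toy_severity_score(birth_weight_g, gestational_age_wk, congenital_malformation,
                       max_base_excess, min_fio2, max_fio2):
    """Toy CRIB-style additive severity score over six physiologic variables.
    All point weights and thresholds are hypothetical, chosen only to
    illustrate the additive-points structure of such instruments."""
    score = 0
    if birth_weight_g < 1000:        # extremely low birth weight
        score += 2
    if gestational_age_wk < 28:      # extreme prematurity
        score += 2
    if congenital_malformation:
        score += 3
    if max_base_excess < -10:        # mmol/L; worse acid-base status
        score += 2
    if min_fio2 > 0.4:               # persistent oxygen requirement
        score += 1
    if max_fio2 > 0.8:               # high peak oxygen requirement
        score += 2
    return score

# two hypothetical infants: one relatively well, one severely ill
low = toy_severity_score(1400, 30, False, -5, 0.21, 0.3)   # scores 0
high = toy_severity_score(700, 25, False, -14, 0.6, 0.9)   # scores 9
```

the higher total is then mapped (in the real instruments, via validated weights) to a predicted probability of death or, in the adaptations discussed here, of later impairment.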
subsequent studies analyzed the crib in its original 12-h form and, with only one exception, 23 determined that it was not a useful tool for predicting long-term ndi or other morbidities. [26] [27] [28] [29] a second example is the snap score. 51 snap uses 28 physiologic parameters collected over the first 24 h of life to predict survival to nicu discharge, and was modified to predict ndi at 1 year and 2-3 years of age. a subsequent assessment of both the snap and the snap with perinatal extension 42 showed a poor predictive value for morbidity at 4 years of age for children born vlbw and/or with gestational age ≤31 weeks. 28 finally, the neonatal therapeutic intervention scoring system, a comprehensive exam-based prediction tool for mortality, 52 was found to have a poor predictive value for adverse outcomes at 4 years of age in children born very preterm or vlbw. 28 shortened forms of the early physiology-based scoring systems were developed and assessed for their ability to predict outcomes in childhood. application of the crib-ii on a small cohort (n = 107) of infants born <1250 g predicted significant ndi at 3 years of age. 39 however, a subsequent evaluation in a much larger cohort (n = 1328) of preterm infants <29 weeks gestational age concluded that the crib-ii did no better than gestational age or birth weight alone in predicting moderate to severe functional disability at 2-3 years of age. 40 studies have supported an association between the snap-ii and snappe-ii scores and neurodevelopmental outcomes and small head circumference at 24 months corrected age. high snap-ii scores were shown to correlate with adverse neurological, cognitive, and behavioral outcomes up to 10 years of age within a large cohort (n = 874) of children born very preterm. 43

antenatal risk factors. several groups have used data from the nichd's neonatal research network (nrn) to design and test various risk prediction models for extremely low birth weight (elbw) newborns.
one of the most widely used risk prediction tools developed from this cohort was that of tyson et al. 13

postnatal morbidity. a large cohort study (n = 910) from schmidt et al. 15, 32 used data from elbw neonates 500-999 g enrolled in the international trial of indomethacin prophylaxis in preterms (tipp). they found that the presence of three morbidities at 36 weeks post-menstrual age (bronchopulmonary dysplasia, serious brain injury, and severe retinopathy of prematurity) had a significant and additive effect on the risk for death or poor neurologic outcome at 18 months corrected age. they developed a model from this relationship that has been corroborated in two studies with smaller samples and by schmidt et al. 15 in a separate, large cohort in which the definition of poor outcome was expanded from solely ndi to "poor general health." 33, 34

letting the machines decide. some innovative work has recently been performed by ambalavanan et al. 14, 35 in creating several risk prediction models. 45 along with studies developing risk prediction tools with data from the nrn and the tipp to predict the outcomes of death and ndi or solely ndi, the group built the only risk prediction tool for the outcome of rehospitalization, both general and specifically for respiratory complications, using a combination of physiologic and socioeconomic variables incorporated into a decision tree approach. they have also been the only group to create neural network-trained models, using the same small cohort to predict major handicap, low mental development index (mdi), or low psychomotor development index (pdi). the advantage of using neural networks (algorithms that can "learn" mathematical relationships between a series of independent variables and a set of outcomes) is the ability to model complex or nonlinear relationships that can be elucidated by the model without having to consider these relationships a priori (as is typically required when using multiple regression models).
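as a concrete, deliberately toy illustration of that advantage, the pure-python sketch below trains a tiny back-propagation network on the xor relationship, which no main-effects linear model can represent. the data, architecture, and learning rate are hypothetical and unrelated to ambalavanan et al.'s actual models; the point is only that the network discovers the interaction without it being specified a priori.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# xor: a nonlinear relationship that a main-effects linear model cannot capture
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # last entry = bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    out = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, out

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial_loss = loss()
lr = 0.5
for _ in range(15000):  # plain online back-propagation
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)
        for i in range(H):            # hidden-layer updates (using pre-update w_o)
            d_h = d_out * w_o[i] * h[i] * (1 - h[i])
            w_h[i][0] -= lr * d_h * x[0]
            w_h[i][1] -= lr * d_h * x[1]
            w_h[i][2] -= lr * d_h
        for i in range(H):            # output-layer updates
            w_o[i] -= lr * d_out * h[i]
        w_o[H] -= lr * d_out

final_loss = loss()
```

the fitted squared-error loss drops far below its starting value, showing that the interaction structure was learned from the data alone; in a regression model, an explicit x1·x2 product term would have to be supplied by the analyst.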
despite the use of innovative approaches, however, none of these models differed from other studies in predictive strength or even had high predictive efficacy. 31

limitations of prior approaches. the above literature review highlights the substantial interest in developing a clinically useful risk prediction model and the limits of efforts to date. notwithstanding their differing inclusion and exclusion criteria, existing risk prediction models are relatively similar in terms of variables selected, outcomes analyzed, and statistical strategies employed. with few exceptions, the limitations of existing risk prediction models are especially apparent in their reliance on solely biologic variables and on traditional analytic methods ill-equipped to handle the statistical complexity necessary for risk modeling.

identifying important outcomes. the majority of risk prediction models defined ndi as their primary outcome of interest. making a determination of impairment often relies on standardized measures of cognition in concert with neurosensory deficits. yet, researchers often define ndi in different ways, making between-study comparisons difficult. ndi is a construct relating to global abilities encompassing cognition, language, motor function, and vision and hearing. while the tools used to identify ndi are often also used to make diagnoses of developmental delay, ndi is not a clinical term or diagnosis in and of itself. many of the remaining studies also predicted functional outcomes, such as academic performance, executive function, language ability, and autism spectrum disorder (asd). these outcomes may be more meaningful to parents and providers than ndi. 54 to date, only four studies have considered outcomes unrelated to neurodevelopment, such as impaired pulmonary function, "poor general health," and rehospitalization rates.
15, 28, 45, 49 while the emphasis on ndi is unsurprising given the high-risk population, moderate to severe ndi only affects a minority of the preterm population. 55, 56 studies have revealed numerous additional adverse outcomes that preterm individuals are more likely to experience compared to their full-term counterparts, such as impaired respiratory, cardiovascular, and metabolic function. [57] [58] [59] [60] [61] [62] [63] [64] [65] [66] neurodevelopment has been linked to chronic health problems in later childhood. 67 limiting risk prediction to moderate to severe ndi therefore ignores other, more common complications that preterm infants are likely to face that have an impact on neurodevelopment. this represents a missed opportunity for researchers to better understand what variables influence the likelihood that these problems occur. the impact of developmental disability on the child and family is completely absent from current risk models. health-related quality of life (hrql), which distinguishes itself as a personal rather than third-party valuation of a patient's physical and emotional well-being, is being increasingly appreciated as an important metric necessary to fully understand the impact of prematurity. 68 in a french national survey, the majority of neonatologists, obstetricians, and pediatric neurologists stated that predicting hrql in the long term for preterm infants would be beneficial for consulting parents about what additional responsibilities they can anticipate in caring for their child. 69 the trajectory of hrql from childhood to young-adulthood appears to improve in both vlbw and extremely low gestational age populations. 70 prediction modeling might aid in determining which factors could positively or negatively impact hrql in this vulnerable population. finally, we must consider the age at which outcomes are being predicted. 
it is evident that lower gestational age is associated with higher rates of ndi and poorer academic achievement in adolescence. 71, 72 however, the vast majority of risk prediction models assessed outcomes at the age of 3 years or less, with only three studies doing so at 10 years of age or above. although early childhood outcomes may give clues about later development, many problems do not manifest until later in childhood, such as learning disabilities and certain psychiatric disorders. developmental disability severity can fluctuate throughout childhood, with catch-up occurring in early preterm children and worsening delay in some moderate and late preterm children. 73, 74 although cohorts of preterm infants are not usually followed for more than several years, likely due to lack of resources and expense, recent studies have used data from national registries to link neonatal clinical data to sampled adults, providing evidence of increased rates of adverse neurodevelopmental, behavioral, and educational outcomes among adults born preterm. 75, 76 opportunities are therefore available to use long-term data to extend risk prediction models beyond the first few years of life.

variable selection. most of the risk models reviewed relied primarily on physiologic and clinical measures obtained during the nicu stay. while an emphasis on biologic risk factors is clearly reasonable given the known associations between perinatal morbidities and long-term outcomes, there is strong evidence in the literature suggesting associations between sociodemographic factors like parental race, education, and age, and outcomes such as cognitive impairment, cerebral palsy, and mental health disorders in children born preterm.
more specific socioeconomic variables such as lower parental education, maternal income, insurance status, foreign country of birth by a parent, and socioeconomic status as defined by the elley-irving socioeconomic index have been repeatedly correlated with reduced mental development index, psychomotor development index, intelligence quotient, and social competence throughout childhood. 71, 72, [77] [78] [79] [80] [81] [82] the geographic area in which preterm neonates are raised could also have a profound influence on their development. neighborhood poverty rate, high school dropout rate, and place of residence (metropolitan vs. non-metropolitan) have all been correlated with academic skills and rates of mental health disorders among low birth weight children. 83, 84 only 12 of the 43 models reviewed included socioeconomic variables. this may be due, at least in part, to the difficulty in obtaining social, economic, and demographic data; these variables are often not collected upon hospital admission. additionally, socioeconomic information is often poorly, inaccurately, and variably recorded, or is largely missing. 85 some risk prediction models collected socioeconomic variables at the follow-up visit when outcomes were assessed. this is an imperfect method given that factors such as household setting and family income may change substantially in the years following nicu discharge and affect children's health. 86, 87 in some models, socioeconomic variables were not included because they did not significantly improve the model's predictive ability. 45 testing the effects of social factors on infant and child outcomes requires samples that are socially and economically diverse. even large, diverse study populations may become more homogeneous over time, as subjects of lower socioeconomic status and non-white race are more likely to drop out of studies dependent on long-term follow-up.
41 moreover, treating socioeconomic variables as statistically independent factors rather than as interrelated might minimize the impact of contextual information on neurodevelopmental outcomes.

model development. of the 32 papers included in the review, 12 reported on de novo risk prediction tools. the other 20 studies either evaluated a previous model or adjusted a prior model by changing the times at which data were collected or by adding additional variables. the approach to prediction tool development was almost uniform among the studies, with nine of the models solely using regression techniques to select variables. ambalavanan et al. deviated from this method in three separate studies: two using classification tree analysis, 35, 45 and one using a four-layer back-propagation neural network. 31 each new model (with the exception of the neural network-based model by ambalavanan et al. 35, 45 ) depended on an approach in which individual variables were selected and treated as independent of one another as they were analyzed for their ability to predict the outcome of interest. yet, variables may, in fact, not act independently. while parsing the roles of potential interrelationships may be computationally onerous, and treating variables independently may lead to a more parsimonious model, this may come at the expense of accuracy. alternative computational approaches are needed to account for the differential likelihoods of certain outcomes on the causal pathway from preterm birth to later childhood outcomes. nonlinear statistical tools should be further utilized in risk prediction model development to examine the relationships between variables and outcomes of interest. machine learning, for instance, is a method of inputting a group of variables and generating a predictive model without assuming that the factors are independent or that specific factors will contribute the most to the model.
88 different forms of machine learning have already been employed in nicus to extract the most important variables for predicting outcomes such as days to discharge. 89 the non-independence of risk factors is also complicated by the role of time in models of human health and development. the life-course framework describes how an accumulation or "chains" of risk experienced over time and at certain critical periods impact later health outcomes. 90 in the context of preterm birth, the risk of being born early is not uniform across populations and depends on a given set of maternal risks. in turn, the degree of prematurity imparts differential risk for developing complications such as bronchopulmonary dysplasia, necrotizing enterocolitis, or retinopathy of prematurity. these morbidities then, in turn, increase risks for further medical and developmental impairment. these time-varying probabilities can be modeled and incorporated into prediction tools to more accurately capture the longitudinal and varying relationships between exposures and outcomes, and thereby improve estimates of risk. [91] [92] [93] a final methodological concern regarding model development is whether and how the competing risk of death is considered when the outcome being predicted is non-terminal. consider, for example, the task of developing a model for the risk of ndi at 10 years of age. how one handles death can have a dramatic effect on the model, especially since mortality is relatively high among preterm infants. moreover, if death is treated simply as a censoring mechanism, as is often done in time-to-event analyses such as those based on the cox model, then the overall risk of ndi will be artificially reduced; children who die before being diagnosed with ndi will be viewed as remaining at risk even though they cannot possibly be subsequently diagnosed with ndi.
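a small simulation makes concrete why the handling of death changes the answer. the sketch below (hypothetical discrete-time hazards, not data from any of the reviewed cohorts) contrasts a kaplan-meier-style estimate that censors deaths with an aalen-johansen-style cumulative incidence that accounts for death as a competing event; the two diverge whenever deaths occur before the follow-up horizon.

```python
import random

random.seed(1)

T = 10        # follow-up horizon (hypothetical time units)
n = 20000     # cohort size
h_ndi, h_death = 0.05, 0.03  # hypothetical per-period hazards

# simulate (event time, event type); None = administratively censored at T
subjects = []
for _ in range(n):
    t, event = T, None
    for time in range(1, T + 1):
        r = random.random()
        if r < h_death:
            t, event = time, "death"
            break
        elif r < h_death + h_ndi:
            t, event = time, "ndi"
            break
    subjects.append((t, event))

def estimates(subjects, T):
    at_risk = len(subjects)
    surv_km = 1.0   # KM survival for NDI, deaths treated as censoring
    surv_all = 1.0  # event-free survival (either event)
    cif = 0.0       # cumulative incidence of NDI with death as competing risk
    for time in range(1, T + 1):
        d_ndi = sum(1 for t, e in subjects if t == time and e == "ndi")
        d_death = sum(1 for t, e in subjects if t == time and e == "death")
        if at_risk > 0:
            cif += surv_all * d_ndi / at_risk        # Aalen-Johansen increment
            surv_all *= 1 - (d_ndi + d_death) / at_risk
            surv_km *= 1 - d_ndi / at_risk           # ignores competing deaths
        at_risk -= d_ndi + d_death
    return 1 - surv_km, cif

naive, cif = estimates(subjects, T)
```

with these hazards the naive censoring-based figure sits near 0.40 while the competing-risks cumulative incidence sits near 0.35; which quantity a model should target depends on the clinical question, which is exactly why the semi-competing risks framing discussed next matters.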
while an alternative would be to use a composite outcome of time to the first of ndi or death, doing so may result in a model that is unable to predict either event well. instead, one promising avenue is to frame the development of a prediction model for ndi within the semi-competing risks paradigm. 94, 95 briefly, semi-competing risks refer to settings where one event is a competing risk for the other, but not vice versa. this is distinct from standard competing risks, where each event is a competing risk for the other (e.g., death due to one cause or another). to the best of our knowledge, however, semi-competing risks have not been applied to the study of long-term outcomes among preterm infants.

model evaluation. waljee et al. 18 provide a summary of methods for assessing the performance of a predictive model, categorizing them into three types: overall model performance, which focuses on the extent of variation in risk explained by the model; calibration, which assesses differences between observed and predicted event rates; and discrimination, which assesses the ability to distinguish between patients who do and do not experience the outcome of interest. the majority of studies in our review assessed their models with roc curve analysis, a method of assessing discrimination. while widely used, there is some debate regarding roc-based assessments, specifically their lack of sensitivity in distinguishing between good predictive models. 96 although several novel performance measures for comparing discrimination among models have been proposed, none have been employed in the context of comparing risk prediction tools for preterm neonates. 97, 98 few studies employed analyses other than roc. only six in our review assessed overall performance with r² or partial r², and five evaluated calibration using the hosmer-lemeshow test.
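all three assessment types are straightforward to compute once predicted risks and observed outcomes are in hand. the sketch below uses a simulated cohort that is well calibrated by construction (all numbers hypothetical) to illustrate discrimination (a rank-based auc, equivalent to the c-index), overall performance (the brier score), and a hosmer-lemeshow-style table of observed versus expected event rates per risk group.

```python
import random

random.seed(7)

# hypothetical cohort: predicted risks and observed binary outcomes,
# well calibrated by construction (outcome drawn with probability = risk)
n = 2000
risks, outcomes = [], []
for _ in range(n):
    p = random.random()
    risks.append(p)
    outcomes.append(1 if random.random() < p else 0)

def auc(scores, labels):
    """Discrimination: probability a random positive outranks a random
    negative (ties count half); identical to the c-index for binary outcomes."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def brier(scores, labels):
    """Overall performance: mean squared error between predicted risk and outcome."""
    return sum((s - y) ** 2 for s, y in zip(scores, labels)) / len(scores)

def calibration_table(scores, labels, bins=5):
    """Calibration: Hosmer-Lemeshow-style (expected, observed) rates per risk group."""
    pairs = sorted(zip(scores, labels))
    size = len(pairs) // bins
    table = []
    for b in range(bins):
        chunk = pairs[b * size:(b + 1) * size]
        expected = sum(s for s, _ in chunk) / len(chunk)
        observed = sum(y for _, y in chunk) / len(chunk)
        table.append((expected, observed))
    return table

a = auc(risks, outcomes)
b = brier(risks, outcomes)
table = calibration_table(risks, outcomes)
```

note that even this perfectly calibrated model has an auc well below 1 (around 0.83 for uniform risks), which is why reporting discrimination alone, as most reviewed studies did, gives an incomplete picture of model quality.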
another four studies assessed internal validation with either an internal validation set or bootstrapping techniques. 99 there were nine studies meeting inclusion criteria solely because they had models that were externally validated by other studies. schmidt et al. 32 reported odds ratio associations for their 3-morbidity model, but odds ratios are not a reliable measure of the strength of risk prediction tools. 100 future risk model assessments for preterm neonates should at minimum include an roc curve analysis, although assessments of overall performance and calibration would also be helpful. validation with a sample different from the development set is also advised, ideally with a population outside the original cohort. 18 conclusion. risk assessment and outcomes prediction are valuable tools in medical decision-making. fortunately, infants born prematurely enjoy an ever-increasing likelihood of survival. research over the past several decades has highlighted the many influences, physiologic and psychosocial, affecting neurodevelopment, hrql, and health services utilization. yet the wealth of knowledge gained from longitudinal studies of growth and development is not reflected in current risk prediction models. moreover, some of the most well-known and widely used tools today, such as tyson et al.'s 13 five-factor model, were developed nearly two decades ago. as advances in neonatal intensive care progressively reduce the risk of certain outcomes, it is clear that these older models require updating if they are to be of continued clinical use. it should be recognized that there are potential ethical ramifications to incorporating more psychosocial factors and outcomes into risk prediction models, such as crossing the line from risk stratification to "profiling" patients and offering different treatment decisions based on race or class.
101 however, physician predictions made without the aid of prediction tools are highly inconsistent during counseling at the margins of viability, and further research is needed regarding the level of influence that physicians actually have on caregiver decision-making during counseling, as well as the extent to which risk prediction tools would change their approach to counseling. 10 in addition, despite recent innovation in statistical approaches to risk modeling, such as machine learning, most prediction tools rely on standard regression techniques. insofar as risk prediction models will continue to be developed for preterm neonatal care, making use of the clinical data available in most modern electronic health records, and taking into consideration the analytic challenges related to unequal prior probabilities of exposures, non-independence of variables, and semi-competing risks, can only strengthen our approach to predicting outcomes. we therefore recommend taking a broader view of risk, incorporating these concepts to create stronger risk prediction tools that can ultimately serve to benefit the long-term care of preterm neonates.
author contributions: c.j.c. and j.s.l. designed and carried out this literature review. c.j.c., j.s.l., and s.h. worked jointly in the analysis and interpretation of the literature review results, as well as the drafting and revision of this article. all three authors gave final approval of the version to be published.
references:
on the influence of abnormal parturition, difficult labours, premature birth, and asphyxia neonatorum, on the mental and physical condition of the child, especially in relation to deformities
trends in care practices, morbidity, and mortality of extremely preterm neonates
survival of infants born at periviable gestational ages
outcomes of preterm infants: morbidity replaces mortality
institute of medicine committee on understanding premature birth and assuring healthy outcomes. preterm birth: causes, consequences, and prevention
influence of birth weight, sex, and plurality on neonatal loss in united states
preterm neonatal morbidity and mortality by gestational age: a contemporary cohort
gestational age and birthweight for risk assessment of neurodevelopmental impairment or death in extremely preterm infants
neurodevelopmental outcome at 5 years of age of a national cohort of extremely low birth weight infants who were born in 1996-1997
comparing neonatal morbidity and mortality estimates across specialty in periviable counseling
prognosis and prognostic research: what, why, and how?
neonatal disease severity scoring systems
intensive care for extreme prematurity-moving beyond gestational age
outcome trajectories in extremely preterm infants
prediction of late death or disability at age 5 years using a count of 3 neonatal morbidities in very low birth weight infants
prediction of mortality in very premature infants: a systematic review of prediction models
risk factor models for neurodevelopmental outcomes in children born very preterm or with very low birth weight: a systematic review of methodology and reporting
a primer on predictive models
pulmonary surfactant therapy
the future of exogenous surfactant therapy
nursery neurobiologic risk score and outcome at 18 months
evaluation of the ability of neurobiological, neurodevelopmental and socio-economic variables to predict cognitive outcome in premature infants. child care health dev
increased survival and deteriorating developmental outcome in 23 to 25 week old gestation infants, 1990-4 compared with 1984-9
measurement properties of the clinical risk index for babies-reliability, validity beyond the first 12 hours, and responsiveness over 7 days
predicting the outcomes of preterm neonates beyond the neonatal intensive
predicting outcome in very low birthweight infants using an objective measure of illness severity and cranial ultrasound scanning
is the crib score (clinical risk index for babies) a valid tool in predicting neurodevelopmental outcome in extremely low birth weight infants?
the crib (clinical risk index for babies) score and neurodevelopmental impairment at one year corrected age in very low birth weight infants
can severity-of-illness indices for neonatal intensive care predict outcome at 4 years of age?
neurodevelopment of children born very preterm and free of severe disabilities: the nord-pas de calais epipage cohort study
chronic physiologic instability is associated with neurodevelopmental morbidity at one and two years in extremely premature infants
prediction of neurologic morbidity in extremely low birth weight infants
impact of bronchopulmonary dysplasia, brain injury, and severe retinopathy on the outcome of extremely low-birth-weight infants at 18 months: results from the trial of indomethacin prophylaxis in preterms
impact at age 11 years of major neonatal morbidities in children born extremely preterm
effect of severe neonatal morbidities on long term outcome in extremely low birthweight infants
early prediction of poor outcome in extremely low birth weight infants by classification tree analysis
consequences and risks of <1000-g birth weight for neuropsychological skills, achievement, and adaptive functioning
clinical data predict neurodevelopmental outcome better than head ultrasound in extremely low birth weight infants
infant outcomes after periviable birth; external validation of the neonatal research network estimator with the beam trial
clinical risk index for babies score for the prediction of neurodevelopmental outcomes at 3 years of age in infants of very low birthweight
nsw and act neonatal intensive care units audit group. can the early condition at admission of a high-risk infant aid in the prediction of mortality and poor neurodevelopmental outcome? a population study in australia
autism spectrum disorders in extremely preterm children
snap-ii and snappe-ii and the risk of structural and functional brain disorders in extremely low gestational age newborns: the elgan study
early postnatal illness severity scores predict neurodevelopmental impairments at 10 years of age in children born extremely preterm
high prevalence/low severity language delay in preschool children born very preterm
identification of extremely premature infants at high risk of rehospitalization
screening for autism spectrum disorders in extremely preterm infants
perinatal risk factors for neurocognitive impairments in preschool children born very preterm
correlation between initial neonatal and early childhood outcomes following preterm birth
bronchopulmonary dysplasia and perinatal characteristics predict 1-year respiratory outcomes in newborns born at extremely low gestational age: a prospective cohort study
the international neonatal network. the crib (clinical risk index for babies) score: a tool for assessing initial neonatal risk and comparing performance of neonatal intensive care units
score for neonatal acute physiology: a physiologic severity index for neonatal intensive care
neonatal therapeutic intervention scoring system: a therapy-based severity-of-illness index
prediction of death for extremely premature infants in a population-based cohort
parental perspectives regarding outcomes of very preterm infants: toward a balanced approach
risk of developmental delay increases exponentially as gestational age of preterm infants decreases: a cohort study at age 4 years
preterm birth-associated neurodevelopmental impairment estimates at regional and global levels for 2010
late respiratory outcomes after preterm birth
respiratory health in pre-school and school age children following extremely preterm birth
preterm delivery and asthma: a systematic review and meta-analysis
preterm birth, infant weight gain, and childhood asthma risk: a meta-analysis of 147,000 european children
preterm birth: risk factor for early-onset chronic diseases
preterm heart in adult life: cardiovascular magnetic resonance reveals distinct differences in left ventricular mass, geometry, and function
right ventricular systolic dysfunction in young adults born preterm
elevated blood pressure in preterm-born offspring associates with a distinct antiangiogenic state and microvascular abnormalities in adult life
preterm birth and the metabolic syndrome in adult life: a systematic review and meta-analysis
prevalence of diabetes and obesity in association with prematurity and growth restriction
prematurity: an overview and public health implications
measurement of quality of life of survivors of neonatal intensive care: critique and implications
quality of life assessment in preterm children: physicians' knowledge, attitude, belief, practice - a kabp study
health-related quality of life and emotional and behavioral difficulties after extreme preterm birth: developmental trajectories
prognostic factors for poor cognitive development in children born very preterm or with very low birth weight: a systematic review
prognostic factors for cerebral palsy and motor impairment in children born very preterm or very low birthweight: a systematic review
evidence for catch-up in cognition and receptive vocabulary among adolescents born very preterm
the economic burden of prematurity in canada
changing definitions of long-term follow-up: should "long term" be even longer?
functional outcomes of very premature infants into adulthood
social competence of preschool children born very preterm
prediction of cognitive abilities at the age of 5 years using developmental follow-up assessments at the age of 2 and 3 years in very preterm children
predicting the outcomes of preterm neonates beyond the neonatal intensive
perinatal risk factors of adverse outcome in very preterm children: a role of initial treatment of respiratory insufficiency?
the relationship between behavior ratings and concurrent and subsequent mental and motor performance in toddlers born at extremely low birth weight
prognostic factors for behavioral problems and psychiatric disorders in children born very preterm or very low birth weight: a systematic review
neurodevelopmental outcomes of extremely low birth weight infants <32 weeks' gestation between
neighborhood influences on the academic achievement of extremely low birth weight children
mental health outcomes in us children and adolescents born prematurely or with low birthweight
measurement of socioeconomic status in health disparities research
family income trajectory during childhood is associated with adiposity in adolescence: a latent class growth analysis
family income trajectory during childhood is associated with adolescent cigarette smoking and alcohol use
machine learning in medicine: a primer for physicians
predicting discharge dates from the nicu using progress note data
a life course approach to chronic diseases epidemiology 2nd edn. a life course approach to adult health
scientists rise up against statistical significance
the asa's statement on p-values: context, process, and purpose
time for clinicians to embrace their inner bayesian? reanalysis of results of a clinical trial of extracorporeal membrane oxygenation
semi-competing risks data analysis: accounting for death as a competing risk when the outcome of interest is nonterminal
beyond composite endpoints analysis: semi-competing risks as an underutilized framework for cancer research
use and misuse of the receiver operating characteristic curve in risk prediction
assessing the performance of prediction models: a framework for traditional and novel measures
novel metrics for evaluating improvement in discrimination: net reclassification and integrated discrimination improvement for normal variables and nested models
multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors
limitations of the odds ratio in gauging the performance of a diagnostic, prognostic, or screening marker
just health: on the conditions for acceptable and unacceptable priority settings with respect to patients' socioeconomic status
[table residue: auc 0.703; sensitivity 27.6%; specificity 87.3%]
competing interests: the authors declare no competing interests.
key: cord-256432-53l24le2
authors: yang, honglin; pang, xiaoping; zheng, bo; wang, linxian; wang, yadong; du, shuai; lu, xinyi
title: a strategy study on risk communication of pandemic influenza: a mental model study of college students in beijing
date: 2020-09-04
journal: risk manag healthc policy
doi: 10.2147/rmhp.s251733
doc_id: 256432 cord_uid: 53l24le2
purpose: to understand the characteristics of college students' risk perception of pandemic influenza, focusing on the most frequently mentioned concepts and on the differences between these perceptions and those of professionals, and then to offer a proposal for the government to improve the efficiency of risk communication and health education.
methods: according to mental model theory, the researchers first drew a framework of key risk factors, then asked students about their understanding of the framework with a questionnaire, and finally performed concept counts and content analysis on the respondents' answers. results: the researchers found several student misunderstandings of pandemic influenza, including excessive optimism about the consequences of a pandemic, a lack of detailed understanding of mitigation measures, and negative attitudes towards health education and vaccination. most students showed incomplete and incorrect views about concepts related to the development and exposure factors, impact, and mitigation measures; once threatened, this may lead to failures of decision-making. the majority of students interviewed had positive attitudes towards personal emergency preparedness for pandemic influenza and towards specialized health education in the future. conclusion: the researchers suggest that the government make a specific pandemic guidance plan informed by the risk-cognition characteristics of college students shown in these results, and update its methods of health education for college students. influenza, a highly variable infectious disease that can quickly evolve into a pandemic, poses a significant threat to people's health. 1 the corresponding emergency response measures require the active cooperation of the public to work effectively. because of its wide range of impact and potential mortality, effective risk communication helps the public understand information related to influenza. 2 compared to risk communication in other fields, when public health events occur the government often turns to experts to ask what the public should know. it is therefore a challenge to transform scientific knowledge effectively into structures useful to audiences with non-professional backgrounds.
3 our researchers use influence diagrams from mental model interviews to analyze the critical risk factors of flu, which can improve students' decision-making ability to maintain their physical health. 4-8 morgan et al's monograph on mental model theory argues that everyone relies on their mental models to understand information. a mental model grows into a unique, intrinsic pattern as an individual grows, similar to a workflow chart. splitting the outside world into multiple components to help us understand it may not be perfect; however, it affects our way of thinking and our behavioral choices. 3, 9, 10 a person's mental model is influenced by various factors, including personal experience, acquired learning, and living environment; these factors are changeable and also important in affecting our health-related behaviors. 3, 11-15 therefore, targeted education can help individuals correct misunderstandings in their mental models and then improve their risk management. in china, there is no application of mental model theory in the field of health education and no special pandemic preparedness guideline for the general public. however, in western countries, particularly the united states, many scholars have conducted substantial research in this area. lazrus et al have studied the public mountain flood communication framework in boulder county, colorado. 16 casman et al 17 used an influence diagram to establish a dynamic risk model for waterborne cryptosporidiosis, which defines "key awareness variables" in risk communication and assigns scores for evaluation. our researchers hope to use mental model theory to analyze the most critical risk factors of an influenza pandemic from a broader perspective and to find out college students' risk perception of these factors. identifying these understandings and cognitive characteristics can help improve the government's communication work, which is the aim of this article.
this study refers to the influence map formed by morss et al in the flood risk communication work in boulder county 18 and draws from it the risk factor framework of the influenza pandemic. the entire frame is an analysis of disaster events from a macro perspective, including "causes," "development," "response," "event impact" and "risk information dissemination." then, through literature research and expert consultation, the researchers summarized the concepts of the communication framework and initially formed content suitable for an influenza epidemic. the whole frame consists of the causes of influenza epidemics, the impact of pandemics, emergency preparedness and strategies of different groups, risk information, and emergency response decisions, as shown in figure 1. the researchers subsequently searched for the corresponding supporting documents according to the content of the framework and conducted expert seminars. combining the literature with expert opinions, the authors drafted initial concept items under each part of the frame. finally, we used the delphi method to invite 18 experts from the related fields to judge the structure, importance and scientific soundness of these items. 19 the purpose of mental model interviews is to determine which concepts or beliefs are "out there" with sufficient frequency that they can reasonably be captured in smaller samples. there is no standard method for determining sample size in the relevant theory and research practice. 3 according to professor morgan's monograph and related research examples, the sample size for a mental model interview should be 20-30, at which point new information reaches saturation. 3 based on these research facts, combined with the research designs of lazrus and morss, 16, 18 we recruited the first 30 respondents from 5 randomly selected non-medical colleges by telephone and posters.
to avoid confounding bias, these students were also from non-medical majors (including russian, finance, urban planning and marketing) and had not studied medical professional courses. after all the investigations were completed, we discussed the results, deleted two poor interview results, and then drew a line chart of information saturation according to the number of concepts mentioned by the respondents (figure 2). we found that after the 22nd interviewee, information saturation began to decline, and subsequent respondents did not propose new concepts. we believe the information provided by these 28 respondents meets the sample size required for this analysis, because the purpose of a mental model study is not to use statistical methods to analyze the distribution of risk cognition in a population, but to find out which concepts or beliefs are "out there" with some reasonable frequency, 3 so as to help government departments identify what to focus on when developing guidance programs and health education materials for this population. the interview began with an open question, such as "please tell us about the pandemic." our investigators guided the respondents to elaborate on their main concepts, then on details of the outbreak and the mitigation measures that should be employed. if an interviewee had experienced emergencies, they were encouraged to talk about their decisions or ideas at the time. the interview results were subsequently transcribed, encoded and classified using the coding software atlas.ti. we also conducted a quantitative analysis of the coded results, created a statistical chart, observed the degree of attention of the respondents, and compared these results with the risk perception of experts to determine the interviewees' understanding of the related concepts and other features.
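the saturation check described above (each successive interview contributing fewer previously unseen concepts) can be sketched as a simple count; the concept codes and data below are hypothetical:

```python
def new_concepts_per_interview(interviews):
    """for each interview (a set of concept codes, in interview order),
    count how many concepts had not appeared in any earlier interview."""
    seen, counts = set(), []
    for concepts in interviews:
        fresh = set(concepts) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts

# saturation is suggested once the counts fall to zero
transcripts = [{"virus mutation", "masks"}, {"masks", "surveillance"}, {"masks"}]
```

here new_concepts_per_interview(transcripts) returns [2, 1, 0]: the third interview adds nothing new, which is the pattern the saturation line chart visualizes.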
the questions used in this interview refer to a questionnaire in the study of skarlatidou et al. 20 the interview covers the content in figure 1. two researchers each coded the results of the interviews. the classification consistency index (holsti reliability) of the coders was subsequently calculated, 21 which fluctuated between 0.624 and 0.965, with an average reliability of 0.749. according to the study of boyatzis and burrus, the coding reliability of trained coders ranges from 0.74 to 0.80; 22 therefore, the reliability of the coders was within the normal range and displayed adequate consistency. [figure 2 caption: information saturation trend provided by 28 respondents. the researchers first mapped scatter plots of the number of concepts noted in each respondent's answers; then, to better show the increase and decrease in the information provided, polylines were used to connect the points. the concept content is derived from the framework of figure 3 and is described by the responses of all 28 respondents.] here are the results of the two rounds of delphi expert consultation. the value of the authority coefficient is 0.885 (>0.70), which indicates a good degree of expert authority. 19, 23, 24 as shown in table 1, in the first round of expert consultation the coordination coefficient of the items was 0.291 (p<0.001), and in the second round it was 0.324 (p<0.001), which was better than the first round and indicates, from the perspective of significance testing, that the experts' opinions were consistent. 25 finally, we created a communication framework for an influenza pandemic, as shown in figure 3. it serves as the basis for our investigation of the problem content for college students and can also be regarded as a kind of "standardized communication content".
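the holsti coefficient reported above is simply twice the number of coding decisions on which the two coders agree, divided by the total number of decisions they made; a minimal sketch with hypothetical codes (not the study's actual coding manual):

```python
def holsti(coder1, coder2):
    """holsti's inter-coder reliability: 2m / (n1 + n2), where m is the
    number of units the two coders coded identically."""
    agreements = sum(a == b for a, b in zip(coder1, coder2))
    return 2 * agreements / (len(coder1) + len(coder2))

# two coders' labels for the same four interview segments (made up)
c1 = ["causes", "impact", "impact", "response"]
c2 = ["causes", "impact", "response", "response"]
```

here holsti(c1, c2) is 0.75 (3 agreements out of 4 units), which would sit inside the 0.624-0.965 range the study reports.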
a respondent may have a higher probability of taking the correct protective measures if they have a good understanding of the entire framework. [figure 3 caption: communication framework of pandemic influenza. the frame is composed of six main conceptual dimensions; the central concepts are the bold labels, and the 2nd-level concepts in the boxes are their parts. more complicated concepts in the framework are omitted; refer to the coding manual in the appendix. the whole frame contains 79 concepts, and the arrowheads represent the influence relationships between the parts. the analogy part is listed separately to describe the events the respondents associated with a pandemic.] note: table 1 shows the statistical coefficients calculated in the two delphi rounds; the p values of both coefficients meet the requirements. the researchers counted the percentage of respondents that mentioned each concept item. this study also used a stacked bar chart to show the number of concepts mentioned by the 28 respondents (figure 3). as shown in the graph, we distinguish concepts of different attributes by dimension (risk factor). the richness of the color visually distinguishes the depth of each interviewee's mental model [the number of concepts mentioned by an interviewee], and we can determine in which dimensions of the experts' risk perception the public is highly aware and in which areas the public lacks awareness. furthermore, the length of the bar reflects the number of concepts mentioned in the dimension: a taller bar reflects more relevant concept items indicated by the respondents and a deeper understanding of the related content. for example, respondents 12, 16 and 21 knew more about the emergency response decisions during the pandemic, whereas interviewee #24 was less aware in this regard. figure 4 shows the differences in thinking about the risk of, and coping with, an influenza pandemic among different groups.
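the concept-mention percentages counted above can be computed directly from the coded transcripts; a minimal sketch, with hypothetical concept codes and data rather than the study's actual coding:

```python
from collections import Counter

def mention_percentages(coded_interviews):
    """share (in %) of respondents whose transcript contains each concept code."""
    n = len(coded_interviews)
    counts = Counter()
    for concepts in coded_interviews:
        counts.update(set(concepts))  # count each respondent at most once
    return {concept: round(100 * k / n) for concept, k in counts.items()}

coded = [{"masks", "virus mutation"}, {"masks"}, {"surveillance"}]
```

here mention_percentages(coded)["masks"] is 67, since 2 of the 3 hypothetical respondents mentioned masks; the same counting underlies the percentages reported in the results sections below.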
even with a higher education level, the college student interviewees displayed significant differences in the depth and detail of their mental models. some of the respondents' mental models appeared particularly "scarce" (such as those of respondents #2 and #25). nearly all respondents discussed less information than the experts' risk perception contains. only one interviewee (#9) cited concepts that reflected almost all parts of the communication framework in figure 2. the other students did not propose many new concepts in the interviews. their conceptual descriptions reflect their concern for specific content as well as common cognitive deficiencies and misunderstandings. the following sections discuss the most salient features of the interview answers. the interactions between multiple factors may affect the formation and development of pandemic influenza. several factors mentioned by our respondents are shown in table 2; 39% of the respondents believed that influenza virus variation was an essential cause of the pandemic. they used statements such as "new virus," "virus mutation," and "an unknown virus." additionally, 32% of the respondents referred to disease surveillance, including "poor supervision of the source of infection" and "unchecked work", and they were more inclined to use technical terms to express their views (for example, "gene mutation", "isolation treatment", "infrared surveillance", and "take the body temperature"). forty-six percent of the respondents cited characteristics related to the international spread of the pandemic. interviewee #6 indicated "foreign virus carriers from foreign places into beijing." however, some respondents believed that climate factors could lead to flu cases because they confused pandemic influenza with seasonal flu, such as interviewee #7, who answered: "when the seasons change, people may catch a cold easily. if they do not pay attention, a pandemic will happen."
many respondents (46%) also cited the impact of population density, noting that densely populated places and cities with a larger floating population are higher-risk areas for influenza. other factors were cited less frequently, by fewer than 17% of interviewees, including virus resistance, viral power, avian influenza immunity, and humans' lack of immunity to new viruses. compared to the experts, the mental models of many of the students interviewed contained only part of the communication framework. although some key factors were cited by most of the respondents, other essential factors were rarely cited or were misunderstood. for example, interviewee #16 believed that the flu was a "foodborne disease" and "caused by drugs." no respondents discussed the impact of vulnerable groups on the development of a pandemic, and there was no further detailed description of virus variation. a full understanding of this information can help people to evaluate the risk level in their environment, including which situations may carry a higher risk of infectious disease. another neglected concept is the lethality of the virus. no respondents mentioned this concept or discussed related content. in fact, the fatality rate is an important indicator of a new infectious virus. 26 from the perspective of scientific disease control, the fatality rate affects whether the virus shows the characteristics of limited regional transmission (for example, ebola virus, with a fatality rate of 50-90%, is only intermittently epidemic in individual countries and regions, with certain limitations in time and space). 27 from the perspective of promoting public participation in disease response, high-risk events can prompt individuals to make protective decisions. 28 knowing the virulence of the virus can prevent the negative attitude toward personal disease prevention that comes from trusting to luck.
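the "lethal rate" (case-fatality rate) discussed above is, in its crude form, just deaths divided by confirmed cases; a one-line sketch with made-up numbers, not data from any real outbreak:

```python
def case_fatality_rate(deaths, confirmed_cases):
    """crude case-fatality rate: proportion of confirmed cases that died."""
    return deaths / confirmed_cases

# hypothetical outbreak: 40 deaths among 80 confirmed cases
rate = case_fatality_rate(40, 80)  # 0.5, i.e. a 50% crude fatality rate
```

note that this crude ratio ignores under-ascertainment of mild cases and deaths still to occur, which is why early-outbreak estimates of it can shift substantially.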
as shown in table 3, approximately 29% of the respondents discussed the fatality of the flu, while only 14% described the severe symptoms that can occur after infection, such as interviewee #5: " . . . if there goes a pandemic, it would be more than a common cold. runny nose and sneezing or, maybe, pneumonia?" none of the respondents cited complications related to influenza infection. even if a real pandemic mostly presents with common symptoms of fever and fatigue, complications such as pneumonia, myocarditis, and bronchitis are the real causes of death in some vulnerable patients. 29, 30 therefore, although most of the respondents understood that the flu can be a serious health threat, they did not understand how people die as a result of the flu. these misunderstandings may be related to some respondents' personal, one-sided understanding of the pandemic and a lack of targeted health education. for example, interviewee #10 stated: "that is, people usually do not pay attention to clothes, then they catch a cold. it is quite a normal situation every year." most respondents also discussed the social and economic impacts of the pandemic: 46% referred to negative effects on schools, shops, public transport, and other infrastructure during a pandemic, such as interviewee #14: "schools may shut down . . . the shops outside may be closed because of this disease, and the economy may be seriously affected because everyone will hide at home." of the types of infrastructure, transportation was the most frequently cited, generally with examples drawn from the sars or bird flu period, such as interviewee #11, who stated: "everyone is not going out at the time of the outbreak . . . wearing a mask if you have to go outside." thirty-two percent of the respondents were worried about hospitals being overburdened with patients during a pandemic.
some of the respondents (28%) also imagined disastrous consequences, including the impact of the pandemic on the community. according to interviewee #16, "for a long time . . . our life may be threatened, many people steal food and drugs and will be locked inside their house . . . not just the direct impact, it will bring other serious problems." although the respondents mentioned the relevant concepts in the communication framework, they failed to understand the severe damage that pandemic influenza could cause to individual health; moreover, they were not fully aware of panic behaviors during an outbreak. the most common panic behavior is to escape from the epidemic area. fleeing from disaster is an instinctive behavior, especially during an outbreak of infectious disease. 31 in fact, during the outbreak of the novel coronavirus (covid-19) in china, people in some areas fled the outbreak area, and it happened to be the chinese new year holiday. many college students returned home to celebrate the festival, which strongly increased the risk of virus transmission. although these situations did not cause irreparably serious consequences, they greatly interfered with case investigation and disease monitoring in all provinces of the country. surprisingly, 30% of the respondents believed that the negative impact of a flu pandemic would be minimal or even positive, and nearly all of them stated that it "feels like the pandemic is far away from me." according to interviewee #23, it "is a kind of epidemic disease, but speaking of cold and flu, what is generally not a major disease, easier to treat the feeling, plus the pandemic, it is only a larger scope of infection, right?" this content reflects that some students pay substantial attention neither to public health nor to their own health.
more people choose to passively wait for and accept the strategies and measures employed by the school or the state government; they lack the initiative to seek out relevant information and take preventive actions. the coping strategies in table 4 are essential to pandemic emergency work and a necessary part of the communication framework in figure 2. twenty-nine percent of the respondents cited the importance of personal hygiene habits, such as wearing masks and isolating patients; however, few provided details regarding these aspects. a few respondents described these strategies at the government, organization, and individual levels. most of them referred to "masks" and "be far away from the cough" in the relevant descriptions but did not note details such as whether to use a special mask or how to separate the patient from the family. for example, interviewee #6 stated: "if it is a more serious situation, we will wear a mask, and then the hospital will be more nervous about the flu . . . " another 18% of the respondents believed that there was no need to isolate suspected patients, such as interviewee #19: "you cannot go to the hospital first because most of the cases are not true flu, to the hospital may be isolated, so look first." regarding the government's decision-making, 57% of the respondents cited health education and counseling. most of them were willing to accept the necessary emergency response; over a quarter of the respondents referred to influenza surveillance, public disinfection, and hospital treatment. these answers demonstrate that these students still make mistakes and lack an understanding of the most effective protective decisions, despite their better educational backgrounds and high degree of potential cooperation.
moreover, although vaccination is the most effective way to prevent the flu, only two of the respondents said they were willing to receive the flu vaccine; the other respondents said they would not vaccinate themselves if it were not compulsory: "there is no need for voluntary vaccination" (respondents 3 and 17); "some vaccines may have side effects . . . it will hurt me" (interviewee 26). notably, interviewee #9, who originated from hong kong, was able to describe all the individual and government contingency strategies and discussed his own experience of avian influenza in hong kong, in addition to elaborating on the entire process of emergency work. this reflects the maturity of the hong kong government's risk communication for emergencies and the higher risk awareness it has fostered; the related communication and publicity strategies are worth referencing. the risk of pandemic influenza can be reduced by timely warnings, access to correct information, and open attitudes toward communication and interest in the face of threats. as shown in table 5, 50% of the respondents had a certain ability to evaluate information; 43% of the respondents chose to obtain their information on pandemic risk from official channels. all respondents were willing to use several methods to search for risk information, including the internet. however, although first-hand influenza warning and decision-support information originates from the cdc, very few of the respondents (10%) were able to clarify which types of communicators can provide help or give detailed descriptions on this topic, including the specific types of early warning information that are available, where the information is, and how it is transmitted. individuals had mastered only the general concept, such as interviewee #11: " . . . go to the official website or wechat (to) find how to prevent."
for health education and publicity, most of the respondents indicated that they would not take the initiative to participate in similar activities. the reasons given were that "traditional lectures are boring" and "the publicity manual was not attractive". moreover, as interviewee #25 indicated, "i think all of them are theoretical knowledge which can be seen on the internet. if they can tell us something that you need to deal with when an event comes, it would be better." regarding suggestions for future risk communication, most of the respondents were satisfied with the current government's work and had a positive attitude toward official guideline-style emergency plans; they were more focused on "the details of the emergency work" (cited by 25% of the respondents) and "hope to get official plan" (cited by 21% of the respondents). for example, interviewee 20 indicated that " . . . the way must be change, not as before, because the flu is not like a common cold, people will not pay much attention to it. communication, whether it is a family or school, it is best to have some specific suggestions, such as how to wash hands and disinfection, everyone can refer to themselves to do it." in general, there was a clear difference in the breadth and depth of the overall understanding of pandemic-related information and the communication framework among the students interviewed. as expected, in the context of the communication framework, most of the students' mental models were not as rich as those of the experts. they were more concerned with the critical information necessary to make individual decisions when interpreting risk information; for example, interviewee #10 said: "now, i want to know what type of impact will it cause, and what type of protection measures can protect me?"
most respondents referred only to the critical concepts in the communication framework, without detailed description, or in an inaccurate or unclear manner; these gaps may reduce people's ability to manage their behavior and their compliance with expert opinions. compared to the communication framework in figure 2, the respondents used personal experience and analogies to produce related concepts and establish the information base they needed to make decisions.

table 5. the discussion of related items
  complete negation of self-media: 5 (18%)
  dialectical view of self-media: 14 (50%)
  willing to participate in publicity: 8 (29%)
  refusing to participate in publicity: 20 (71%)
  access to information from the authority: 12 (43%)
  access to information from other mass media: 14 (50%)
  obtaining information from other trusted sources: 13 (46%)
  differences between pandemic and seasonal influenza: 4 (14%)
  the influence of rumors: 7 (25%)
note: the table shows the concepts of risk information mentioned by respondents and their suggestions on current government risk communication.

influenza infection often brings many complications, in the heart and lung systems, to those with low immunity, such as infants and young children, and these are also significant causes of the virus's potential lethality. 30, 32 the interview results show that some students do not pay sufficient attention to the impact of pandemic influenza and remain overly optimistic, particularly regarding the lethality of the virus, serious complications, and the identification of vulnerable populations. our respondents trust the country's sound epidemic prevention system. however, because much about a new virus remains unknown and unexplored, the outbreak of a new virus often challenges the health system of a region; for example, virus identification, targeted program formulation, and information release all take time.
because of these time lags between case generation and intervention, successful disease control requires the public to actively carry out personal protection rather than passively wait for the intervention of government departments. moreover, the fading memory of past epidemics and the lack of targeted health education may also explain the over-optimism about a pandemic. consequently, those with inaccurate risk perceptions may regard themselves as "the strongest young people" or "a person who has enough understanding about the flu." once a new virus breaks out, such people may pass misleading information to other individuals in their social circle, which will affect others' emergency decisions. in particular, for those who have experienced an influenza pandemic without being negatively affected, luck may lead them to respond too casually to future pandemics. 33, 34 furthermore, although the h1n1, h5n1, and other influenza outbreaks were derived from new viruses following mutation, the recurrence of old viruses during the flu season also risks becoming a pandemic. 31 being able to distinguish the key differences between a pandemic and common flu can effectively improve the level of personal risk cognition. among the respondents, we found that some students remained confused: they believed that a pandemic is the mass spread of seasonal influenza, or that a pandemic is an almost impossible "super calamity". moreover, a pandemic is often unpredictable and generally involves international outbreaks. therefore, it is important for the public to understand that a pandemic is not far away from us. we need to pay attention to our own prevention during the flu season and, at the same time, be alert to unusual cold symptoms, especially when we go abroad.
otherwise, patients may mistakenly think that they are suffering from common influenza and choose to wait or self-medicate, thus delaying diagnosis and treatment, infecting others, and causing serious consequences. finally, concerning vaccination, our respondents held negative views. only 2 of the 28 respondents cited the importance of the vaccine and had a history of active vaccination; the reasons of the others mainly focused on the conventional "i feel good and don't need vaccination" and "doubts about the safety of vaccines." therefore, our current risk communication seems inadequate in promoting the necessity of vaccination. the public is not aware of the importance of the vaccine for influenza prevention, or holds misperceptions caused by a one-sided understanding of the pandemic, as discussed in "the countermeasures of the pandemic". in an investigation of the willingness of the elderly to be vaccinated, shaoliang geng 35 found that the primary sources of influenza-related knowledge for elderly adults were family, relatives, friends, and television, and the most trusted source of knowledge was doctors. there are gaps between clinical and public health knowledge, and patients lack knowledge about the importance of vaccination. correcting this misunderstanding is vital for college students because it can promote the dissemination of vaccination knowledge by young students within the family, thus improving vaccination among the recommended groups (elderly adults and young children). as discussed in "the acquisition of risk information and public suggestion", in the absence of relevant knowledge and information, the respondents applied personal experiences and analogies to compose the foundation of their mental model and help themselves understand the risk of the pandemic. differences in understanding the causality between risk factors can also lead to substantial differences in risk perception and coping between individuals.
33 many students know only a few general concepts and have not formed a complete emergency-preparedness mode of thinking within the communication framework: they know that one can do something during a pandemic but not much about what to do or what is truly meaningful. for example, although nearly all respondents cited wearing masks and bringing patients in promptly for medical treatment, these most basic measures can be of limited use during a real pandemic, and citing them is only the result of a personal-experience analogy (comparison with a cold or related disease). what's more, for those in an outbreak area, especially those with suspected symptoms, it is the right and effective decision to stay at home and seek the help of local medical institutions to protect personal health, rather than to conceal facts and escape from the outbreak area in panic; but none of our respondents knew that. also, most respondents had only basic concepts (the government and the health department) regarding the types of communicators who provide relevant risk information. these overly broad understandings may limit their ability to rapidly identify critical information or affect their access to specific reports under the threat of severe flu, particularly when their typical sources of information or communication channels are unavailable, or the necessary information is not provided. if the government is unable to offer accurate messages or fails to contain the spread of misinformation, public trust in official authority may be reduced. students generally prefer health education with new styles and systematic content. the appeal of traditional lectures and guideline books full of academic words is far lower, and it is hoped that the government will "reduce the over-generality of the description" and "release relevant data to increase persuasion" in future communication work. foltz's research confirms that it is necessary to use various mechanisms in the risk communication of emergencies.
individuals with nonprofessional backgrounds tend to think in more specific terms, their vocabulary is less expansive, and subtle expressions cannot be well understood. bright colors and charts attract them easily, while the transmission of complex text information makes people feel tired and irritable. 2 two student respondents also suggested organizing practical exercises, if possible, which they think are more helpful for deepening the impression and understanding the self-protection measures used to cope with the pandemic. information consistency is a decisive factor in understanding and perceiving personal risk. in terms of communication effectiveness, multiple sources of consistent messages are typically more effective than messages from a single source or with differing contents. 36 the earlier the warning people receive and the higher the threat conveyed by the information, the higher the possibility that people will take active preventive measures. therefore, government departments should incorporate situational information about the outbreak and the proposed measures into influenza warnings, while maintaining the consistency of multiple communication messages. first, the results of this research reflect some misunderstandings among the respondents that appeared with prominent frequency: 1) influenza virus mutation and seasonal influenza have the potential to evolve into a pandemic, and the prevention of common influenza cannot be ignored; 2) the impact of an influenza pandemic is often unprecedented, and influenza virus infection can be lethal; in addition to severe cold symptoms, it can also result in severe complications in patients; 3) influenza vaccination plays an active role in pandemic prevention and should be actively taken up, particularly by children with low immunity and elderly adults, a vulnerable group; 4) for suspected patients in the family, the first choice is social isolation, and it is very dangerous for family members to remain in close contact without protective measures.
it is imperative for individuals to have common knowledge regarding influenza, the correct personal responses, and the degree of risk in their area in order to make the right decisions. therefore, we suggest that the government make the above content the focus of communication when communicating pandemic-related risks or formulating the corresponding health education materials, so as to improve the compliance of the audience. on the other hand, the content of government risk communication should not be limited to medical advice; the public health department should develop response plans for individuals and organizations. in terms of organizations, a pandemic does not directly damage facilities, in contrast to many other catastrophic events; however, the regular work of employees within an organization will be affected, and the absence of ill employees in central positions will have a severe impact on the regular operation of the organization. therefore, a "continuous work plan" needs to be developed for these particular circumstances. the government should release relevant risk information on an influenza pandemic in the form of a preparation plan, use the network for distance health education, or guide emergency response work through local radio or television stations. finally, we should update the channels and methods of risk communication and health education. the government should strengthen the application of new media to adapt to young people's information acquisition preferences. the form of communication can gradually be changed from traditional lectures to novel approaches, such as public welfare videos, songs, and scene-construction experiences. moreover, scene effects can play an essential role in enhancing personal experience, because the analogies they provide in the event of a risk can facilitate correct risk assessment and response behavior.
references (titles as extracted):
- risk communication for public health emergencies
- the perception of risk
- risk communication: a mental models approach
- rational choice and the framing of decisions
- a warning shot: influenza and the 2004 flu vaccine shortage
- the determinants of trust and credibility in environmental risk communication: an empirical study
- news influence on our pictures of the world
- health information on the internet: accessibility, quality, and readability in english and spanish
- best practices in public health risk and crisis communication
- risk perception and communication unplugged: twenty years of progress
- risk society (ho po wen translation)
- acceptable risk
- the nature of explanation
- rating the risks
- "know what to do if you encounter a flash flood": mental models analysis for improving flash flood risk communication and public decision making
- an integrated risk model of a drinking-water-borne cryptosporidiosis outbreak
- flash flood risks and warning decisions: a mental models study of forecasters, public officials, and media broadcasters in
- application of delphi method in screening self-rated health evaluation index system
- what do lay people want to know about the disposal of nuclear waste?
- a mental model approach to the design and development of an online risk communication
- validity in the qualitative research interview
- the competent manager: a model for effective performance
- delphi method and its application in medical research and decision making
- research on the structure of public risk communication ability of influenza pandemic in health sector
- coordination coefficient w test and its spss implementation
- influenza century: review and enlightenment of influenza pandemic in the 20th century
- discrete logistic dynamic model and its parameter identification for the ebola epidemic
- modern epidemiology methods and applications. beijing: beijing medical university peking union medical college joint publishing house
- research on monitoring and evaluation index system of national essential medicine system in primary health care institutions. hubei: hua zhong university of science and technology
- analysis of the clinical characteristics of influenza a (h1n1)
- how does the general public evaluate risk information? the impact of associations with other risks
- prevalence and characteristics of children at increased risk for complications from influenza
- analysis of the information demand characteristics of public health emergencies of infectious diseases
- investigation on knowledge and willingness of influenza vaccination among the elderly over 60 years old in xuchang city
- social and hydrological responses to extreme precipitations: an interdisciplinary strategy for post-flood investigation

the authors would like to acknowledge linxian wang for helping compile the interview questionnaires, making suggestions on interview skills, and finding supporting documents. we also express sincere gratitude to the students involved in the interviews for this research. this research did not involve any experiments or investigation requiring ethical approval, and it did not receive any specific funding. the authors report no conflicts of interest for this work.
risk management and healthcare policy is an international, peer-reviewed, open access journal focusing on all aspects of public health, policy, and preventative measures to promote good health and improve morbidity and mortality in the population. key: cord-034832-uvjjmt1p authors: shi, yong; zheng, yuanchun; guo, kun; jin, zhenni; huang, zili title: the evolution characteristics of systemic risk in china's stock market based on a dynamic complex network date: 2020-06-02 journal: entropy (basel) doi: 10.3390/e22060614 sha: doc_id: 34832 cord_uid: uvjjmt1p the stock market is a complex system with unpredictable stock price fluctuations. when the positive feedback in the market amplifies, the systemic risk will increase rapidly. during the last 30 years of development, the mechanism and governance system of china's stock market have been constantly improving, but irrational shocks have still appeared suddenly in the last decade, making investment decisions risky. therefore, based on the daily return of all a-shares in china, this paper constructs a dynamic complex network of individual stocks, and represents the systemic risk of the market using the average weighting degree, as well as the adjusted structural entropy, of the network. in order to eliminate the influence of disturbance factors, empirical mode decomposition (emd) and grey relational analysis (gra) are used to decompose and reconstruct the sequences to obtain the evolution trend and periodic fluctuation of systemic risk.
the results show that the systemic risk of china's stock market as a whole shows a downward trend, and the periodic fluctuation of systemic risk has a long-term equilibrium relationship with abnormal fluctuations of the stock market. further, each rise in systemic risk corresponds to external shocks and internal structural problems. the stock market is a typical complex system, with multiple stock prices fluctuating from equilibrium to deviation and back to equilibrium. a large number of heterogeneous investors buy and sell stocks frequently, making the relationships between different stocks unpredictable. in most scenarios, owing to factors like the herd effect, investors' investment strategies converge [1,2]: when some investors buy a stock, other investors tend to buy the same one; furthermore, when the vast majority of investors buy or sell stocks, other investors usually follow this action. at the same time, listed companies are another heterogeneous agent in the stock market. on the one hand, economic exchanges between listed companies will lead to linkage of their stock prices; on the other hand, similar actions by investors on similar stocks can cause herd behavior between different stock prices. when the prices of a large number of stocks in the market tend to move consistently, the herd effect in the market is stronger, and the stock market is more likely to fluctuate excessively and consistently, leading to higher market systemic risk [3,4]. in former studies, the capital asset pricing model (capm) framework was usually used as the basic theory to analyze financial systematic risk [5-8]. according to capm, risk can be divided into systematic risk (or market risk) and non-systematic risk, and the latter can be diminished through diversified portfolios.
systematic risk often refers to pervasive, far-reaching, perpetual market risk, which is usually measured by the beta of a portfolio. therefore, most studies on systematic risk are based on beta values. although this theory is widely adopted, it usually comes with a number of hypotheses, such as homogeneous investors in capital markets. however, in modern financial markets, different investors generally have different degrees of rationality, ability to obtain information, and sensitivity to prices; that is, investors are usually heterogeneous. hence, capm may not be a reasonable model in the real complex world [9,10]. more importantly, this paper focuses on systemic risk, which reflects the stability of the system and the characteristics of risk transmission among individuals in a certain complex system. a complex network, which is based on physics and mathematics theory, can tackle complicated practical problems [11]. it is especially suitable for modeling, analysis, and calculation in complex financial systems [12]. nowadays, the literature applying complex networks to finance is growing in size, and complex networks have become important tools in the finance field [13]. after 30 years of development, china's stock market is growing in scale and vitality, while the market operation mechanism and management system are constantly improving. nevertheless, there have been several typical bear and bull markets in recent years, and systemic risk in the stock market has risen periodically. therefore, a dynamic complex network of individual stocks in china's stock market is constructed in this paper to measure the dynamic systemic risk of china's stock market. then, the trend evolution and cyclical characteristics of systemic risk are explored. the structure of this paper is as follows.
section 2 summarizes the applications of complex networks in the field of economy and finance; section 3 introduces the data and methodology used in this paper; section 4 proposes the empirical results and analysis; and the conclusions and some discussion are given in section 5. construction of the network consists of two important steps, defining nodes and defining edges. in previous studies, nodes are usually represented by different agents in the financial market, that is, stocks or bonds, and edges are symbolized by the relationship between such agents. pearson's correlation coefficient is the most common and easiest way to measure the correlation between two entities in the financial market [14] [15] [16] [17] [18] [19] [20] . for example, mcdonald et al. used pearson correlation coefficient to construct a currency-related network in the global foreign exchange market and obtained temporary dominant or dependent currency information [16] . in addition, other correlation coefficients, such as spearman rank-order correlation coefficient [21] , multifractal detrended cross-correlation analysis (mfcca) [22, 23] , multifractal detrended fluctuation analysis (mfdca) [24] , and cophenetic correlation coefficient (ccc) [25] , have also been put forward. furthermore, correlation can also be defined by some econometric methods, such as the granger causality test [26, 27] , cointegration test [28] , dynamic correlation coefficient with garch (dcc-garch) [29] , and so on. after the definition of edges, some filter methods for choosing the important edges should be applied. otherwise, the complex network will be very large and complicated, which is not conducive to subsequent analysis. minimum spanning tree (mst) can be used for this purpose. after mst operation, the complex network will retain only n − 1 edges, where n is the number of the nodes, which greatly facilitates the study of the network topology. 
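to make the mst filtering concrete, the following python sketch (not taken from any of the cited papers; the function name and random test data are our own) converts pearson correlations into mantegna's metric distance d_ij = sqrt(2(1 − ρ_ij)) and keeps the n − 1 mst edges with scipy:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_filter(returns):
    """Keep only the MST edges of a stock correlation network (Mantegna, 1999).

    returns: (T, n) array, one column of daily returns per stock.
    Returns the (n, n) symmetric boolean adjacency matrix of the n-1 edges.
    """
    corr = np.corrcoef(returns, rowvar=False)   # Pearson correlation matrix
    dist = np.sqrt(2.0 * (1.0 - corr))          # Mantegna's metric distance
    mst = minimum_spanning_tree(dist)           # retains the n-1 lightest edges
    adj = mst.toarray() > 0
    return adj | adj.T                          # symmetrize: undirected network

# illustrative data: 250 trading days, 20 stocks
adj = mst_filter(np.random.default_rng(0).normal(size=(250, 20)))
print(adj.sum() // 2)  # -> 19 edges, i.e. n - 1
```

for n stocks the filtered network always keeps exactly n − 1 edges, which is what makes the resulting topology tractable.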
at present, mst is most commonly used to simplify the financial complex network [14,15,18-20,22,29-31]. for example, in 1999, mantegna first proposed that mst could be used to search for important edges in the stock market network, and a stock market topology with economic significance could be obtained [14]. other filtering methods exist besides mst. in this paper, a rolling window of 90 days, moving in steps of 1 day, was used, so the average weight and structural entropy of the network on each day could be obtained. the ratio of the average weight of the stocks in the top 10% by weight to the overall average weight in each window period was also calculated and defined as the concentration ratio of important stocks. therefore, four network indexes could be derived. next, these four network indexes were combined with the stock market index and 0-1 standardized before empirical mode decomposition (emd) was performed. through the above process, the original sequences were divided into a number of intrinsic mode functions (imfs). then, the results were reconstructed with grey relational analysis (gra), so that each sequence had three components, that is, tendency, cycle, and disturbance. finally, statistical analysis of the three components was conducted in order to explore the development of china's stock market and the evolution characteristics of systemic risk. modeling of the complex network, emd, and gra is introduced as follows.
a complex network consists of several nodes and the edges linking them. the node is the basic element of a complex network and is the abstract expression of an "individual" in the real world. the edge is an expression of the relationship between elements and can be given a weight according to the strength of the relationship. here, w_ij represents the weight of the edge linking node i and node j, where i, j = 1, 2, 3, . . . , n and n is the number of nodes in a certain network. for an undirected network, w_ij = w_ji (1). we can also use the weighted degree to represent the importance of nodes, which is defined as dw_i = Σ_{j ∈ v(i)} w_ij (2), where v(i) is the set of nodes linked to node i. the larger the weighted degree, the stronger the degree of correlation with other nodes and the more important the node. we use the return rates of a-share stocks on china's stock market as the network nodes and construct the network using the correlation coefficient ρ_ij as the edge weight.
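the window-level network indexes just described can be sketched in python as follows. this is a minimal illustration, not the paper's code: the absolute correlation is used as the edge weight w_ij (an assumption, since the paper thresholds on |ρ_ij| but does not state the sign convention), the 0.4 threshold for the structural entropy follows the text, and boltzmann's constant is set to 1. the random input data are purely illustrative.

```python
import numpy as np

def network_indexes(returns, threshold=0.4):
    """Compute average weight, top-10 weight, concentration ratio, and
    structural entropy for one window of returns (T, n), one column per stock."""
    n = returns.shape[1]
    corr = np.corrcoef(returns, rowvar=False)
    w = np.abs(corr)                      # edge weights w_ij = |rho_ij| (assumption)
    np.fill_diagonal(w, 0.0)              # no self-loops
    dw = w.sum(axis=1)                    # weighted degree dw_i
    avg_weight = dw.mean()
    top10 = np.sort(dw)[-10:].mean()      # mean weighted degree of the 10 largest nodes
    concentration = top10 / avg_weight
    # structural entropy on the thresholded, non-fully-connected network
    k = (w >= threshold).sum(axis=1)      # node degrees after cutting weak edges
    p = k / k.sum() if k.sum() > 0 else np.full(n, 1.0 / n)
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()      # Boltzmann constant set to 1
    return avg_weight, top10, concentration, entropy

# illustrative 90-day window with 30 stocks
idx = network_indexes(np.random.default_rng(1).normal(size=(90, 30)))
```

in the paper this computation is repeated over a rolling 90-day window with a 1-day step, producing one value of each index per trading day.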
we use the return rates of a-share stocks on china's stock market as the network nodes and construct the network using the correlation coefficient ρ_ij as the edge weight:

ρ_ij = (⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩) / sqrt((⟨x_i^2⟩ − ⟨x_i⟩^2)(⟨x_j^2⟩ − ⟨x_j⟩^2)), (3)

where {x_it, i = 1, 2, ..., n; t = 1, 2, ..., T} is the original stock return rate data and ⟨···⟩ indicates a time average over the T data points of each time series. after we obtain w_ij = |ρ_ij|, we calculate the average weight, the top-10-node weight, and the concentration ratio as follows:

average weight = (1/n) Σ_{i=1}^{n} dw_i, (4)

top 10 nodes = (1/10) Σ_{i∈top(i)} dw_i, (5)

concentration ratio = top 10 nodes / average weight, (6)

where top(i) denotes the nodes i with the top 10 weighted degrees dw_i. furthermore, we calculate the network's structural entropy, which is often used to measure the complexity of a complex network system [37]. however, as the structural entropy of a fully connected network is constant, it is meaningless for our analysis, so we need to remove the edges of weak correlation to obtain a non-fully connected network for calculating the structural entropy. the threshold value of the correlation coefficient is set at 0.4: if the absolute value of the correlation coefficient ρ_ij, that is, w_ij, is less than 0.4, the edge is cut off, and we obtain a non-fully connected network from which to calculate the structural entropy e_deg under each window [37]:

e_deg = −k Σ_{i=1}^{n} p_i ln p_i, (7)

where n is the total number of nodes in the network, k is boltzmann's constant, and p_i can be calculated from the number of edges connecting to node i, namely, the degree k_i of node i:

p_i = k_i / Σ_{j=1}^{n} k_j. (8)

combining the three network indexes with china's stock market index gives four input series, denoted {y_kt, k = 1, 2, 3, 4; t = 1, 2, ..., T}. owing to significant differences in their numerical levels, the y_kt have to be 0-1 standardized, that is,

y'_kt = (y_kt − min_t y_kt) / (max_t y_kt − min_t y_kt). (9)

for a signal z(t), the upper and lower envelopes are determined by cubic spline interpolation of the local maxima and minima. m_1 is the mean of the envelopes. subtracting m_1 from z(t) yields a new sequence h_1.
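the thresholded structural entropy can be illustrated as follows. this is a sketch: boltzmann's constant is set to 1, and nodes left with no surviving edges are skipped in the sum.

```python
import numpy as np

def structural_entropy(corr, threshold=0.4):
    """Degree-distribution structural entropy e_deg of the network obtained
    by cutting edges with |rho_ij| < threshold (Boltzmann's constant taken
    as 1 for illustration)."""
    w = np.abs(np.asarray(corr, dtype=float)).copy()
    np.fill_diagonal(w, 0.0)
    adj = w >= threshold                 # keep only strongly correlated edges
    deg = adj.sum(axis=1)                # degree k_i of each node
    total = deg.sum()
    if total == 0:                       # no edges survive the threshold
        return 0.0
    p = deg[deg > 0] / total             # p_i = k_i / sum_j k_j
    return float(-(p * np.log(p)).sum())
```

for a fully connected network of n nodes this reduces to ln n, which is why the all-connected case is uninformative and the threshold is needed.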
if h_1 is steady (does not have a negative local maximum or a positive local minimum), it is denoted as the first intrinsic mode function, imf_1. if h_1 is not steady, it is decomposed again until a steady series is attained, which is then denoted imf_1. the residual r_1 = z(t) − imf_1 then replaces the original z(t) and is decomposed in the same way. repeating these processes k times gives imf_k, that is,

r_k = r_{k−1} − imf_k. (10)

finally, let res denote the residual of z(t) after all imfs have been extracted:

z(t) = Σ_{k=1}^{K} imf_k + res, (11)

where the imfs and res can be extracted for the gra process. grey relational analysis was first put forward by deng j. l. in 1989 [38]. his grey relational degree model, which is usually called the grey relative correlation degree, mainly focuses on the influence of the distance between points in the system. the grey relative correlation degree is given by equation (12):

γ(d_i, d_j) = (1/T) Σ_{t=1}^{T} (min_j min_t |d_i(t) − d_j(t)| + ρ max_j max_t |d_i(t) − d_j(t)|) / (|d_i(t) − d_j(t)| + ρ max_j max_t |d_i(t) − d_j(t)|), (12)

where d_i(t) is the reference series, d_j(t) is the compared series, and ρ is the distinguishing coefficient, which is usually equal to 0.5. in order to overcome the weakness of the grey relative correlation degree, the absolute correlation degree was proposed by mei (1992) [39]; its formula is given by equation (13). considering their respective weaknesses and strengths, we used the grey comprehensive relational degree to classify the noise terms and the market fluctuation terms. the grey comprehensive relational degree is given by equation (14):

γ_comprehensive = β γ_relative + (1 − β) γ_absolute, (14)

where β is the weight of the grey relative relational degree, which is set to 0.5. figure 2a compares the three average-weight-related indicators of the dynamic complex network with the dynamic evolution of the shanghai composite index, standardized by setting it to 1000 on the first trading day of 1997. it can be seen that the average weight of the complex network and the average weight of the top 10 stocks are strongly synchronized, with a high correlation of 0.9896. therefore, both can be used as proxy indicators of systemic risk.
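deng's relative correlation degree can be sketched as below. the coefficient formula follows the standard grey relational analysis definition with ρ = 0.5 (an assumption, since the paper's equation (12) did not survive conversion); with a single compared series, the min/max over compared series degenerates to a min/max over time.

```python
import numpy as np

def deng_relational_degree(ref, comp, rho=0.5):
    """Grey relative correlation degree between a reference series `ref`
    and one compared series `comp` (standard Deng formulation; sketch)."""
    ref, comp = np.asarray(ref, float), np.asarray(comp, float)
    delta = np.abs(ref - comp)           # pointwise distance
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:                     # identical series: perfect relation
        return 1.0
    xi = (d_min + rho * d_max) / (delta + rho * d_max)
    return float(xi.mean())              # degree = mean of the coefficients
```

the comprehensive degree of equation (14) would then be β times this relative degree plus (1 − β) times the absolute degree, with β = 0.5.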
however, the concentration ratio is not consistent with the overall systemic risk. the concentration of risk is relatively low when the systemic risk is high, which means the risk is relatively decentralized. furthermore, the concentration ratio and the average weight are significantly negatively correlated, with a correlation coefficient of −0.91329. we therefore focus on the average weight as the measure of the systemic risk of the chinese stock market. it can also be seen from figure 2a that, although there is a correlation between the two average weight indexes (all and top 10) and the stock index, the coefficients, −0.1370 and −0.0829, are relatively small. this shows that the level of systemic risk is not determined by the movement of the overall price trend. in order to further investigate the relationship between the systemic risk represented by the average weight, the beta value (β) obtained from the capm model, and the average stock variance (V), we estimated β_t and V_t as follows:

x_kt − r_f = β_kt (y_t − r_f) + e_kt,

β_t = (1/n) Σ_{k=1}^{n} β_kt,

V_t = (1/n) Σ_{k=1}^{n} var(x_kt),

where n is the total number of stocks; T is the length of the sliding window; r_f is the risk-free interest rate, which was set to 3%; x_kt is the return of the kth stock in sliding window t; y_t is the return of the stock index, which stands for the market return and is represented by 000001.sh; β_kt is estimated by least squares from y_t and x_kt; e_kt is the error term; β_t is the average beta of all individual stocks; and V_t is the average variance of all stocks in sliding window t. in figure 2b, we compare the systemic risk with beta and stock variance, finding that the three have different moving trends, which shows that our systemic risk index can capture unique market fluctuations. furthermore, the systemic risk index was ahead of beta in several stages, such as from june 2006 to july 2008 and from july 2015 to august 2017, which shows that our systemic risk index has a certain risk pre-warning ability.
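the β_t and V_t construction for one window can be sketched with a plain least-squares regression per stock; the synthetic data and the exact excess-return form of the regression are illustrative assumptions.

```python
import numpy as np

def capm_window(stock_returns, market_returns, rf=0.03 / 252):
    """One sliding window: regress each stock's excess return on the excess
    market return to get beta_kt, then average betas and raw-return variances
    across stocks (sketch of beta_t and V_t; rf is a daily risk-free rate)."""
    ex_m = np.asarray(market_returns, float) - rf
    ex_s = np.asarray(stock_returns, float) - rf          # shape (T, n)
    X = np.column_stack([np.ones_like(ex_m), ex_m])
    coefs, *_ = np.linalg.lstsq(X, ex_s, rcond=None)      # shape (2, n)
    beta_t = coefs[1].mean()                              # average beta
    v_t = np.asarray(stock_returns, float).var(axis=0).mean()
    return beta_t, v_t
```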
we further compared the systemic risk represented by the average weight with the volatility index (vix) of the chinese and u.s. stock markets. as the chinese vix does not cover the whole research range, the u.s. vix was selected for comparison purposes. the correlation coefficient between the two vix series over this range is significantly positive, but the coefficient is only 0.5626. figure 3a presents the great differences in the trend of the vix between china and the united states. it can be seen that the correlation coefficient between the average weight and the chinese vix is 0.4763 over the interval since the chinese vix was launched. it is noteworthy that the volatility index leads the systemic risk index to a certain extent. this is confirmed by the results of the cross-correlation analysis, with a maximum coefficient of 0.7469 at a lag of 55 days (which means current systemic risk is highly related to the vix from 55 days prior). however, this is mainly because the systemic risk index constructed in this paper was compiled using the sliding window method, with a window length of 90 days, so the systemic risk index at a certain time t actually represents the systemic risk of the previous 90 days.
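the lead-lag comparison with the vix amounts to a lagged cross-correlation scan; a minimal sketch (synthetic data, simple pearson correlation at each lag):

```python
import numpy as np

def lead_lag_scan(risk, leader, max_lag=90):
    """Correlate today's `risk` with `leader` from `lag` days earlier, for
    lag = 0..max_lag, and return (best_lag, best_corr). Sketch of the kind
    of cross-correlation analysis behind the reported 55-day VIX lead."""
    risk, leader = np.asarray(risk, float), np.asarray(leader, float)
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        a = risk[lag:] if lag else risk
        b = leader[:-lag] if lag else leader
        c = np.corrcoef(a, b)[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag, best_corr
```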
in fact, the complex network characteristics of individual stocks are effective at reflecting the systemic risk of the market. to verify this, we calculated the 90-day averages of the vix, which are shown in figure 3b. it can be seen that the systemic risk index constructed in this paper is consistent with the 90-day average trend of china's vix; moreover, the systemic risk index is ahead of china's vix after 2017 and is more sensitive, which demonstrates the effectiveness of the systemic risk index derived from the complex network. figure 4 shows the comparison between the structural entropy and the number of nodes in the complex network. it can be seen that the structural entropy is highly correlated with the number of nodes, with a correlation coefficient of 0.9302. in other words, the increase in the system complexity of china's stock market is mainly caused by the increase in the number of listed companies. nevertheless, in addition to the overall upward trend, the structural entropy also shows periodic fluctuations. therefore, multi-scale analysis is required to determine whether the system complexity represented by structural entropy is related to systemic risk.
figure 5 presents the emd results for the standardized systemic risk index, structural entropy, and stock price index, respectively. it can be seen that the original sequences are each divided into seven imfs and one residual term, among which the residual term represents the overall trend of the index's evolution to a certain extent, the lower-frequency imfs describe the periodic fluctuations of the index on different time scales, and the highest-frequency imf represents the stochastic perturbation.
through emd, it can be found that the residual (trend) term decomposed from the structural entropy represents the growth in the number of network nodes, and the correlation coefficient between this residual term and the number of network nodes further increases to 0.9428. when this trend term is removed from the original sequence and the result is compared with the systemic risk series represented by the average weight, as shown in figure 6, highly consistent fluctuations between the two series can be seen, and their correlation coefficient reaches 0.7572. therefore, the adjusted structural entropy, that is, the structural entropy with the network-size trend term removed, can also measure the systemic risk. nevertheless, owing to the high correlation between these two series, the following analysis focuses only on the systemic risk represented by the average weight.
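removing the slow trend term before comparing series can be illustrated with a polynomial detrend as a crude stand-in for subtracting the emd residual (an assumption: emd itself is not reimplemented here).

```python
import numpy as np

def remove_trend(series, deg=3):
    """Subtract a fitted low-order polynomial trend from a series, as a
    stand-in for removing the EMD residual (trend) term before comparing
    the adjusted structural entropy with the average-weight risk series."""
    series = np.asarray(series, float)
    t = np.arange(series.size)
    trend = np.polyval(np.polyfit(t, series, deg), t)
    return series - trend
```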
in order to further observe the systemic risk evolution of the chinese stock market, the imfs and residual terms obtained from the emd decomposition were combined using the grey correlation degree method. figures 7 and 8 present the trend term, cycle term, and random term of the systemic risk (average weight) and the stock price index. we then focused on the overall trend change and the cyclical fluctuation of systemic risk in china's stock market.
for the long-term tendency, we found that the overall trend of the stock price rose steadily, while the systemic risk has declined slowly throughout the evolution of the chinese stock market since 1997. this means that, although there is still phased systemic risk in the chinese stock market, the overall level of systemic risk is declining as the operating mechanism and related regulations are constantly improving. for the cyclical fluctuation, a rise in systemic risk is usually caused by the joint action of external shocks and internal operations, which is manifested in excessive rises and falls in the stock market. the cyclical characteristics of systemic risk therefore have no direct relationship with the direction of stock market fluctuations. thus, we converted the cyclical fluctuation of the stock market into the absolute difference from the price mean:

cycle_abs_stock_t = |cycle_stock_t − mean(cycle_stock)|. (18)

considering that cycle_abs_stock and cycle_risk are both non-stationary, we calculated their first-order differences. the results of augmented dickey-fuller (adf) tests show that both variables are integrated of order one. therefore, cointegration tests can be applied to the original sequences. the results of johansen trace tests show that there are at least two cointegration relationships between the two variables, which confirms that there is a long-term equilibrium relationship between stock price volatility and systemic risk.
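the cointegration step can be sketched in the engle-granger style: regress one series on the other and examine the residual for stationarity (plain least squares here; the paper's johansen trace test is not reimplemented).

```python
import numpy as np

def cointegration_residuals(y, x):
    """First step of an Engle-Granger cointegration check: regress y on x
    with an intercept and return the residual series, whose stationarity
    would then be assessed with an ADF test."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta
```

for a genuinely cointegrated pair, the residual stays small and mean-reverting even though both input series wander.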
all the coefficients of the equilibrium equation are significant at the 5% significance level, so the volatility of the stock market is positively related to systemic risk from the perspective of long-term equilibrium, which means that, when the stock price deviates from its theoretical equilibrium value, the systemic risk will be at a high level. in figure 9, when the blue line is above 0, the systemic risk is large, while when the blue line is below 0, the systemic risk is small. the red line represents the absolute value of stock price movements, and the red line is clearly ahead of the above-zero parts of the blue line.
entropy 2020, 22, x for peer review 13 of 16

figure 9. cycle evolution of systemic risk.

figure 9 shows the cycle evolution of systemic risk; the lead-lag relationship between systemic risk and stock volatility is dynamic owing to the sliding window processing. from the perspective of the whole cycle evolution, we found that there have been several periods of high systemic risk in the chinese stock market since 1997, as described below. 1997-1998: the stock market was in a shock stage during this period. on the one hand, the chinese stock market was impacted by external factors such as the asian financial crisis; on the other hand, the operating mechanism at that time was not perfect, with frequent insider trading and market manipulation. the systemic risk was at a high level during this period and, therefore, the stock market began a comprehensive reform of its trading mechanism in 1998. although the market fluctuation was not violent from the current perspective, it actually contained many factors causing systemic risk. 2001-2002: the stock market was in a declining bear stage during this period. owing to the poor performance of high-tech companies resulting from the burst of the global internet bubble, and the launch of the policy reducing the state-owned shareholdings of listed companies, the chinese stock market had a big crash. the relevant departments issued a series of favourable measures such as reducing interest rates and trading commissions; however, the imperfection of the market led to a number of "black markets", which brought a high systemic risk. 2007-2008: this period includes both an excessive rise and fall of the market.
the reform of non-tradable shares in 2005, together with a series of positive policies such as the entry of insurance funds and the appreciation of the renminbi (rmb), promoted the rise of the chinese stock market. however, heavy speculation by inexperienced individual investors caused an increasingly serious herding effect, and the systemic risk remained at a high level for a long time. following the global financial crisis brought on by the u.s. subprime crisis, and with the launch of stock index futures, the chinese stock market reversed into a bear stage, and the systemic risk in this stage also remained at a high level. 2011-2012: this stage was another volatile bear market; the fluctuation of the stock price was much smaller than in the previous stage, but the systemic risk still remained at a similar level. even though the chinese economy maintained a high growth rate during this period, the stock market was influenced by the global financial markets, as well as the european debt crisis. the low volatility of the stock market still contained large systemic risks, which were reinforced by the frequent occurrence of black swan events such as the rear-end collision of bullet trains, the clenbuterol scandal, and so on.
2015-2016: the market price rose rapidly in 2015 and the systemic risk was also climbing. however, a high level of risk still appeared in 2016, a stage of rapid and frequent fluctuations. the issuing scale of new stocks increased significantly, driving frequent market shocks such as thousands of shares rising or falling together, the circuit breaker being triggered twice in one day, and so on. thus, overall capital showed a large-scale net outflow, and investor sentiment fluctuated abnormally. to summarize, the systemic risk of the stock market increases significantly in irrational stages of rises, falls, and frequent shocks; however, extremely high systemic risk is more likely in cases of collapse and frequent shocks. complex networks have been widely used in the field of socio-economic analysis. most studies focus on the risk contagion of banks and on international economic or trade exchanges; however, studies of the stock market are limited. in fact, a complex network provides an important tool for the study of the stock market, which is a self-organizing complex system with multi-agent interactions. the average weight of the complex network can be used to measure the aggregation of positive feedback in the market, and thus to measure the overall systemic risk. on the basis of the data of all a-shares in china, this paper constructs a dynamic complex network of stock correlations, and the change in the average weight, as well as the adjusted structural entropy of the network, is used to measure the evolution of systemic risk in china's stock market.
although, owing to the use of a sliding window, the average weight or structural entropy in fact presents the average systemic risk level over the past 90 days, it also reflects the evolution of systemic risk in china's stock market over more than 20 years as a whole. the results show that the systemic risk of china's stock market shows a downward trend on the whole, which is closely related to the continuous improvement of the management system and operating mechanism of the financial market. in addition, there is a long-term equilibrium relationship between the cyclical fluctuation of systemic risk and the excessive fluctuation of the stock market. since 1997, the stages with high systemic risk have appeared with excessive rises, excessive falls, and frequent fluctuations of the stock market. meanwhile, it can also be seen from figure 1 that the global stock market began to fluctuate significantly under the influence of the novel coronavirus pneumonia. the chinese stock market is relatively stable at present, but the systemic risk has been climbing rapidly since the beginning of february. therefore, we must be alert to a further expansion of the systemic risk of the chinese stock market under the double impact of internal and external factors.

references
1. herd behavior and investment
2. herd behavior in financial markets
3. the low-volatility anomaly: market evidence on systematic risk vs. mispricing
4. unobservable systematic risk, economic activity and stock market
5. variance and lower partial moment measures of systematic risk: some analytical and empirical results
6. systematic risk in emerging markets: the d-capm
7. time varying capm betas and banking sector risk
8. determinants of systematic risk
9. an introduction to econophysics: correlations and complexity in finance
10. physical approach to complex systems
11. analyzing and modeling real-world phenomena with complex networks: a survey of applications
12. complex networks in finance
13. networks in economics and finance in networks and beyond: a half century retrospective
14. hierarchical structure in financial markets
15. dynamics of market correlations: taxonomy and portfolio analysis
16. detecting a currency's dominance or dependence using foreign exchange network trees
17. characteristic analysis of complex network for shanghai stock market
18. a network perspective of the stock market
19. systemic risk and hierarchical transitions of financial networks
20. the dynamic evolution of the characteristics of exchange rate risks in countries along "the belt and road" based on network analysis
21. degree stability of a minimum spanning tree of price return and volatility
22. minimum spanning tree filtering of correlations for varying time scales and size of fluctuations
23. fuzzy entropy complexity and multifractal behavior of statistical physics financial dynamics
24. characterizing emerging european stock markets through complex networks: from local properties to self-similar characteristics
25. structure and response in the world trade network
26. time and frequency structure of causal correlation networks in the china bond market
27. econometric measures of connectedness and systemic risk in the finance and insurance sectors
28. cointegration-based financial networks study in chinese stock market
29. does network topology influence systemic risk contribution? a perspective from the industry indices in chinese stock market
30. analysis of a network structure of the foreign currency exchange market
31. topology of correlation-based minimal spanning trees in real and model markets
32. a global network of stock markets and home bias puzzle
33. complex networks in a stock market
34. an approach to hang seng index in hong kong stock market based on network topological statistics
35. explaining what leads up to stock market crashes: a phase transition model and scalability dynamics
36. pathways towards instability in financial networks
37. singular cycles and chaos in a new class of 3d three-zone piecewise affine systems
38. introduction to grey system theory
39. the concept and computation method of grey absolute correlation degree

the authors declare no conflict of interest. the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

key: cord-011824-4ge9i90s authors: andrews, jack l.; foulkes, lucy e.; bone, jessica k.; blakemore, sarah-jayne title: amplified concern for social risk in adolescence: development and validation of a new measure date: 2020-06-23 journal: brain sci doi: 10.3390/brainsci10060397 sha: doc_id: 11824 cord_uid: 4ge9i90s

in adolescence, there is a heightened propensity to take health risks such as smoking, drinking or driving too fast. another facet of risk taking, social risk, has largely been neglected. a social risk can be defined as any decision or action that could lead to an individual being excluded by their peers, such as appearing different to one's friends. in the current study, we developed and validated a measure of concern for health and social risk for use in individuals of 11 years and over (n = 1399). concerns for both health and social risk declined with age, challenging the commonly held stereotype that adolescents are less worried about engaging in risk behaviours, compared with adults.
the rate of decline was steeper for social versus health risk behaviours, suggesting that adolescence is a period of heightened concern for social risk. we validated our measure against measures of rejection sensitivity, depression and risk-taking behaviour. greater concern for social risk was associated with increased sensitivity to rejection and greater depressed mood, and this association was stronger for adolescents compared with adults. we conclude that social risks should be incorporated into future models of risk-taking behaviour, especially when they are pitted against health risks. adolescence is a sensitive period of development, characterised by significant changes in both the biological and social environment. in particular, adolescence is a time of social reorientation, greater susceptibility to peer influence and heightened sensitivity to social rejection [1]. adolescents are also stereotyped as risk takers, which is likely due to evidence that risk behaviours, such as binge drinking, risky driving and smoking, are heightened during this period of life [2,3]. this commonly held perspective, that adolescence is a period of heightened risk taking, conceals a more nuanced reality. social context significantly affects adolescents' engagement in health risk behaviours. for example, evidence from car accidents shows that, for young drivers, the risk of being involved in a fatal car accident increases with the number of passengers in the car [4]. this is reflected in the experimental literature, with one study finding that, when playing alone, adolescents and adults take a similar number of risks on an incentivised computerised driving task (the stoplight task). however, when adolescents played the same driving game in the presence of friends, they took significantly more risks, which was not the case for adults [5]. adolescents are also more likely to smoke, binge drink and take illicit substances with their peers, compared to when alone [6].
however, not all adolescents take risks, and recent work has led to the suggestion that adolescence is in fact a time of increased sensitivity to risk, characterised by wide variation in risk seeking and risk aversion. depressed mood may sit at the extreme end of concern for social risk, when the environmental cues potentially signal that one's social burden is significantly greater than one's social value. however, few studies have directly investigated whether adolescence is a period of heightened concern for social risk, and the extent to which concern for social risk predicts depressive symptomatology. current questionnaire measures of risk-taking behaviour do not uniformly include social risks as a risk-taking domain, and instead focus on the domains of health (e.g., taking illicit substances), financial (e.g., gambling) or legal (e.g., stealing) risk. one adult risk-taking questionnaire, the domain-specific risk-taking questionnaire (dospert), includes a social risk subscale, but this includes items that are not applicable to adolescent populations. for example, the social-risk items in this measure include 'approaching your boss to ask for a raise' and 'taking a job that you enjoy over one that is prestigious but less enjoyable' [22]. another issue with current questionnaire measures of risk taking is the conflation between health and social risk. many health risks carry with them some degree of social risk; e.g., smoking may carry with it both health and social risk considerations. further, it is unclear whether concerns about social risk are independent of concerns for other risk domains, such as health risk behaviours, and thus whether an individual's propensity to take risks is uniform across risk domains. given these issues, we developed and validated a measure of concern for health and social risk, which is suitable for both adolescents and adults.
in this measure, we conceptualised a social risk as any behaviour that marks individuals as being different from their peers-for example, openly endorsing music that friends do not like, or befriending an unpopular peer. we attempted to isolate the social-risk items by including social risks that involve little or no obvious health risk. we conceptualised a health risk as risks to one's physical wellbeing, such as crossing a street on a red light. we included health risk behaviours that have as little conflation with social risk as possible. we had four primary hypotheses. we first hypothesised that concern for social risk would be distinct from health risk concerns. in order to establish this, we developed a measure using exploratory and confirmatory factor analysis (efa; cfa) to assess whether health and social risk domains are distinct constructs. second, and in order to validate our measure, we hypothesised that higher concern for social risk would be associated with greater sensitivity to rejection and lower mood. we hypothesised that this relationship would be stronger for adolescents compared with adults. third, we hypothesised that greater concern for each risk domain would be positively related to risk perception and negatively related to engagement in that risk domain. finally, we hypothesised that concern for social risk would decrease with age from early adolescence to late adulthood, relative to concern for health risk. sample 1 (exploratory factor analysis: efa; adults). participants (n = 500) were recruited from two sources: the university participant pool (n = 177) and prolific, an online participant recruitment and data collection platform (n = 323). participants (295 females, 204 males, one did not disclose gender) were aged 18-60 years (mean = 32.2, sd = 10.72). sample 2 (confirmatory factor analysis: cfa; adults). participants (n = 415) were recruited via prolific. 
participants (284 females, 129 males, two did not disclose gender) were aged 18-77 years (mean = 36.53, sd = 13.10). sample 3 (confirmatory factor analysis: cfa; adolescents). participants (n = 484) were recruited from schools in the greater london area, as part of ongoing research projects in our lab. participants (333 females, 107 males, four did not disclose gender) were aged 11-17 years (mean = 13.54, sd = 1.91). all participants were from the united kingdom and all completed the questionnaires online. ethical approval was obtained from the university ethics board (7199/001; 3453/001). participants were paid at a rate of approximately £10 per hour for their time. we developed a questionnaire measure in order to assess the degree to which adolescents and adults are concerned about engaging in health and social risk behaviours. given that many social risks also incur health risks, we developed items with as little conflation between the two as possible. we developed a list of social-risk items, e.g., "spend time with someone your friends don't like", and health risk items, e.g., "cross a main road when the crossing light is red". a panel of five researchers with expertise in adolescent social development reviewed an initial list of items and provided feedback on the content and suitability for individuals aged 11 and above, with the aim of making sure each item was distinct from the opposing type of risk. following this, a total list of 16 items was included in the scale validation: eight health and eight social (see table 1 ). in the version of the questionnaire given to participants, individuals were asked: "for each statement please rate how worried you would feel doing this behaviour. (if you have never done it, imagine how you would feel)." answers were given on a sliding scale from, "not worried at all (0)" to "very worried (100)". 
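as a concrete sketch of the scoring just described, the snippet below averages the 0-100 "how worried" slider ratings within each subscale. the item names are hypothetical placeholders, not the validated hsrq item set, and the paper's own analyses were run in r; python is used here purely for illustration.

```python
# illustrative hsrq scoring sketch: each item is rated on a 0-100
# "how worried would you feel" slider, and a respondent's subscale
# score is the mean of that subscale's items.
# item names below are hypothetical placeholders, not the validated items.

HEALTH_ITEMS = ["cross_on_red", "ride_without_helmet", "skip_sunscreen"]
SOCIAL_ITEMS = ["befriend_unpopular_peer", "endorse_uncool_music", "disagree_with_friends"]

def subscale_means(responses):
    """Return mean worry (0-100) for the health and social subscales."""
    for item, value in responses.items():
        if not 0 <= value <= 100:
            raise ValueError(f"{item}: slider values must lie in 0-100")
    health = [responses[i] for i in HEALTH_ITEMS]
    social = [responses[i] for i in SOCIAL_ITEMS]
    return {
        "health": sum(health) / len(health),
        "social": sum(social) / len(social),
    }

example = {
    "cross_on_red": 80, "ride_without_helmet": 60, "skip_sunscreen": 40,
    "befriend_unpopular_peer": 30, "endorse_uncool_music": 20, "disagree_with_friends": 10,
}
print(subscale_means(example))  # health 60.0, social 20.0
```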
the questionnaire was administered online and the numbers (0-100) were visible along a slider (see supplementary materials for final questionnaire). all participants completed a number of additional measures in order to assess the construct validity of the hsrq. all participants included in the adult cfa completed each additional measure (n = 415). however, due to time constraints imposed by testing sessions, a subset of the participants in the adolescent cfa completed the rejection sensitivity (c-rsq; n = 207) and depressed mood (mfq; n = 281) measures only. adults: adult rejection sensitivity questionnaire (a-rsq). the a-rsq is a validated measure of sensitivity to actual or perceived rejection [23]. individuals were presented with nine scenarios such as "you approach a close friend to talk after doing or saying something that seriously upset him/her" and are asked to rate their rejection concern and level of acceptance expectancy. scores are computed by reversing the level of acceptance expectancy and multiplying this by the level of rejection concern. scores across the nine items are then averaged to create a total rejection sensitivity score; higher scores indicate higher rejection sensitivity. we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the a-rsq. adolescents: children's rejection sensitivity questionnaire (c-rsq). participants completed the anxious expectations subscale of the children's rejection sensitivity questionnaire, which is a valid measure of rejection sensitivity in children [24]. participants were presented with six scenarios and were asked to report on a scale of 0-6 their expected likelihood of the outcome of the scenario and how nervous they would be given the content of the scenario. their expected likelihood was multiplied by their nervous expectation for each scenario and then a mean score was derived across all items. higher scores relate to greater rejection sensitivity.
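the a-rsq and c-rsq scoring rules just described can be sketched as follows. the response ranges (1-6 for the a-rsq, hence the 7 − x reversal, and 0-6 for the c-rsq) are assumptions for illustration, and the paper's own analyses were run in r.

```python
# sketch of the rejection sensitivity scoring described above.

def arsq_score(concern, expectancy):
    """a-rsq (adults): for each of nine scenarios, acceptance expectancy
    is reverse-scored (assumed 1-6 range, so reversed as 7 - x) and
    multiplied by rejection concern; the nine products are averaged."""
    assert len(concern) == len(expectancy) == 9
    products = [c * (7 - e) for c, e in zip(concern, expectancy)]
    return sum(products) / len(products)

def crsq_score(likelihood, nervousness):
    """c-rsq (adolescents): expected likelihood of the outcome multiplied
    by nervous expectation for each of six scenarios, then averaged."""
    assert len(likelihood) == len(nervousness) == 6
    products = [l * n for l, n in zip(likelihood, nervousness)]
    return sum(products) / len(products)

print(arsq_score([3] * 9, [4] * 9))  # 3 * (7 - 4) = 9.0
print(crsq_score([2] * 6, [3] * 6))  # 2 * 3 = 6.0
```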
we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the c-rsq. adults: patient health questionnaire depression scale (phq-8). the phq-8 is a validated eight-item measure of depression [25]. participants were asked how often over the past two weeks they have experienced eight different symptoms, such as "how often were you bothered by feeling down, depressed, or hopeless?" participants were asked to report on a 4-point scale (0 = "not at all" [...] 3 = "nearly every day"). we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the phq-8. adolescents: mood and feelings questionnaire (mfq). the mfq [26] is a depression screening tool for individuals aged 6 to 17 years old. it is a validated measure of depression in children and young people [27]. individuals were presented with 13 questions, such as "i felt miserable or unhappy" in the past two weeks. responses were scored on a 3-point scale (0 = "not true", 1 = "somewhat true", 2 = "true"). we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the mfq. adults: domain-specific risk-taking (dospert) scale. participants completed the health and social risk subscales of the 30-item dospert scale, a validated risk-taking measure for adults [22]. individuals were asked to report on a 5-point scale their likelihood of engaging in each activity or behaviour such as "speaking your mind about an unpopular issue in a meeting at work" (1 = "very unlikely" to 5 = "very likely") and their assessment of how risky each situation or behaviour was (1 = "not at all risky" to 5 = "extremely risky").
we hypothesised that higher scores on the social subscale of the hsrq would be negatively associated with the social risk engagement subscale of the dospert and positively associated with the social risk perception subscale of the dospert, with the same being true for the health risk subscales. adolescents. note that adolescents did not complete a social risk-taking measure because the items from the dospert are not appropriate for this age group (e.g., "approaching your boss to ask for a raise") and there is no existing social risk-taking measure for adolescents. all data were analysed primarily using the lavaan (version 0.6-5), psych (version 1.9.12.3) and semtools (version 0.5-2) packages in r (version 3.6.2; r core team, 2013). we first conducted an exploratory factor analysis (efa) using oblique (oblimin) rotation on the initial 16 items relating to health and social risks (eight health, eight social) on a sample of 500 adults. we determined the suitability of our sample size and data for efa based on the kaiser-meyer-olkin (kmo) index (>0.70) and bartlett's test (<0.05) [28]. we determined the number of factors to retain based on examination of the scree plot, retention of factors with eigenvalues of 1 or greater and factors with at least three items. items with factor loadings of <0.4 were removed. following factor and item reduction based on the above criteria, we subjected the same data to a confirmatory factor analysis (cfa) to assess the strength of the proposed factor structure. we then used cfa to assess the strength of this factor structure in two new samples: one adult group (aged 18-77; n = 415) and one adolescent group (aged 11-17, n = 485). in line with the recommendations outlined by [29], our primary measure of model fit was root mean squared error of approximation (rmsea). an rmsea of around <0.08 indicates reasonable fit [29].
we also assessed the model fit with the standardised root mean square residual (srmr; <0.08 reasonable fit), comparative fit index (cfi; >0.9 reasonable fit), and the tucker-lewis index (tli; >0.9 reasonable fit). we computed measures of internal consistency using cronbach's alpha and mcdonald's omega. we further tested the fit of each two-factor cfa using aic, by comparing a one-factor solution (where all items are loaded onto one higher order risk factor) with the two-factor solution (health and social risk). a lower aic represents a better fit to the data. to assess convergent and divergent validity, we assessed the relationship between the new hsrq, rejection sensitivity [23, 24] and depressed mood [25, 27] across both cfa samples using pearson r correlations. we then compared the strength of the relationship between the adolescent and adult sample with a z statistic. one additional risk-taking questionnaire, the dospert [22], was used to relate the hsrq to risk perception of, and engagement in, health and social risks, in the adult sample only. in order to establish the test-retest reliability of the hsrq, we invited 100 participants from the adult cfa sample to complete the questionnaire a second time 11-12 days after the first completion. we used pearson r correlations to establish the relationship between these individuals' scores at time point 1 and 2. using all the data collected (n = 1399), we computed a mean score of the validated health and social subscales. we determined the relationship between age and the two subscales of the hsrq using multiple linear regression. we included age, gender and risk domain (health, social) in the model, as well as an age*risk domain interaction, to predict risk concern. we used aic to compare between linear, quadratic and cubic models, with a lower aic representing a better fit.
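the text does not name the z statistic used to compare the strength of correlations across the two samples; a standard choice, assumed here, is fisher's r-to-z test for two independent correlations. plugging in the correlations reported later in the results reproduces the reported z values.

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for the difference between two independent
    correlations r1 (sample size n1) and r2 (sample size n2)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)       # Fisher transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))   # SE of the difference
    return (z1 - z2) / se

# rejection sensitivity correlations from the results: adolescents
# r = .52 (n = 207) vs adults r = .22 (n = 415) -> z ≈ 4.12
print(round(compare_correlations(0.52, 207, 0.22, 415), 2))  # 4.12
```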
analyses showed that the sample size (n = 500) was suitable for conducting factor analysis (kmo = 0.88, bartlett's test <0.001). factor loadings of each item are presented in table 1. three factors showed eigenvalues above our threshold of 1: 5.92, 2.53, 1.14, respectively. a fourth factor with an eigenvalue of 0.88 was removed. the third factor (eigenvalue 1.14) only consisted of two items and so was removed. this resulted in a two-factor, 11-item solution. the two factors contained items pertaining to health risks (5 items) and social risks (6 items). we tested the strength of this two-factor solution on the same sample with cfa. the two-factor solution fit the data well (rmsea = 0.07 (0.06-0.08), srmr = 0.05, cfi = 0.95, and tli = 0.93). we conducted a cfa on a new sample of 415 adults. the sample size was deemed appropriate for testing a model comprising 24 parameters (11 factor loadings, 11 error variances and 2 factor correlations). the model approximates to a 17:1 subject to parameter ratio, above the recommended 10:1 [30]. the two-factor structure adequately fit the data according to our primary fit index; rmsea = 0.08 (0.07-0.09). other model fit indices were good (srmr = 0.06) or fell just below the suggested cut-off (cfi = 0.87 and tli = 0.83). factor loadings of each item (see table 2) were medium to high (0.42-0.76) except for one item (loading of 0.28). although this item loading was low, we decided to retain it in order to maintain consistency with the factor structure in the adolescent sample and given its good loading in the adult efa and the adolescent cfa sample. there was a positive correlation between the health and social subscale of the hsrq (r(482) = 0.21, p < 0.001). measures of internal consistency were good (see table 3).
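internal consistency was indexed with cronbach's alpha (alongside mcdonald's omega). as a minimal illustration of the alpha computation (the paper used the psych package in r; this sketch uses sample variances):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists (one inner list
    per item, aligned across respondents):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# perfectly consistent (identical) items yield alpha ≈ 1
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```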
an additional cfa to assess a one-factor structure did not achieve good model fit (rmsea = 0.12 (0.11-0.13), srmr = 0.10, cfi = 0.72, and tli = 0.70), indicating that concern about risk taking is not a unitary construct and is instead domain specific (health, social). the aic of the two-factor model (42983.13) was lower than the aic of the one-factor model (43126.50), suggesting that the two-factor model provided a better fit. to measure the test-retest reliability of the hsrq, 100 adult participants were invited to complete the questionnaire a second time, 11-12 days later; 68 participants responded. pearson r correlation between the two time points indicated good test-retest reliability (social risk subscale: r(66) = 0.62, p < 0.001; health subscale: r(66) = 0.74, p < 0.001). to assess convergent and divergent validity, participants also completed measures of rejection sensitivity (a-rsq), depressed mood (phq-8) and risk taking (dospert). association with rejection sensitivity. the social risk subscale positively correlated with rejection sensitivity (r(413) = 0.22, p < 0.001) such that individuals who scored high on concern for social risk also scored high in rejection sensitivity (see figure 1, panel b) . the health risk subscale did not significantly correlate with rejection sensitivity (r(413) = −0.00, p = 0.99). association with depressed mood. the social risk subscale positively correlated with depressed mood (r (413) = 0.13, p = 0.009) such that individuals who scored high on concern for social risk also scored high in depressed mood (see figure 1 , panel d). the health risk subscale did not significantly correlate with depressed mood (r(413) = −0.05, p = 0.27). association with risk taking. 
the social risk subscale of the hsrq negatively correlated with the likelihood of engaging in social risks subscale of the dospert (r(413) = −0.32, p < 0.001) and was positively correlated with the perception of social risks subscale of the dospert (r(413) = 0.29, p < 0.001). in other words, individuals who scored high on concern for social risk on the hsrq were less likely to engage in social risk behaviours and more likely to rate social risk behaviours as risky. the health risk subscale of the hsrq was negatively correlated with the likelihood of engaging in health risks subscale of the dospert (r(413) = −0.18, p < 0.001) and was positively correlated with the perception of health risks subscale of the dospert (r(413) = 0.29, p < 0.001). thus, individuals who scored high on concern for health risks were less likely to engage in health risk behaviours and more likely to rate health risk behaviours as risky. we conducted a cfa on a new sample of 484 adolescents. the sample size was deemed appropriate for testing a model comprising 24 parameters (11 factor loadings, 11 error variances and 2 factor correlations). the model approximates to a 20:1 subject to parameter ratio, above the recommended 10:1 [30]. the two-factor structure fit the data well (rmsea = 0.07 (0.06-0.08), srmr = 0.05, cfi = 0.95, and tli = 0.93). factor loadings of each item were medium to high (0.54-0.79) (see table 2). there was a positive correlation between the health and social subscale of the hsrq (r(482) = 0.21, p < 0.001). measures of internal consistency were good (see table 3). an additional cfa to assess a one-factor structure did not achieve good model fit (rmsea = 0.18 (0.17-0.19), srmr = 0.16, cfi = 0.60, and tli = 0.50), indicating that concern about risk taking is not a unitary construct across domains, and is instead domain specific (health, social), as in the adult sample.
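the one- versus two-factor comparisons rely on aic. for reference, a generic sketch of the criterion (aic = 2k − 2·log-likelihood, lower is better; the concrete aic values in the text come from the fitted lavaan models):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*lnL (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def preferred_model(aic_by_model):
    """Return the name of the model with the lowest AIC."""
    return min(aic_by_model, key=aic_by_model.get)

# the adolescent cfa comparison reported in the text:
print(preferred_model({"one-factor": 50280.89, "two-factor": 49696.51}))  # two-factor
```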
the aic of the two-factor model (49696.51) was lower than the aic of the one-factor model (50280.89), suggesting that the two-factor model provides a better fit. validation. to assess convergent and divergent validity, a subset of the adolescent participants completed measures of rejection sensitivity (c-rsq; n = 207) and depressed mood (mfq; n = 281). association with rejection sensitivity. the social risk subscale positively correlated with rejection sensitivity (r(205) = 0.52, p < 0.001) such that individuals who scored high on concern for social risk also scored high in rejection sensitivity (see figure 1, panel a). the health risk subscale did not significantly correlate with rejection sensitivity (r(205) = −0.01, p = 0.83). association with depressed mood. the social risk subscale positively correlated with depressed mood (r(279) = 0.31, p < 0.001) such that individuals who scored high on concern for social risk also scored high in depressed mood (see figure 1, panel c). the health risk subscale did not significantly correlate with depressed mood (r(279) = −0.11, p = 0.06). we compared the strength of the correlations between concern for social risk, rejection sensitivity and depression between the adolescent cfa and adult cfa sample. the correlations between concern for social risk and both rejection sensitivity and depression were stronger for adolescents than for adults (rejection sensitivity: z = 4.12, p < 0.001; depression: z = 2.45, p = 0.007). we conducted a multiple regression to assess the relationship between the hsrq and age, using data collected across all participants (n = 1399; aged 11-77). the outcome was risk concern (i.e., the mean score of the health and social subscales) and the predictor variables were age, gender, risk domain (health, social), and an age by risk domain interaction. the overall regression model was significant (r² = 0.14, f(3,2793) = 113.2, p < 0.001; see table 4 for estimates).
there was a significant main effect of age (β = −0.15; 95% ci: −0.23 to −0.07; p < 0.001) and risk domain (β = −11.69; 95% ci: −15.18 to −8.19; p < 0.001) and a significant interaction between age and risk domain (β = −0.16; 95% ci: −0.27 to −0.04; p < 0.001). there was no main effect of gender (β = 1.07; 95% ci: −0.54 to 2.69; p = 0.19). to explore the interaction between age and risk domain, we plotted the relationship (figure 2) and used simple slope analyses. the slope for both risks was significant (social: β = −0.31, p < 0.001; health: β = −0.15, p < 0.001). there was a significant difference between the gradient of these slopes (t(2794) = 2.7, p = 0.008), driven by a steeper decline across age in concern for social risk compared to concern for health risk. this linear model (aic: 25125.34) outperformed a quadratic model (aic: 25142.67) and cubic model (aic: 25156.83).
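the simple slopes follow directly from the regression coefficients: assuming dummy coding of risk domain (health = 0, social = 1; the coding is not stated in the text), the age slope for the health domain is β_age and for the social domain is β_age + β_interaction, which reproduces the reported values.

```python
def simple_slopes(beta_age, beta_interaction):
    """Age slope within each risk domain, assuming domain is dummy
    coded as health = 0, social = 1 (a coding assumption)."""
    return {
        "health": beta_age,                     # domain term drops out at 0
        "social": beta_age + beta_interaction,  # interaction shifts the slope
    }

# coefficients reported in the text: age beta = -0.15, age x domain beta = -0.16
slopes = simple_slopes(-0.15, -0.16)
print(slopes)  # health slope -0.15, social slope ≈ -0.31
```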
figure 1. relationship between concern for social risk and rejection sensitivity for adolescents (r(205) = 0.52, p < 0.001; panel a) and adults (r(413) = 0.22, p < 0.001; panel b), and relationship between risk concern and depression for adolescents (r(279) = 0.31, p < 0.001; panel c) and adults (r(413) = 0.13, p = 0.009; panel d). table 4 (interaction row): age*risk domain: β = −0.16, se = 0.06, t = −2.7, p = 0.008. note: * = an interaction term; β = beta coefficient; se = standard error; t = t statistic (the β divided by the se); p = significance. figure 2. relationship between age and concern for health risk (slope: β = −0.15, p < 0.001) and social risk (slope: β = −0.31, p < 0.001); there was a significant difference between the gradient of these slopes (t(2794) = 2.7, p = 0.008), driven by a steeper decline across age in concern for social risk than for concern for health risk. in this study, we developed a questionnaire measure of concern for health and social risk behaviours for use in adolescents and adults. our results showed that concerns related to engaging in social risks are distinct from concerns related to engaging in health risks. overall, we found that people reported greater concern for health risk compared with social risk.
we investigated age differences in concern for health and social risk, and found that concern for both health and social risk decreased with age, from adolescence through adulthood. however, concern for social risk decreased to a greater extent than concern for health risk. this suggests that, relative to adults, adolescents are more concerned about social risks than health risks. this heightened concern for social risk in adolescence has implications for understanding why adolescents engage in health and legal risks. one hypothesis is that adolescents are motivated to avoid what they consider to be a greater immediate risk, the social risk of being rejected or excluded by their peers [13]. avoiding social risks can be considered an important goal during adolescence, a period when social status and friendships provide psychological and physical health benefits [14, 15]. the association between our new measure, the health and social risk questionnaire (hsrq), rejection sensitivity and depression indicates the potential relevance of social risk for understanding adolescent behaviour and mental health. individuals who report greater concern for social risk were more likely to report greater sensitivity to rejection (adolescents: c-rsq; adults: a-rsq). social rejection is an unpleasant feeling and therefore it makes sense that individuals with a heightened degree of sensitivity to the negative effects of social rejection would be more concerned with engaging in situations that could lead to, or indicate a possibility of, social rejection. within the adult sample, individuals who scored high on concern for social risk were less likely to engage in socially risky behaviours and were more likely to rate social risk behaviours as risky. this finding indicates that higher concern for social risk is related to an increase in rejection sensitivity and an increase in socially risk-averse behaviour.
concern for social risk was also related to depressive symptomatology (adolescents: mfq; adults: phq-8), such that individuals with greater concern for social risk were more likely to report higher levels of depressive symptoms. this finding supports the predictions made by the social risk hypothesis of depression [20]. this hypothesis proposes that, when cues in the environment signal that one's social burden is significantly greater than their social value, depression manifests as an adaptive mechanism to remove the individual from social situations which might confer further risk of social rejection. we showed that concern for social risk was more strongly associated with rejection sensitivity in adolescents (11-17 years) than in adults (18+ years). during adolescence, individuals are particularly sensitive to social evaluative concerns [11], and peer perceptions influence adolescents' social and personal worth [31]. adolescents are also hypersensitive to social rejection relative to adults [10]. this fits with our finding that concerns for social risk are more tightly linked to rejection sensitivity among adolescents, relative to adults. in addition, and as previously discussed, adolescents with good quality friendships and higher social status have more favourable psychological and physical outcomes later in life. thus, it is potentially beneficial and adaptive for adolescents to try to avoid the risk of social rejection [14, 15]. additionally, the association between concern for social risk and depressive symptoms was stronger in adolescents than adults. this suggests that the social environment may be particularly salient for mental health during this developmental period [18, 32]. this is important because the incidence of many mental health problems, including depression, increases significantly during adolescence [33]. our findings have a number of implications.
at the theoretical level, the way in which risk behaviours have been traditionally conceptualised has focused heavily on the health, financial, legal and recreational domains. our results suggest that social risk should be incorporated into our understanding of risk-taking behaviour. for some individuals, taking a social risk, and placing themselves at risk of social rejection, is a real and 'risky' decision. at the practical level, interventions aimed at reducing health and legal risk behaviours should recognise the importance of concerns surrounding social risks. one promising approach is to focus on peer-led interventions, which work to influence social norms surrounding unhealthy or illegal behaviours [34] . this approach encourages healthy behaviours by reducing the social risk of being ostracised by peers. interventions using a peer-led approach have shown positive results for unhealthy behaviours such as bullying [35] and smoking [36] . the hsrq is a valid measure for individuals aged 11+. however, this measure has not been validated for children below 11 and very little is known about social risk in this younger age group. future work should explore the extent to which the current items and factor structure are valid for use in children below this age. additionally, we did not test the relationship between our measure of concern for social risk and engagement in real-life social risks in the adolescent sample (11-17 years) because of a lack of appropriately validated scales for this age group. this is a limitation when making comparisons with the adult sample (18+) and future work should explore the relationship between our concern for social risk measure and engagement in real-life social risks among adolescents. further, our sample was collected from the united kingdom and therefore this measure should be cross-culturally validated for use in other socio-cultural environments. 
in addition, the hsrq is based on self-report, and an important line of subsequent work is to relate responses on this questionnaire measure to a task-based assessment of social risk. finally, the present study was not designed to investigate the degree to which individuals weigh up the health vs. social consequences of a given 'risky' decision. therefore, an important outstanding question is the degree to which individual variation in concern for health and social risk impacts involvement in 'risky' behaviours, especially when individuals are presented with risks that often carry both social and health consequences, such as smoking or dangerous driving. in the current study, we developed a self-report measure of concern for health and social risk for use with adolescents and adults. we found that heightened concern for social risk was related to increased sensitivity to rejection and depression, with this relationship being stronger for adolescents compared to adults. this supports the body of evidence that adolescence is a period of heightened sensitivity to the social environment. in addition, both concern for health and social risk decreased with age, but the rate of decrease was steeper for social versus health risk, suggesting that adolescence is a period of amplified concern for social risk. practically, these findings have potential implications for policy. within an educational context, an understanding of social risk may offer insight into why adolescents are more or less motivated to engage with school work. for example, if individuals who try hard at school are perceived as unpopular or uncool, then being openly motivated in the classroom could be a social risk [37] . within a legal context, concerns surrounding social risk may be a factor in adolescents' decisions to engage in criminal behaviour, particularly in peer contexts when opting out of a group behaviour could risk being excluded from the group. 
together, these findings highlight the importance of social risk in adolescent behaviour and suggest that interventions to reduce risk-taking behaviours in this age group should consider the role of social risk. the following are available online at http://www.mdpi.com/2076-3425/10/6/397/s1, table s1: health and social risk questionnaire (hsrq).

references:
- is adolescence a sensitive period for sociocultural processing?
- the relationship between early age of onset of initial substance use and engaging in multiple health risk behaviors among young adolescents
- clustering of health-compromising behavior and delinquency in adolescents and adults in the dutch population
- carrying passengers as a risk factor for crashes fatal to 16- and 17-year-old drivers
- peer influence on risk taking, risk preference, and risky decision making in adolescence and adulthood: an experimental study
- is it all in the reward? peers influence risk-taking behaviour in young adulthood
- neural correlates of expected risks and returns in risky choice across development
- age-related differences in social influence on risk perception depend on the direction of influence
- social influence on risk perception during adolescence
- social brain development and the affective consequences of ostracism in adolescence
- the teenage brain: sensitivity to social evaluation
- risk-taking and social exclusion in adolescence: neural mechanisms underlying peer influences on decision-making
- avoiding social risk in adolescence
- peer status in school and adult disease risk: a 30-year follow-up study of disease-specific morbidity in a stockholm cohort
- goodyer, i.; the nspn consortium. adolescent friendships predict later resilient functioning across psychosocial domains in a healthy community cohort
- peripheral ingroup membership status and public negativity toward outgroups
- how intragroup dynamics affect behavior in intergroup conflict: the role of group norms, prototypicality, and need to belong
- the role of peer rejection in adolescent depression
- peer relationships in adolescence
- the social risk hypothesis of depressed mood: evolutionary, psychosocial, and neurobiological perspectives
- darwinian models of depression: a review of evolutionary accounts of mood and mood disorders
- a domain-specific risk-taking (dospert) scale for adult populations
- rejection sensitivity and disruption of attention by social threat cues
- rejection sensitivity and children's interpersonal difficulties
- the phq-8 as a measure of current depression in the general population
- development of a short questionnaire for use in epidemiological studies of depression in children and adolescents: factor composition and structure across development
- criterion validity of the mood and feelings questionnaire for depressive episodes in clinic and non-clinic subjects
- statistical methods for health care research
- conceptions and perceived influence of peer groups: interviews with preadolescents and adolescents
- social support and mental health in late adolescence are correlated for genetic, as well as environmental
- lifetime prevalence and age-of-onset distributions of mental disorders in the world health organization's world mental health survey initiative
- peer influence in adolescence: public-health implications for covid-19
- changing climates of conflict: a social network experiment in 56 schools
- an informal school-based peer-led intervention for smoking prevention in adolescence (assist): a cluster randomised trial
- role theory of schools and adolescent health

this article is an open access article distributed under the terms and conditions of the creative commons
attribution (cc by) license. the authors declare no conflicts of interest.

key: cord-017883-6a4fkd5v authors: dutta, ankhi; flores, ricardo title: infection prevention in pediatric oncology and hematopoietic stem cell transplant recipients date: 2018-07-16 journal: healthcare-associated infections in children doi: 10.1007/978-3-319-98122-2_16 sha: doc_id: 17883 cord_uid: cord-017883-6a4fkd5v

pediatric patients with malignancies and transplant recipients are at high risk of infection-related morbidity and mortality. children at the highest risk for infections are those with acute myeloid leukemia (aml), relapsed acute lymphoblastic leukemia (all), and hematopoietic stem cell transplant recipients (hsct). these patients are at high risk for life-threatening bacterial, viral, and fungal infections which are associated with prolonged hospital stay, poor quality of life, and increased healthcare cost and death. recognition of risk factors which predispose them to infections, early identification of signs and symptoms of infections, prompt diagnosis, and empiric/definitive treatment are the mainstay in reducing infection-related morbidity and mortality. infection control and prevention programs also play a crucial role in preventing hospital-acquired infections in these immunosuppressed hosts. there are various factors which contribute to the increased susceptibility to infections in pediatric hematology/oncology (pho) and hsct patients, most prominent of them being disruption of cutaneous and mucosal barriers (oral, gastrointestinal, etc.), microbial gastrointestinal translocation, defects in cell-mediated immunity, and insufficient quantities and inadequate function of phagocytes. goals of infection control and prevention in this population are based on mitigating the risk inherent in the underlying malignancy and associated treatments (i.e., chemotherapy, radiation).
this chapter discusses infection control and prevention measures specifically in patients with hematological malignancies as well as hsct recipients. hand hygiene and standard precautions during the care of pho and hsct patients are key components in reducing the risk of infections. additional isolation precautions may also be undertaken depending on the pathogen isolated and/or symptoms that the patient is experiencing (e.g., contact precautions would be appropriate in patients experiencing diarrhea). further information on general infection prevention measures can be found in chap. 1. minimizing injury to mucosal surfaces and decreasing heavy colonization of the skin reduce the likelihood of microbial invasion through these sites. thus, the importance of meticulous skin care and daily inspection in pho and hsct patients is paramount and provides opportunities to identify areas of inflammation or breakdown early. skin inspection should be done routinely, with special attention to high-risk areas like intravascular catheter insertion sites and the perineum. rectal thermometers, digital rectal examinations, and suppositories should be avoided to prevent mucosal breakdown. as part of an effort to reduce colonization of cutaneous surfaces, daily chlorhexidine baths have been shown to reduce hais and transmission of multidrug-resistant organisms (mdro) in oncology patients [1, 2]. chlorhexidine gluconate (chg) is a cationic bisbiguanide that serves as a topical antiseptic. chg binds to negatively charged bacterial cell wall proteins, altering the bacterial cell wall equilibrium, and helps in reducing bacterial colonization of the skin [1]. education of patients, families, and staff on the importance of these practices is key to compliance with this preventative strategy and should be made a priority.
many experts recommend a complete periodontal examination be performed prior to initiation of chemotherapy with reevaluations throughout the treatment course and after completion [3, 4] . oral mucositis, which can be considered an acute inflammation and/or ulceration of the oral/oropharyngeal mucus membranes, is a common adverse effect of chemotherapeutic agents. it can cause oral pain/discomfort as well as difficulties in eating, swallowing, and speech. mucositis is most commonly caused by chemotherapeutic agents which prevent dna synthesis such as methotrexate, 5-fluorouracil, and cytarabine, particularly in hsct recipients. oral rinses with normal saline or chg-containing products are recommended 4-6 times per day to prevent oral mucositis [2, 3] . patients with painful mucositis might not comply with oral care regimens, however, putting them at increased risk for infections from oral flora such as bacteremia due to viridans streptococci. mouth rinses containing alcohol should be avoided because they can aggravate mucositis. neutropenic patients should also be instructed to brush their teeth carefully in order to prevent gingival injury [3] . a regular soft toothbrush or an electric brush can be used to minimize trauma [3] . any elective dental procedure should be ideally performed prior to starting chemotherapy and after discussion with the primary medical team. the absolute neutrophil count, platelet count, and stage of treatment should be considered before performing any dental procedures in this vulnerable population [2, 3] . the presence of central venous catheters (cvc) in this population puts them at risk for central line-associated bloodstream infection (clabsi) and its related complications. clabsi is the most commonly reported hai in most pediatric series. 
among all the pediatric hai reported to the national healthcare safety network (nhsn), 15% were from oncology units; streptococcus viridans (15%) and klebsiella pneumoniae/oxytoca (12%) were the two most common pathogens in this study [5]. in the nhsn report, antibiotic resistance was noted to be high in oncology units, including ampicillin and/or vancomycin resistance for enterococcus faecium and fluoroquinolone resistance for escherichia coli [5]. although less than 4% of enterobacteriaceae were reported to have carbapenem resistance, the emergence of such organisms in this population is of significant concern [6]. among candida infections in this population, fluconazole resistance among non-c. albicans and non-c. parapsilosis isolates was up to 41%, whereas fluconazole resistance in c. albicans and c. parapsilosis was <4% [5]. mucosal barrier injury (mbi)-associated laboratory-confirmed bloodstream infections (mbi-lcbi) have gained attention in recent years [7, 8]. these are clabsis related primarily to mucosal barrier injury (i.e., mucositis) and not due to the direct presence of the cvc per se. in the nhsn definition, a positive blood culture would qualify as an mbi-lcbi if it results from one or more groups of selected commensal organisms of the oral cavity or gastrointestinal tract and occurred in the presence of signs and symptoms consistent with mucosal barrier injury (mbi) in pho or hsct patients [7]. eligible organisms for mbi-lcbi include candida species, enterococcus, enterobacteriaceae, viridans group streptococci, other streptococcus species, and anaerobes [7]. specific guidelines for central line insertion and maintenance bundles have been proposed by the centers for disease control and prevention (cdc) and the infectious diseases society of america (idsa) to reduce clabsi rates and healthcare costs [9, 10].
several studies have demonstrated that a multifaceted approach reduces clabsi rates in this population [11, 12] and includes standardizing cvc insertion practices and maintenance bundles, tracking cvc infections using standardized definitions, and using dedicated nursing staff or "cvc champions" specifically trained in cvc maintenance and tracking in conjunction with other infection control methods (including oral and hand hygiene, optimizing nurse/patient ratio, etc.). clabsi is discussed in greater detail in chap. 6. the american society for blood and marrow transplantation recommends a low microbial diet for hsct recipients [13]. there is little evidence, however, to suggest that this helps in pho patients. routine safety in handling and preparing food should be practiced by patients and parents. in general, eating unpasteurized milk/cheese, undercooked meat, and raw fruits and vegetables is discouraged during periods of neutropenia to reduce the incidence of infection. the need to minimize risk of infection, however, should be balanced with the nutritional needs and quality of life of the patient [2, 13]. pets can be a great source of companionship and comfort to children; however, there are several diseases that can be transmitted by pets to these immunosuppressed hosts [14-16]. certain animals like reptiles, birds, rodents, or other exotic animals that cannot be immunized and could carry unusual human pathogens should not be kept as pets in households with pho or hsct patients. immunosuppressed patients should avoid petting zoos due to the risk of diseases secondary to enteric pathogens (such as salmonella or campylobacter) [13-16]. dogs and cats, preferably more than 1 year old, are generally considered safe for pho and hsct patients. they should be routinely evaluated by veterinarians for diseases and their immunizations kept up-to-date.
extreme care should be taken to maintain hand hygiene during and after handling the pets [13] [14] [15] [16] . further information regarding pet therapy is available in chap. 4. studies performed in adult oncology patients have consistently shown the benefit of using prophylactic antibiotics in reducing the incidence of bacterial infections [17] . levofloxacin prophylaxis in adults has been shown to reduce the incidence of fever, bacterial infection, hospitalization rates, and all-cause mortality [18, 19] . based upon such data in adults, the idsa guidelines for the use of antimicrobial agents in neutropenic patients with cancer state that fluoroquinolone prophylaxis should be considered for high-risk patients with prolonged severe neutropenia [20] . pediatric studies on antibiotic prophylaxis are limited. a pediatric pilot study on the use of ciprofloxacin prophylaxis for pediatric patients receiving delayed intensification therapy for acute lymphoblastic leukemia (all) showed a reduction in hospitalization, intensive care admission, and bacteremia when compared to controls [21] . in another study, levofloxacin prophylaxis in patients with all reduced the odds of febrile neutropenia, possible bacterial infection, and confirmed bloodstream infection by ≥70%. it also reduced the use of other broad-spectrum antibiotics and the incidence of c. difficile infections [22] . in other studies, however, ciprofloxacin prophylaxis did not decrease the incidence of overall bacteremia or duration of fever or mortality in pediatric acute myelogenous leukemia (aml) patients [23] . furthermore, increasing quinolone resistance among gram-negative organisms is a concern recently observed in the nhsn database of pediatric oncology patients with clabsi [5] . in addition, the use of antimicrobial prophylaxis in pho could increase the possibility of developing other mdros, invasive fungal infections, or drug-related toxicities. 
though some authors suggest that antibiotic prophylaxis should be considered in children undergoing induction chemotherapy for all, there is currently insufficient data to inform definitive guidelines for antibiotic prophylaxis to prevent bacterial infections in pediatric oncology patients [19-21]. notably, an open-label randomized clinical trial of levofloxacin prophylaxis vs. no prophylaxis was recently conducted in children with aml, relapsed all, and hsct recipients. among patients with aml and relapsed all, prophylaxis was associated with a reduction in rates of bacteremia; there was a numeric reduction in bacteremia in the hsct recipients, but this did not achieve statistical significance. it is unclear at this time how these new findings will influence practice and future guidelines [24]. infections with common respiratory and gastrointestinal viruses can result in significant morbidity and mortality in pho and hsct patients. the most common respiratory viruses encountered include rhinovirus, coronavirus, adenovirus, rsv, parainfluenza, human metapneumovirus, and influenza. common gastrointestinal viruses affecting both healthy and immunocompromised children include norovirus, rotavirus, enteric adenoviruses, and enteroviruses, among others. infection prevention strategies should include education of the patient and the family about hand hygiene and prevention techniques, avoidance of ill visitors, disease surveillance in the community and hospital, vaccination against influenza, and prompt identification, testing, and treatment (if possible) of any respiratory viral illness. implementation of routine infection control and prevention policies on oncology wards should reduce transmission of common respiratory and gastrointestinal viruses. all visitors should be screened for signs and symptoms of acute viral illness, and those who are ill should be restricted from visiting the unit or having contact with any immunocompromised hosts.
chapter 4 outlines infection control guidance for hospital visitors in greater detail. immunization of healthcare workers and household contacts needs special consideration in settings with pho and hsct patients. given the immunosuppressed status of children with malignancy and/or hsct, immunization of those closest to them at home and those caring for them in the hospital is critically important in preventing infections. live attenuated vaccines carry a theoretical risk of being transmitted to an immunocompromised host. live oral polio vaccine, which is no longer administered in the united states, is absolutely contraindicated for those caring for this high-risk population. however, data suggest that measles, mumps, and rubella (mmr), varicella zoster, and herpes zoster vaccines can be safely provided to healthcare workers and household contacts [25]. if healthcare personnel develop a rash within the first 42 days following receipt of the varicella vaccine that cannot be covered, they should avoid any contact with immunocompromised patients until all lesions have crusted, to avoid the potential risk of transmitting vaccine-strain varicella to patients [25]. infants living in households with persons who are immunocompromised, including pho and hsct patients, may be safely immunized against rotavirus; it is recommended, however, that immunocompromised persons avoid contact with the infant's diapers/stool for 4 weeks following vaccination to minimize the risk of acquiring vaccine-strain rotavirus infection [26]. an inactivated influenza vaccine is preferred for personnel taking care of immunocompromised children, as opposed to the live attenuated influenza vaccine [25]. vaccination against other non-viral pathogens (such as pneumococcus or pertussis) by family members is another important method to minimize the risk of serious infection in pho patients. hospital environments are designed to minimize the potential for fungal disease in the highest-risk patients.
high-efficiency particulate air (hepa) filters have been shown to reduce nosocomial infection in hsct patients, and the cdc recommends hepa filters in hsct recipients' rooms. the rooms should also have directed airflow and positive air pressure and be properly ventilated (≥12 air changes per hour) [2]. avoidance of carpets and upholstery is also recommended. since outbreaks secondary to aspergillus have been reported during hospital renovation or construction, appropriate containment should be in place, and strict precautions should be taken to prevent exposure to patients during such periods [2]. infection control and prevention departments should be involved in risk assessment, planning, and approval of all construction or renovation projects in healthcare facilities, including inpatient units, clinics, and infusion centers caring for these patients [27]. cytotoxic chemotherapies and radiation therapy used in the treatment of malignancies are myelosuppressive and result in variable duration and severity of neutropenia. in addition, certain malignancies that originate from bone marrow precursors (i.e., leukemia) or metastasize to the bone marrow (e.g., lymphoma, neuroblastoma, and sarcomas) can result in a decreased number of normal blood cell precursors and consequent neutropenia. hence, pediatric cancer and hsct patients are frequently immunosuppressed and at risk for a wide range of pathogens. febrile neutropenia is a common condition in the pho/hsct population. with regard to this entity, fever is defined as a single temperature >38.3 °c (101 °f) or a temperature ≥38.0 °c (100.4 °f) on two occasions 1 hour apart. neutropenia is classified as mild (absolute neutrophil count [anc] >500-1000/mm³), moderate (anc ≥200-500/mm³), or severe (anc <200/mm³). febrile neutropenia (also known as fever and neutropenia) is the combination of these two events in the patient with malignancy or hsct and is a common complication of cancer treatment.
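as an illustrative sketch only (not part of the chapter), the fever and neutropenia definitions above can be written out in code; the function and argument names are ours, and the thresholds are exactly those quoted, assuming temperature readings are taken the required hour apart:

```python
# hedged sketch of the definitions quoted above; names are assumptions.
def is_fever(temps_c):
    """fever: a single temperature >38.3 °c, or >=38.0 °c on two occasions
    (assumed here to have been taken 1 hour apart, as the definition requires)."""
    return any(t > 38.3 for t in temps_c) or sum(t >= 38.0 for t in temps_c) >= 2

def neutropenia_grade(anc_per_mm3):
    """grade neutropenia from the anc: severe <200, moderate 200-500,
    mild >500-1000 cells/mm3; otherwise not neutropenic."""
    if anc_per_mm3 < 200:
        return "severe"
    if anc_per_mm3 <= 500:
        return "moderate"
    if anc_per_mm3 <= 1000:
        return "mild"
    return "none"

def febrile_neutropenia(temps_c, anc_per_mm3):
    """febrile neutropenia = fever plus any grade of neutropenia."""
    return is_fever(temps_c) and neutropenia_grade(anc_per_mm3) != "none"
```

for example, a patient with a single reading of 38.5 °c and an anc of 150/mm³ meets both criteria, while a single reading of 38.0 °c alone does not satisfy the fever definition.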
it has been estimated that 10-50% of patients with solid tumors and up to 80% of patients with hematologic malignancies will develop fever during at least one chemotherapy cycle associated with neutropenia [28]. moreover, fever may be the only indication of a severe underlying infection as other signs and symptoms are often absent or minimized due to an inadequate inflammatory response. therefore, physicians must be particularly aware of the infection risks, diagnostic methods, and antimicrobial therapies required for the management of febrile neutropenia in cancer patients. in the majority of febrile episodes, a pathogen is not identified, with a clinically documented infection occurring in only 20-47% of cases. of these patients, bacteremia occurs in 10-25%, with most episodes seen in the setting of prolonged and/or profound neutropenia (anc <100 neutrophils/mm³) [29, 30]. on the other hand, the most common sites of focal infection include the gastrointestinal tract, lung, and skin [31]. over the past five decades, the rates, antibiotic resistance, and epidemiologic spectrum of bloodstream pathogens isolated from febrile neutropenic patients have changed substantially under the selective pressure of broad-spectrum antimicrobial therapy and/or prophylaxis [32, 33]. early in the development of cytotoxic chemotherapies, during the 1960s and 1970s, gram-negative pathogens predominated in febrile neutropenia. subsequently, during the 1980s and 1990s, gram-positive organisms became more common as use of indwelling plastic venous catheters became more prevalent, which can allow for colonization and subsequent infection by gram-positive skin flora [31, 34]. gram-positive bacteria currently account for 60-70% of culture-positive infections in pediatric cancer patients [5].
importantly, a recent systematic review of the epidemiology and antibiotic resistance of pathogens causing bacteremia in cancer patients since 2008 showed a recent shift from gram-positive to gram-negative organisms [35]. the main causes of this new trend remain to be determined, but the use and duration of antibiotic prophylaxis are important factors to consider, as the incidence of gram-negative bacteria was significantly higher in groups who did not receive antibiotic prophylaxis. the use of antibiotic prophylaxis, however, may conceivably select for resistant organisms; increasing rates of antibiotic resistance in both gram-negative and gram-positive bacteria have been reported in the global community as well as the cancer population and are of significant concern [5, 31, 35]. overall, the most common blood isolates in the setting of febrile neutropenia are coagulase-negative staphylococci. other less common blood isolates include enterobacteriaceae, non-fermenting gram-negative bacteria (such as pseudomonas), s. aureus, and streptococci (see table 16.1). providers should review the local data at their institution for prevalent blood isolates and antimicrobial susceptibility profiles. management of febrile neutropenia continues to evolve given the awareness that interventions previously considered standard of care (such as inpatient treatment with intravenous broad-spectrum antibiotics) may be neither necessary nor appropriate for all patients [36]. it has become increasingly important to identify patients at high risk of infectious complications requiring more aggressive management and monitoring (i.e., inpatient setting with intravenous antibiotics). in addition, clinicians may be able to identify low-risk patient populations who may be managed in a less aggressive and more cost-effective manner (i.e., outpatient setting and/or with oral antibiotics).
in order to address these issues, algorithmic approaches to neutropenic fever, infection prophylaxis, diagnosis, and treatment have been developed [20, 37-39]. it is well established that stratification of patients to determine the risk for complications of severe infection should be undertaken at presentation of fever [20, 37]. this determines the type of empiric antibiotic therapy (oral vs. intravenous), venue of treatment (inpatient vs. outpatient), and duration of antibiotic therapy. generally, the risk for serious infection is directly related to the degree and duration of neutropenia. pediatric patients with mild (anc ≥500) and brief (<7 days) periods of neutropenia are less likely to have infectious complications than those with moderate to severe neutropenia (anc ≤500) lasting more than 7 days [20, 29, 30]. similarly, the risk for bacteremia and septicemia increases dramatically when the anc is <200. infectious complications that are more common with severe and prolonged neutropenia include bacteremia, pneumonitis, cellulitis, and abscess formation. it is important to consider individual patient risk incorporating the latest recommendations for the management of neutropenic fever in children with cancer and hsct [37, 38]. patients are generally stratified as either high or low risk as follows:

1. high-risk patients - anticipated prolonged (>7 days duration) and profound neutropenia (anc <100 cells/mm³ following cytotoxic chemotherapy) and/or significant medical comorbid conditions, including hypotension, pneumonia, new-onset abdominal pain, or neurologic changes [20]
2. low-risk patients - anticipated brief (<7 days duration) neutropenic periods in those with no or few comorbidities [20]

in addition, risk classification may be based on the multinational association for supportive care in cancer (mascc) score (table 16.2) [40].
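the two-way split above can be sketched as a small helper; this is an illustration of the quoted criteria only, not a clinical tool, and the function and argument names are our assumptions (the mascc score itself is not reproduced here):

```python
# hedged sketch of the idsa-style high/low-risk stratification quoted above.
def fn_risk_group(expected_neutropenia_days, anc_per_mm3, comorbidities=()):
    """high risk: anticipated prolonged (>7 days) and profound neutropenia
    (anc <100 cells/mm3) and/or significant medical comorbid conditions
    (e.g., hypotension, pneumonia, new-onset abdominal pain, neurologic
    changes); otherwise low risk."""
    profound_and_prolonged = expected_neutropenia_days > 7 and anc_per_mm3 < 100
    return "high" if profound_and_prolonged or comorbidities else "low"
```

for example, an anticipated 10-day neutropenic period with an anc of 50/mm³ is high risk even without comorbidities, while a 3-day period with an anc of 400/mm³ and no comorbidities is low risk.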
a mascc risk score of ≥21 is recommended as the threshold for definition of low risk, with 6% of such patients developing serious medical complications compared to 39% of those scoring <21 [40]. however, the mascc score was developed and validated in adults and has not been validated in a pediatric population. the consensus in the field is for all patients considered to be at high risk by mascc or by clinical criteria to be treated as inpatients with empiric iv antibiotic therapy. carefully selected low-risk patients may be candidates for oral and/or outpatient empiric antibiotic therapy. table 16.3 summarizes the recommendations for the management of febrile neutropenia based on recommendations of the idsa and the international pediatric fever and neutropenia guideline panel. importantly, in neutropenic febrile patients with an obvious source of infection on clinical exam, management should be tailored to that source. of note, adequate antibiotic stewardship is of utmost importance during the treatment of neutropenic patients in order to decrease the incidence of antibiotic-related adverse drug events, the prevalence of antibiotic resistance, and treatment costs. blood cultures must be closely monitored, and once a microorganism has been identified, an appropriate plan for antibiotic de-escalation and/or treatment duration should be promptly instituted. invasive fungal diseases (ifd) are one of the leading causes of morbidity and mortality in pho and hsct patients and present many diagnostic and therapeutic challenges. one of the principal risk factors contributing to the development of ifd relates to the patient's oncologic diagnosis. patients with aml and high-risk and relapsed all, recipients of allogeneic hsct, and those with chronic or severe acute graft-versus-host disease (gvhd) are at the highest risk of ifd [42, 43].
often a combination of other risk factors is present in these patients which may include prolonged neutropenia, high-dose corticosteroid use, immunosuppressive therapy, parenteral nutrition, presence of a cvc, preceding antibiotic therapy, presence of bacterial coinfection, oral mucositis, and admission to an intensive care unit [44, 45]. the highest risk of ifd is during periods of profound neutropenia, which for hsct recipients occurs during the first 30 days posttransplant and during neutrophil engraftment [46]; for pho patients, the highest risk period is during induction chemotherapy [46].

empiric therapy recommendations for high-risk patients:
- use monotherapy with an antipseudomonal β-lactam, a fourth-generation cephalosporin, or a carbapenem as empirical therapy in pediatric high-risk fn, depending on the local prevalence of multidrug-resistant gram-negative rods (strong recommendation, high-quality evidence)
- reserve addition of a second gram-negative agent or a glycopeptide for patients who are clinically unstable, when a resistant infection is suspected, or for centers with a high rate of resistant pathogens

in an era of growing prophylactic antifungal use, children receiving mold-active agents have been shown to be at higher risk of non-aspergillus species fungal infection [43]. voriconazole prophylaxis in adults has been shown to be an independent risk factor for mucormycoses [47]. likewise, breakthrough trichosporonosis has also been reported in patients receiving micafungin as prophylaxis [48]. these phenomena are likely in part related to the selection of fungi with reduced intrinsic susceptibility to the prophylactic agent. the most common ifd are invasive aspergillosis (ia) and invasive candidiasis (ic), with a recent upward trend seen in non-aspergillus mold infections [43-45]. among aspergillus species, a. fumigatus is the most common, followed by a. flavus and a. niger [45].
among non-aspergillus molds, mucormycoses (rhizopus, mucor, absidia) are most frequently reported, followed by a number of other species (e.g., fusarium, scedosporium, curvularia, exserohilum, etc.) [45]. among ic, c. albicans is the single most common candida species, but non-albicans candida species (especially c. parapsilosis and c. tropicalis) have been increasingly reported among this population [49]. ifd should be suspected in patients with fever and neutropenia lasting for more than 4 days without any identifiable cause [20]. ic can present as septic shock or may have more non-specific findings such as fever, cough, nausea/vomiting, abdominal pain, and cutaneous lesions depending on the site of involvement. in children, the most common sites of ic are the lungs, liver, and spleen, but dissemination can occur to other organs including the heart, eyes, or brain. disseminated disease is an independent risk factor for death in children with ic [50]. the primary sites of ia are the lungs, skin, and sinuses [45]. the clinical presentation of fungal rhinosinusitis may include fever, rhinorrhea, nasal congestion, and facial pain; many cases, however, may not present with any symptoms and may be diagnosed based on imaging performed in a persistently febrile patient with profound and prolonged neutropenia. cutaneous lesions can present as macules, papules, or nodular ulcerative lesions with or without surrounding erythema and tenderness. clinical presentation secondary to other molds, such as fusarium or scedosporium, is indistinguishable from ia. mucormycoses deserve special mention since rates of dissemination and death are higher with ifd caused by these species than with ia [51]. early recognition and prompt treatment of ifd are crucial for optimal management.
diagnostic tests should include blood cultures (though often with low sensitivity), cultures of appropriate sterile sites (such as urine or csf), and diagnostic biopsies of involved sites for culture and histopathology. fungal biomarkers can be used as both a screening test during high-risk periods and an adjunct diagnostic test in patients with suspected ifd, especially during periods of prolonged fever and neutropenia. galactomannan (gm) is a cell wall component released by aspergillus species which can be detected in blood, bronchoalveolar lavage fluid, and cerebrospinal fluid. a cutoff value of a gm optical index of ≥0.5 in blood and a bronchoalveolar lavage fluid level of ≥1 is considered a positive test, though an optimum cutoff value is not well defined in children [52, 53]. invasive fungal disease due to fungi other than aspergillus species may have negative galactomannan tests. β-d-glucan is a cell wall component found in many (but not all) species of fungi, and an elevated serum β-d-glucan assay can be caused by ic, ia, and other molds [53, 54]. the optimum cutoff value of β-d-glucan for a positive test is unknown in children, but ≥80 pg/ml is used in most studies [54]. both gm and β-d-glucan assays have variable sensitivity and specificity among children and should be interpreted with caution. the sensitivity of gm has been reported to range from 65 to 82% in children with malignancy and ia [55, 56]; by contrast, the β-d-glucan assay has high sensitivity for ifd (~90%) but suffers from poor specificity [57]. false-positive β-d-glucan can be due to systemic bacterial or viral coinfection, receipt of antibiotics (such as piperacillin-tazobactam or amoxicillin-clavulanate), hemodialysis, receipt of albumin or intravenous immunoglobulin, material containing glucan, oral mucositis, and other gi mucosal breakdowns [54]. other pcr-based fungal diagnostic tests are under investigation but have low sensitivity and specificity.
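the cutoffs above can be collected into a small helper as an illustrative sketch; as the text notes, optimum pediatric cutoffs are not well defined, so these thresholds should not be read as validated decision rules, and the names here are ours:

```python
# illustrative thresholds from the text; pediatric cutoffs are not well defined.
GM_CUTOFF_OPTICAL_INDEX = {"serum": 0.5, "bal": 1.0}  # galactomannan
BDG_CUTOFF_PG_PER_ML = 80  # beta-d-glucan value used in most studies

def gm_positive(optical_index, specimen="serum"):
    """galactomannan positive at >=0.5 optical index in blood
    or >=1.0 in bronchoalveolar lavage fluid."""
    return optical_index >= GM_CUTOFF_OPTICAL_INDEX[specimen]

def bdg_positive(pg_per_ml):
    """beta-d-glucan positive at >=80 pg/ml; beware the false-positive
    causes listed above (certain antibiotics, hemodialysis, ivig, mucositis)."""
    return pg_per_ml >= BDG_CUTOFF_PG_PER_ML
```

note that a result just below the bal cutoff (e.g., an optical index of 0.9) would still count as positive if it came from serum, which is one reason specimen type must accompany the numeric result.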
gm and β-d-glucan monitoring twice weekly is suggested to evaluate treatment response in those with confirmed/probable disease and as a screening tool in patients at high risk for ifd [52, 53] . all pho and hsct patients with febrile neutropenia that persists beyond 4 days and/or with suspected ifd should undergo computed tomography of the chest, abdomen, and pelvis, and of other areas if indicated [53] . the most common imaging findings suggestive of ifd are pulmonary nodules, especially those with a halo sign, air crescent sign, or cavitation. hepatosplenic and renal nodules should also raise suspicion of ifd. other studies to consider include an echocardiogram and a dilated retinal examination, especially in patients with disseminated candidiasis. if symptoms of sinusitis or new lesions on the palate are present, a prompt nasal endoscopic examination and ct of the sinuses are warranted. there are three main classes of antifungals used in patients with ifd: (1) polyenes, which include amphotericin b (amb) and its lipid formulations (liposomal amb is the most commonly used in pho and hsct patients); (2) triazoles (fluconazole, itraconazole, voriconazole, and posaconazole); and (3) echinocandins (caspofungin, micafungin, anidulafungin). antifungal prophylaxis should be considered in patients at high risk for ifd, including hsct recipients and those undergoing intensive remission-induction therapy or salvage-induction therapy [46, 53] . a high incidence of ifd has been reported in children with aml (newly diagnosed and relapsed) [58] and in patients with relapsed all [46] , and such patients may be considered candidates for prophylaxis. among hsct recipients, those with an unrelated donor or a partially matched donor are at higher risk of ifd [46] . recent studies show that children with aml receiving antifungal prophylaxis have reduced rates of induction mortality and resource utilization compared to those who do not receive prophylaxis [59] .
posaconazole was found to be superior to fluconazole or itraconazole in reducing the incidence of ifd in children [60] . echinocandins have been shown to be as or more effective than triazoles for ifd prophylaxis, especially in hsct recipients, with fewer adverse effects, and can be an alternative option for prophylaxis [46] . the idsa and the european conference on infections in leukemia (ecil-4) recommend using posaconazole, voriconazole, or micafungin during prolonged neutropenia to prevent ifd [20, 53] . posaconazole is recommended for prophylaxis in patients with gvhd who are at high risk of ia [53] . the variable absorption of oral azoles in children should be taken into consideration when choosing oral antifungals. for patients with prolonged fever and neutropenia without an alternative explanation, consideration must be given to the possibility of an active fungal infection. empiric antifungal therapy should be considered for neutropenic patients with persistent or recurrent fevers after 4-7 days of antibiotic therapy and whose overall duration of neutropenia is expected to be >7 days [20] . in low-risk patients, routine use of empiric antifungals is not recommended [20] . liposomal amphotericin b or an echinocandin, both of which are fungicidal, are the first-line options for empiric antifungal treatment [20] . there are insufficient data to provide specific guidance for patients with concern for a new fungal infection who are already receiving mold-active (i.e., anti-aspergillus) prophylaxis; however, some experts suggest switching to a different mold-active antifungal [18] . surgical debridement of any fungal lesions or abscesses and prompt removal of the cvc in the event of fungemia are crucial to reduce the progression of ifd. therapeutic drug monitoring (tdm) should be performed for patients receiving voriconazole, itraconazole, and posaconazole.
there is extreme variability in triazole serum levels among pediatric patients owing to the diversity of bioavailability in this population. for voriconazole tdm, a serum trough level between 1 and 5 mcg/ml has been considered safe and effective in preventing breakthrough ifd in children [53] . for posaconazole, a trough level of 0.7-1 mg/l has been shown to be effective [53] . because of increased toxicity, azoles should not be co-administered with vinca alkaloids, high doses of cyclophosphamide, or anthracyclines. the antifungal agents most commonly used in children with pho and hsct and their indications are noted below (table 16.4). although combination antifungals are not well studied in children, they are used frequently in this population. pediatric data are variable regarding the benefit of combination antifungal therapy but overall report an increase in adverse events [45] ; the risk of systemic toxicity must therefore be taken into account when considering the use of antifungal combinations. combination therapy could be considered in patients with refractory disease or as salvage therapy. granulocyte transfusions for profound or persistent neutropenia, adjunctive cytokines (e.g., granulocyte colony-stimulating factor [g-csf]), and reduction of immunosuppression and tapering of steroids are recommended as adjuncts to antifungal agents in the treatment of ifd [20] . in summary, children and adolescents with malignancy have additional risk factors for healthcare-associated infections. meticulous attention to personal and oral hygiene, diet, environmental safety, and appropriate immunizations should be practiced in this high-risk population. the use of antimicrobial prophylaxis should be considered in periods of severe neutropenia to prevent bacterial and fungal infections as necessary.
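the tdm trough targets quoted above can likewise be captured in a small table. this is an illustrative sketch only (hypothetical names, not a dosing tool), using mg/l (1 mg/l = 1 mcg/ml):

```python
# illustrative sketch (hypothetical names, not a dosing tool): the tdm
# trough targets quoted above, expressed in mg/L (1 mg/L = 1 mcg/mL).

TROUGH_TARGETS_MG_L = {
    "voriconazole": (1.0, 5.0),  # range considered safe and effective in children [53]
    "posaconazole": (0.7, 1.0),  # range shown to be effective [53]
}

def trough_in_target(drug: str, trough_mg_l: float) -> bool:
    """True if a measured trough falls inside the quoted target range."""
    low, high = TROUGH_TARGETS_MG_L[drug]
    return low <= trough_mg_l <= high
```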
prompt diagnosis and management strategies to prevent infectious complications are key to preventing morbidity and mortality in these immunocompromised hosts.

references (titles as extracted):
daily bathing with chlorhexidine and its effects on nosocomial infection rates in pediatric oncology patients
infection prevention in the cancer center
oral and dental considerations in pediatric leukemic patient
guideline on dental management of pediatric patients receiving chemotherapy, hematopoietic cell transplantation, and/or radiation
pathogen distribution and antimicrobial resistance among pediatric healthcare-associated infections reported to the national healthcare safety network
antibiotic use during infectious episodes in the first 6 months of anticancer treatment-a swedish cohort study of children aged 7-16 years
mucosal barrier injury laboratory-confirmed bloodstream infection: results from a field test of a new national healthcare safety network definition
the centers for disease control and prevention definition of mucosal barrier injury-associated bloodstream infection improves accurate detection of preventable bacteremia rates at a pediatric cancer center in a low- to middle-income country
strategies to prevent central line-associated bloodstream infections in acute care hospitals: 2014 update
guidelines for the prevention of intravascular catheter-related infections
preventing clabsis among pediatric hematology/oncology inpatients: national collaborative results
rapid cycle development of a multifactorial intervention achieved sustained reductions in central line-associated bloodstream infections in haematology oncology units at a children's hospital: a time series analysis
guidelines for preventing infectious complications among hematopoietic cell transplantation recipients: a global perspective
high rates of potentially infectious exposures between immunocompromised patients and their companion animals: an unmet need for education
pet ownership in immunocompromised children--a review of the literature and survey of existing guidelines
should immunocompromised patients have pets?
antibiotic prophylaxis for patients with acute leukemia
levofloxacin to prevent bacterial infection in patients with cancer and neutropenia
antibacterial prophylaxis after chemotherapy for solid tumors and lymphomas
clinical practice guideline for the use of antimicrobial agents in neutropenic patients with cancer: 2010 update by the infectious diseases society of america
a pilot study of prophylactic ciprofloxacin during delayed intensification in children with acute lymphoblastic leukemia
levofloxacin prophylaxis during induction therapy for pediatric acute lymphoblastic leukemia
clinical and microbiologic outcomes of quinolone prophylaxis in children with acute myeloid leukemia
effect of levofloxacin prophylaxis on bacteremia in children with acute leukemia or undergoing hematopoietic stem cell transplantation: a randomized clinical trial
updated recommendations of the advisory committee on immunization practices for healthcare personnel vaccination: a necessary foundation for the essential work that remains to build successful programs
red book: report of the committee on infectious diseases
guidelines for environmental infection control in health-care facilities. recommendations of cdc and the healthcare infection control practices advisory committee (hicpac)
management of fever in neutropenic patients with different risks of complications
repeated blood cultures in pediatric febrile neutropenia: would following the guidelines alter the outcome? pediatr blood cancer
etiology and clinical course of febrile neutropenia in children with cancer
changes in the etiology of bacteremia in febrile neutropenic patients and the susceptibilities of the currently isolated pathogens
contemporary antimicrobial susceptibility patterns of bacterial pathogens commonly associated with febrile patients with neutropenia
emergence of carbapenem resistant gram negative and vancomycin resistant gram positive organisms in bacteremic isolates of febrile neutropenic patients: a descriptive study
changing epidemiology of infections in patients with neutropenia and cancer: emphasis on gram-positive and resistant bacteria
recent changes in bacteremia in patients with cancer: a systematic review of epidemiology and antibiotic resistance
management of febrile neutropenia in malignancy using the mascc score and other factors: feasibility and safety in routine clinical practice
guideline for the management of fever and neutropenia in children with cancer and hematopoietic stem-cell transplantation recipients: 2017 update
guidelines for the use of antimicrobial agents in neutropenic patients with cancer
outpatient management of fever and neutropenia in adults treated for malignancy: american society of clinical oncology and infectious diseases society of america clinical practice guideline update summary
the multinational association for supportive care in cancer risk index: a multinational scoring system for identifying low-risk febrile neutropenic cancer patients
clinical practice guideline for the use of antimicrobial agents in neutropenic patients with cancer: 2010 update by the infectious diseases society of america
invasive mycoses in children receiving hemopoietic sct
a prospective, international cohort study of invasive mold infections in children
epidemiology and outcomes of invasive fungal infections in allogeneic haematopoietic stem cell transplant recipients in the era of antifungal prophylaxis: a single-centre study with focus on emerging pathogens
invasive mold infections in pediatric cancer patients reflect heterogeneity in etiology, presentation, and outcome: a 10-year, single-institution, retrospective study
antifungal prophylaxis in pediatric hematology/oncology: new choices & new data. pediatr blood cancer
breakthrough zygomycosis after voriconazole treatment in recipients of hematopoietic stem-cell transplants
trichosporonosis in pediatric patients with a hematologic disorder
results from a prospective, international, epidemiologic study of invasive candidiasis in children and neonates
risk factors for mortality in children with candidemia
invasive mucormycosis in children: an epidemiologic study in european and non-european countries based on two registries
practice guidelines for the diagnosis and management of aspergillosis: 2016 update by the infectious diseases society of america
(ecil-4): guidelines for diagnosis, prevention, and treatment of invasive fungal diseases in paediatric patients with cancer or allogeneic haemopoietic stem-cell transplantation
clinical practice guideline for the management of candidiasis: 2016 update by the infectious diseases society of america
threshold of galactomannan antigenemia positivity for early diagnosis of invasive aspergillosis in neutropenic children
galactomannan antigenemia in pediatric oncology patients with invasive aspergillosis
beta-d-glucan screening for detection of invasive fungal disease in children undergoing allogeneic hematopoietic stem cell transplantation
guideline for primary antifungal prophylaxis for pediatric patients with cancer or hematopoietic stem cell transplant recipients
antifungal prophylaxis associated with decreased induction mortality rates and resources utilized in children with new-onset acute myeloid leukemia
antifungal prophylaxis with posaconazole vs. fluconazole or itraconazole in pediatric patients with neutropenia

key: cord-252870-52fjx7s4 authors: xie, kefan; liang, benbu; dulebenets, maxim a.; mei, yanlan title: the impact of risk perception on social distancing during the covid-19 pandemic in china date: 2020-08-27 journal: int j environ res public health doi: 10.3390/ijerph17176256 sha: doc_id: 252870 cord_uid: 52fjx7s4 social distancing is one of the most recommended policies worldwide to reduce diffusion risk during the covid-19 pandemic. based on a risk management perspective, this study explores the mechanism of the risk perception effect on social distancing in order to improve individual physical distancing behavior. the data for this study were collected from 317 chinese residents in may 2020 using an internet-based survey. a structural equation model (sem) and hierarchical linear regression (hlr) analyses were conducted to examine all the considered research hypotheses. the results show that risk perception significantly affects perceived understanding and social distancing behaviors in a positive way. perceived understanding has a significant positive correlation with social distancing behaviors and plays a mediating role in the relationship between risk perception and social distancing behaviors. furthermore, safety climate positively predicts social distancing behaviors but lessens the positive correlation between risk perception and social distancing. hence, these findings suggest effective management guidelines for successful implementation of the social distancing policies during the covid-19 pandemic by emphasizing the critical role of risk perception, perceived understanding, and safety climate. as the number of global coronavirus cases explodes rapidly, threatening millions of lives, the covid-19 pandemic has become the fastest-spreading, most extensive, and most challenging public health emergency worldwide since world war ii [1] .
compared to seasonal influenza, this coronavirus appears to be more contagious and to transmit much faster. for example, the basic reproduction number r 0 for seasonal influenza is approximately 1.28, while for covid-19 it averages 3.3 [2] [3] [4] . with no efficacious treatments and vaccines available yet, social distancing measures remain one of the most common approaches to reduce the rate of infection. moreover, given the foreseeable multiple waves of the pandemic, covid-19 prevention will continue to rely on physical distancing behaviors until safe vaccines or effective pharmacological interventions become accessible. accordingly, social distancing has been implemented by authorities across the globe to prevent diffusion of the disease. facing this global pandemic, each government has issued advice about mobility restrictions, the definition of social distancing, and distancing rules; however, the guidance documents differ. social distancing has received increasing attention in numerous studies over recent decades, especially since the covid-19 outbreaks. in order to explore the critical points and network patterns of these prior research studies, a co-word analysis was conducted. literature keywords reflect the relationship between study subjects and the concentration of the research content [18] . hence, applying a co-word analysis to the existing literature can reveal generic knowledge and network patterns in studies on social distancing. an integrated search was conducted on terms related to social distancing, such as "physical distancing", "social isolation", "lockdown", etc. subsequently, 978 related papers published from 1 january 2000 through 28 june 2020 were retrieved using the web of science core database. then, using citespace software, which is designed as a tool for progressive knowledge domain visualization [19] , the co-occurrence matrix of keywords was calculated and visualized, as shown in figure 1 .
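the practical difference between the two r 0 values quoted above can be made concrete with a back-of-the-envelope calculation: in the simplest branching approximation, one case produces roughly r 0 to the power n descendant cases after n generations of transmission. this sketch ignores susceptible depletion, interventions, and stochastic effects:

```python
# back-of-the-envelope sketch: in the simplest branching approximation,
# one case yields about r0**n descendant cases after n transmission
# generations. the r0 values are the ones quoted above; this ignores
# susceptible depletion, interventions, and stochastic effects.

def cases_after_generations(r0: float, n: int) -> float:
    return r0 ** n

flu_cases = cases_after_generations(1.28, 10)    # roughly 12 cases
covid_cases = cases_after_generations(3.3, 10)   # on the order of 150,000 cases
```

the gap after only ten generations illustrates why a modest difference in r 0 translates into a dramatically different epidemic trajectory.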
the size of a keyword indicates its frequency of co-occurrence and the connections show the significance of co-occurrence [20] . based on the co-word analysis, the major research foci and bibliometric characteristics of social distancing were summarized from four perspectives: how social distancing affects the pandemic; the additional effects and challenges caused by social distancing; modeling and simulation of social distancing; and influencing factors. most previous studies [21] confirmed that social distancing has positive effects on slowing the pandemic, while several studies [22] seem not to confirm this. some studies argue that social distancing cuts off the transmission path of the virus, thereby reducing r 0 [23] . moreover, different mathematical models and simulations have displayed a good correlation with the data reported in biomedical studies, which offers a high level of evidence for the impact of social distancing measures in containing the pandemic [24, 25] . for example, based on simple stochastic simulations, cano et al. [26] evaluated the efficiency of social distancing measures in tackling the covid-19 pandemic. okuonghae and omame [27] found that, according to numerical simulations of their model, the pandemic would eventually disappear if at least 55% of the population implemented social distancing measures. nevertheless, a systematic review and meta-analysis demonstrated that social distancing regulation showed a non-significant protective effect, which may be caused by persisting knowledge gaps in disparate population groups [22] . although various cohort studies and modeling simulations have found that social distancing regulations can effectively prevent the spread of the pandemic, the additional effects and challenges caused by social distancing cannot be ignored. for instance, anxiety associated with social distancing may have a long-term effect on mental health [28] and may exacerbate social inequality.
furthermore, a loneliness pandemic is arising from physical isolation as well [29] . as a form of reduced movement and face-to-face connection between people, social distancing has changed residents' conventional health behaviors, which may lead to increasing obesity, accidental pregnancies, and other health risks [30, 31] . a national survey carried out in italy demonstrated that individual needs shifted towards the three bottom levels of maslow's pyramid (i.e., belongingness and love needs, safety needs, and physiological needs) due to social isolation [32] . compared with the impact of social distancing, more previous studies have focused on its influencing factors. first, at the national and cultural levels, akim and ayivodji [33] concluded that certain economic and fiscal interventions were associated with higher compliance with social distancing. huynh [5] found that countries with a higher "uncertainty avoidance index" show a lower proportion of public gatherings. likewise, moon [34] explored the role of cultural orientations and showed that vertical collectivism predicted stronger compliance with social distancing norms. then, at the level of public society, aldarhami et al. [35] conducted a survey indicating that a high level of public awareness affects social distancing implementation. besides, public health authorities and experts alike have pointed out that mass media and information play an important role in developing public awareness and constructing social distancing behaviors among social populations [36] . lastly, from the perspective of individual behaviors and psychological factors, o'connell et al. [1] reported that more antisocial individuals may pose a health risk to the public and engage in fewer social distancing behaviors. based on a cross-sectional online survey, yanti et al.
[37] identified that respondents who had sufficient knowledge and a good attitude complied positively with safety behaviors, such as keeping a physical distance from others and wearing face masks in public places. although the evidence unambiguously supports that implementing social distancing regulations has a crucial effect on restraining the pandemic [38] , recent studies found that mobility restrictions do not lead to the expected reduction of coronavirus cases [8, 39] . previous literature has conducted various analyses of the different factors motivating social distancing behaviors. however, given the enormous gap between these methods and existing practice, limited research has paid attention to the key factors from the perspective of risk management. because of the significant role that individual and public awareness plays in compliance with social distancing, this study focuses on the mechanism of the risk perception effect on social distancing. individuals' perceived understanding and safety climate are also examined to identify their effectiveness in the relationship between risk perception and social distancing. based on a quantitative online survey with a sample of 317 participants from china collected during may 2020, we built a structural equation model (sem) and conducted hierarchical linear regression (hlr) analysis to examine how the selected moderators influence social distancing behavior. the remainder of the paper is organized as follows. section 2 reviews the risk perception theories and develops several hypotheses with the conceptual framework. section 3 describes the research methodology, data collection, and measurement of latent variables. then, we analyze the data and examine the hypotheses (section 4) and finally discuss the implications and limitations of our findings (section 5) as well as draw the main conclusions (section 6).
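the hlr moderation test mentioned above (for the moderator hypotheses) can be sketched with synthetic data: step 1 regresses social distancing on the main effects, step 2 adds the mean-centered rp × sc interaction, and a meaningful gain in r² suggests moderation. all variable names and effect sizes below are invented for illustration:

```python
# synthetic-data sketch of a hierarchical moderation test: step 1 uses
# main effects only, step 2 adds the mean-centered rp x sc interaction.
# a meaningful gain in r-squared suggests moderation. variable names and
# effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 317
rp = rng.normal(size=n)                 # risk perception (centered)
sc = rng.normal(size=n)                 # safety climate (centered)
sd = 0.5 * rp + 0.3 * sc - 0.2 * rp * sc + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    """ordinary least squares r-squared with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_step1 = r_squared([rp, sc], sd)            # main effects only
r2_step2 = r_squared([rp, sc, rp * sc], sd)   # plus interaction
delta_r2 = r2_step2 - r2_step1                # gain attributable to moderation
```

in practice the significance of the interaction coefficient (not just the r² gain) is tested, which dedicated statistics packages report directly.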
risk exists objectively, but different people make different behavioral decisions when they perceive risk differently [40] . hence, even though many medical experts have stressed the importance of maintaining physical distancing amid the covid-19 pandemic, people's risk perception still colors their beliefs about the facts. the concept of risk perception differs among disciplines [41] . in this study, risk perception in the context of the pandemic is defined as the psychological process of subjectively assessing the probability of being infected by the coronavirus, an individual's perceived health risk, and the available protective measures [42, 43] . compared to the concept of risk perception in other fields, the health risk perception and the severity of the consequences of subsequent behavioral decisions are the most prominent features. empirical evidence has indicated that health risk perception may significantly affect people's self-protective behaviors and the negative consequences of health risks [44] . dionne et al. [45] found that risk perception associated with medical activities was a critical predictor of epidemic prevention behaviors. accordingly, as reported, underestimation of health risks and insufficient pandemic knowledge could lead to decreasing implementation of social distancing. most previous research focused on identifying influencing factors for people's health risk perception, as risk perception largely determines whether individuals take protective measures during the pandemic. also, there are various factors that reduce the substantial deviation between the actual objective risk and subjective feelings. perceived understanding is one of the crucial factors; it refers to situational awareness for the adoption of healthcare protections when facing the pandemic [46] .
according to the theory of planned behavior, only when people realize that they face a health risk, or even a risk of death, will they have the situational awareness to take further healthcare protections. effective and timely perceived understanding will greatly promote the translation of risk perception into actual actions [47] . perceived understanding plays a vital role in the adoption of healthcare behaviors. therefore, the following four hypotheses were developed, considering the findings from previous studies. perceived understanding about the covid-19 pandemic plays a mediating role between risk perception and social distancing behavior. facing huge economic pressure and public opinion, many companies and organizations have gradually re-opened. at the same time, these institutions require their employees to implement the social distancing policies strictly. similarly, when people go out to eat, shop, and entertain themselves, many public places remind them to maintain a physical distance. regardless of whether it is a social organization or a public place, this kind of reminder message, released through information media, has virtually created a safety climate requiring people to take necessary measures and reduce the spread of the virus. generally, the safety climate refers to individuals' perception of safety regulations, procedures, and behaviors in the workplace [48] . from the perspective of pandemic prevention and control, the safety climate relates to a consensus created by the work environment which promotes people, consciously or unconsciously, to take appropriate safety measures. namely, safety climate reflects common awareness among employees of the importance of organizational safety issues [49] . numerous observations and studies attest to the relationship between safety climate and protective behavior. bosak et al. [50] found that a good safety climate was negatively related to people's risk behaviors.
moreover, another study showed that safety climate completely mediated the effect of risk perception on safety management [49] . however, few studies have focused on the influence of safety climate on people's self-protection behavior during the pandemic. taking protective measures, such as social distancing, wearing face masks, and other self-prevention behaviors, is instrumental to avoiding the spread of the infection. an organization with a good safety climate can carry out relevant safety training and drills, so as to suppress the potential risk tendency and promote its employees' safety behaviors. therefore, if the working environment strengthens the education and publicity of pandemic knowledge, people are more willing to take correct protective measures, such as maintaining a social distance. additionally, koetke et al. [51] also pointed out that safety climate (trust in science) played a moderating role in the relationship between conservatism and social distancing intentions. to conclude, based on the above literature review, the conceptual framework of this study is illustrated in figure 2 . our last two hypotheses read as follows. according to the 44th china statistical report on internet development, which was announced by the china internet network information center (cnnic), in 2019 there were 854 million internet users in china. several studies exploring physical or psychological influencing mechanisms, such as risk perception, showed no significant difference between internet users and non-users [52] . therefore, online questionnaires were randomly collected from internet users through wenjuan.com. a total of 317 completed responses were received, an effective rate of 94.63%, after excluding suspect answers completed in less than 60 s.
additionally, before answering the survey questionnaire, participants were first directed to review and provide their consent using an online informed consent form, which was pre-approved by a panel of experts and the institutional review board. the data collection was conducted anonymously throughout may 2020. female participants constituted 48.3% of the sample, while 51.7% were male. most respondents were young: 31.9% belonged to the age group of 18-24 years, while 40.7% belonged to the age group of 25-39 years. a total of 84.5% of the participants had a college degree or above and only 6% had less than a high school education. out of the total sample, 48.6% reported living in rural areas and 51.4% lived in urban communities. it should be noted that 15.14% of the participants lived in hubei province, which used to be the epicenter of the covid-19 pandemic in china. the initial questionnaire contained 22 questions to measure 4 latent variables: risk perception-rp (7 items), perceived understanding-pu (4 items), social distancing-sd (5 items), and safety climate-sc (6 items). all the measurement items were prepared based on a review of the related literature and methods (table 2) . for example, initial items for rp were generated following previous questionnaires by dionne et al. [45] and kim et al. [53] . measurement items for pu were compiled based on the infectious disease-specific health literacy scale [54] and the study by qazi et al. [46] . the sc instrument statements were taken from the literature review and previously completed research [51, 55, 56] . initial measurement questions for sd were developed based on the studies of swami et al. [57] and gudi et al. [58] .
additionally, to ensure the validity of the draft questionnaire, the original survey instrument statements were revised based on suggestions from a panel of experts, including 5 professionals in risk management, 5 public health specialists, and 5 community managers. necessary modifications were then made by simplifying, rewording, and replacing several items after the 15 experts reviewed the survey structure, wording, and item allocation. according to the expert panel's feedback, the item-level content validity index (i-cvi) was greater than 0.78 for all 18 items and the scale-level cvi (s-cvi) was 0.97 (>0.90), indicating excellent validity of this scale (see supplementary materials). an initial survey with 22 items was first pilot tested on a randomly selected sample of 100 internet users. after conducting cognitive interviews with the pilot sample participants and analyzing the reliability and correlations, 4 measurement items (rp5, rp6, rp7, and sd5) with an item-to-total correlation below 0.5 were removed. finally, a formal questionnaire containing 18 items was developed. the response scale for all the survey items was a 5-point likert scale ranging from 1 = "strongly disagree" to 5 = "strongly agree". all of the items were phrased positively, so that a higher score represented stronger agreement. table 2 displays an overview of the scale and questionnaire items. social distancing items (swami et al. [57] ; gudi et al. [58] ): avoid contact with individuals who have influenza; avoid traveling within or between cities/local regions; avoid using public transport due to covid-19; avoid going to crowded places due to covid-19. * safety climate items (koetke et al. [51] ; neal et al. [55] ; wu et al. [56] ): the government is concerned about the health of people; (sc2) i trust the covid-19 information provided by the government; there is a clearly stated set of goals or objectives for covid-19 prevention; people consciously follow the pandemic prevention regulations;
- being able to provide necessary personal protective equipment for workers during the pandemic.
- offering workers as much safety instruction and training as needed during the pandemic.
note: * items removed from the initial questionnaire.
descriptive statistics and correlation analyses of the latent variables were first examined. then, exploratory factor analysis (efa) and confirmatory factor analysis (cfa) were conducted to verify the unidimensionality and reliability of the measurement items. sem can be applied to control for measurement errors as well as to model interdependencies through its parameters [2,50]; hence, this approach is appropriate for testing the hypotheses through path analyses. in addition, to examine the moderating effects, hierarchical linear regression (hlr) was carried out to verify hypotheses h5 and h6. amos version 24.0 was used for the cfa and sem (hypotheses h1-h4); the remaining analyses, i.e., efa and hlr (hypotheses h5 and h6), were done using spss 22.0 (ibm, armonk, ny, usa). the means, standard deviations (s.d.), and inter-correlations of all the measures are reported in table 3. there are significant positive correlations between the four variables. rp has significant positive correlations with sd and pu, suggesting partial support for hypotheses h1 and h2, respectively. moreover, both pu and sc showed a significant positive correlation with sd, indicating that hypotheses h3 and h5 were partially supported as well. reliability can be formally defined as the proportion of observed score variance that is attributable to true score variance. there are several approaches to evaluating the reliability of a measuring instrument, and internal consistency is the most widely used in research with a cross-sectional design. cronbach's alpha (α) can be used to estimate internal consistency [59]; a value of 0.70 or above indicates strong internal consistency of the adopted scales [60].
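as a minimal sketch of the internal-consistency check described above, cronbach's α can be computed directly from the item-by-respondent score matrix. the data below are simulated (317 respondents to echo the sample size, 5 items sharing one latent trait), not the study's responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of likert scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(317, 1))                 # shared latent trait
items = trait + 0.5 * rng.normal(size=(317, 5))   # 5 correlated indicators
alpha = cronbach_alpha(items)                     # should clear the 0.70 benchmark
```

with items that genuinely share a latent trait, as here, α lands well above the 0.70 threshold cited in the text; uncorrelated items would drive it toward zero.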
table 4 indicates that all four latent variables have good reliability (cronbach's α > 0.7), suggesting that the measurement items are appropriate indicators of their respective constructs. validity analysis examines the accuracy of the measurement instrument, namely the validity of the scale. it mainly includes content validity and construct validity; the content validity has been supported by the expert panel's recommendations and the pre-tests, while the construct validity requires a combination of efa and cfa. first, the kaiser-meyer-olkin (kmo) test value was 0.888. in addition, the bartlett test result (χ² = 3135.94, df = 153, p < 0.001) was large and significant. hence, the data shown in table 4 were suitable for factor analysis. the measurement items then loaded on four factors that correspond exactly to the four latent variables, and these four factors explained 66.41% of the total variance. similarly, the cfa results confirmed the four-factor model; the goodness-of-fit statistic was χ²/df = 2. the composite reliability (cr) and average variance extracted (ave) were calculated with equations (1) and (2): cr = (Σᵢλᵢ)² / [(Σᵢλᵢ)² + Σᵢσ²ₑᵢ] (1) and ave = Σᵢλᵢ² / [Σᵢλᵢ² + Σᵢσ²ₑᵢ] (2), where λᵢ and σ²ₑᵢ represent the regression weight (factor loading) and the error variance estimate of measurement item i, respectively, and k is the number of measurement items. cr and ave are further effective measures for evaluating construct validity. according to jobson [61], the acceptable value of cr is 0.7 and above, while ave should be 0.5 and above. table 4 demonstrates that most of the cr and ave values met these standards, suggesting an acceptable basis for the further sem analysis. based on the conceptual framework, the sem analysis was conducted to explore the relationships between rp, sd, and pu (as the mediator). the hypothesized model shown in figure 3 was first examined. table 5 summarizes the fit indices of the model, which indicate an excellent goodness-of-fit for the data based on the majority of indices.
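the cr and ave formulas referenced above can be sketched from a factor's standardized loadings; the loadings below are illustrative, not the study's estimates, and the assumption σ²ₑᵢ = 1 − λᵢ² holds only for standardized items.

```python
import numpy as np

def cr_ave(loadings):
    """composite reliability and average variance extracted for one factor,
    assuming standardized loadings so the error variance is 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    theta = 1.0 - lam**2                       # standardized error variances
    sum_lam = lam.sum()
    cr = sum_lam**2 / (sum_lam**2 + theta.sum())
    ave = (lam**2).sum() / ((lam**2).sum() + theta.sum())
    return cr, ave

cr, ave = cr_ave([0.78, 0.81, 0.74, 0.69])     # one hypothetical 4-item factor
# benchmarks from the text: cr >= 0.7 and ave >= 0.5
```

for this hypothetical factor both benchmarks are met, which is the pattern table 4 reports for most constructs.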
in this model, several path analyses were developed to test hypotheses h1, h2, and h3. as shown in table 6, rp has significant positive relationships with pu (β = 0.296, c.r. = 4.435, p < 0.001) and sd (β = 0.238, c.r. = 4.421, p < 0.001). likewise, pu plays a significant positive role for sd (β = 0.581, c.r. = 8.426, p < 0.001) as well. thus, hypotheses h1, h2, and h3 are supported. bias-corrected (bc) and percentile (pc) bootstrapping approaches were carried out to verify the mediating effect of pu. previous studies have found that bootstrapping provides a robust test of mediating hypotheses [62]. accordingly, the effect of risk perception on social distancing through perceived understanding was assessed using 5000 bootstrap sub-samples. as can be seen from table 7, the lower and upper limits of the 95% bc and pc bootstrap confidence intervals for the indirect effect (β = 0.100) were all greater than zero. moreover, the value of z (indirect effect/standard error) equals 2.5 (>1.96). similarly, there were no zero values between the lower and upper limits of the 95% bc and pc bootstrap confidence intervals for the direct effect (β = 0.138, z = 3.45). therefore, perceived understanding partially mediates the positive effect of risk perception on social distancing; in other words, perceived understanding does not completely carry the effect of risk perception, which also explains social distancing through the direct path. in summary, these results confirm hypothesis h4. hypothesis h6 predicted that safety climate positively moderates the impact of risk perception on social distancing. to test the moderation effects, the hlr analysis was conducted: model 1 serves as a baseline with the independent variables rp and sc, and model 2 then incorporates the additional interaction term rp×sc.
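the percentile-bootstrap test of the indirect effect described above can be sketched as follows. the rp → pu → sd data are simulated with a built-in indirect effect, and the regression helpers are illustrative assumptions, not the amos estimation the study used (which also produces bias-corrected intervals).

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: slope of x->m times slope of m->y with x controlled."""
    a = np.polyfit(x, m, 1)[0]                          # x -> m slope
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]    # m -> y slope
    return a * b

rng = np.random.default_rng(2)
n = 317
x = rng.normal(size=n)                    # risk perception
m = 0.5 * x + rng.normal(size=n)          # perceived understanding (mediator)
y = 0.6 * m + 0.15 * x + rng.normal(size=n)  # social distancing

# percentile bootstrap (2000 resamples here for speed; the paper used 5000)
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
# mediation is supported when the 95% interval excludes zero
```

with the simulated effect sizes, the interval sits clearly above zero, mirroring the table 7 conclusion that the indirect path is significant.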
table 8 presents the significant two-way interaction effect between rp and sc on sd (model 2, rp×sc, β = −0.242, p < 0.001). as shown in table 8, while risk perception is positively associated with social distancing regardless of the value of safety climate, a higher safety climate weakens this positive effect. thus, hypothesis h6 is only partially supported. additionally, whether in model 1 (β = 0.566, p < 0.001) or model 2 (β = 0.4689, p < 0.001), sc presents a statistically significant positive relationship with sd, which further supports hypothesis h5. note: *** p < 0.001. vif represents the variance inflation factor (vif = 1/tolerance); vif < 5 (acceptable). this study has further demonstrated that social distancing behaviors play a critical role in preventing the diffusion of the covid-19 pandemic. in identifying factors that lead to social distancing, previous studies have highlighted risk perception as a leading indicator of protective behaviors [42,44,45]. people should be encouraged to develop risk perception in order to identify and rectify infection risks and health issues related to unprotected behaviors during the covid-19 pandemic. however, limited research has examined whether individuals' different risk perceptions affect their interpretation of the social distancing regulations in an equivalent manner. by investigating the measurement scales of risk perception, perceived understanding, safety climate, and social distancing across a population of internet users in china, this study addressed the mechanism of the effect of risk perception on social distancing in order to improve individuals' physical distancing behaviors. this study provided evidence that risk perception and perceived understanding can significantly affect people's social distancing behaviors during the covid-19 pandemic. the results of the path analysis supported hypotheses h1, h2, and h3.
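the hierarchical-regression moderation test summarized in table 8 (model 1 with rp and sc, model 2 adding rp×sc) can be sketched as follows. the data are simulated with a negative interaction like the one reported; the ols helper is an illustrative assumption, not the spss procedure itself.

```python
import numpy as np

def ols(X, y):
    """least-squares fit with intercept; returns coefficients and in-sample R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return beta, 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 317
rp = rng.normal(size=n)   # mean-centered risk perception
sc = rng.normal(size=n)   # mean-centered safety climate
sd = 0.24 * rp + 0.57 * sc - 0.24 * rp * sc + rng.normal(scale=0.8, size=n)

_, r2_main = ols(np.column_stack([rp, sc]), sd)                 # model 1
beta, r2_int = ols(np.column_stack([rp, sc, rp * sc]), sd)      # model 2
# beta = [intercept, rp, sc, rp*sc]; an R^2 gain plus a negative
# interaction coefficient indicates sc dampens the rp -> sd slope
```

mean-centering the predictors before forming the product term, as done here, is the usual way to keep the main-effect coefficients interpretable alongside the interaction.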
it is evident from figure 3 and tables 5 and 6 that the path coefficients are significant and that the overall hypothesized model fits the data well. these findings are in line with aldarhami et al. [35], zhong et al. [63], and machida et al. [64]. a key principle of social distancing behavior is that risk perception is a critical condition for protective action. the results support the finding that higher risk perception motivates people to comply with social distancing: only by enhancing risk perception can people truly remain vigilant against the pandemic and take protective measures. therefore, when the government implements social distancing and other prevention measures, it must take public risk perception into account and improve public awareness through various means, such as social media, press conferences, standard therapy, and guidelines for the outbreak response. in particular, it is necessary to rectify pandemic rumors to prevent incorrect information that could reduce public risk perception. in addition, we confirmed a dual effect of perceived understanding on social distancing. first, perceived understanding was found to predict social distancing directly. these results are consistent with other studies [1,46] showing that increased perceived understanding can encourage people to gain more knowledge about the pandemic and its health risks, so that they engage more with the social distancing regulations. second, we identified perceived understanding as a partial mediator of the relationship between risk perception and social distancing. previous literature on perceived understanding shows that it affects social distancing behaviors related to the sources of information [46]; our results, in turn, confirm an indirect positive effect of risk perception on social distancing through perceived understanding.
hence, with the help of the authority of medical experts, scientific knowledge of the pandemic and prevention measures should be promptly popularized in communities to enhance the public's perceived understanding. in addition, an increase in risk perception can promote the public's desire to understand the pandemic and to pay more attention to their own health risks, so the authorities should improve pandemic information release channels. moreover, we identified that a positive perception of safety climate (β = 0.566, p < 0.001) promotes adherence to social distancing and that this effect is stronger than that of risk perception (β = 0.165, p < 0.001). this finding concurs with the study conducted by kouabenan et al. [49]. achieving a consensus on safety climate requires the joint efforts of organizations and society. first, workplaces such as shops, cafeterias, office spaces, and public transit systems have to strengthen pandemic prevention and control drills. second, it is necessary to support community propaganda and the popularization of scientific knowledge and to build individual consensus on self-protective behaviors. it is also strongly recommended to wear a face mask, keep a 2 m physical distance between workers, and use sanitary measures in public venues. finally, we demonstrated that safety climate, risk perception, and social distancing are interacting factors, supporting our hypothesis that safety climate moderates the relationship between risk perception and social distancing, as found in kouabenan et al. [49], bosak et al. [50], and koetke et al. [51] (see hypothesis h6). however, we did not find that safety climate increased the degree to which risk perception positively affects social distancing. as shown in figure 4, risk perception was positively related to social distancing under conditions of a high safety climate as well as under conditions of a low safety climate.
more importantly, we found that safety climate lessens the positive correlation between risk perception and social distancing. this moderating effect improves our understanding of the contexts in which risk perception affects social distancing. yet, as described by kouabenan et al. [49], the safety climate has also been viewed as the key factor because it completely mediated the effect of perceived risk on safety behavior. one potential explanation for this difference in findings is the composite content of our safety climate measurement, which actually includes three clauses. compared to previous studies, we regarded the safety climate as a whole of social consciousness: the overall promotion of social protection awareness may substitute for the role of risk perception and lead to compliance with social distancing through a public herd effect. therefore, while focusing on the importance of risk perception, we cannot ignore the positive incentives for social distancing brought by a good safety climate. in addition to enhancing employees' consensus on pandemic prevention, qualified organizations can physically isolate workspaces and public venues in time and space. for example, people should avoid going out for mass gatherings (lunches, shopping, traveling, education, leisure, etc.). as a management commitment, organizations should physically divide restaurant space, office space, and other public areas to ensure that people have a sufficient isolation distance. flexible work scheduling, online office hours, and e-learning are encouraged.
finally, the application of innovative social distance management technologies (e.g., technologies based on the emerging range of ict [65], such as bluetooth, radio frequency identification, and cloud mobile services) can help achieve an accurate measurement of the physical distance between individuals and promptly remind people to maintain a social distance as needed. in public venues such as dining areas, using multimedia, posters, and ground stickers with social distancing reminders can create a good safety climate. although substantial efforts were put into this study to ensure the reliability and validity of the results, a few limitations still exist, which might be explored in further research. first, our sample consists of chinese internet users and may not have all the attributes that perfectly match the characteristics of the current chinese population. without data collected from other regions and a representative sample, the generalizability of our findings is limited to a certain extent; a cross-regional, more representative study with a larger sample size could be conducted in the future to improve the accuracy and generalizability of the results. second, we measured all the latent variables as simple one-dimensional factors within a cross-sectional design. such a design can neither exclude the possibility of reverse causation nor establish exact cause-and-effect relationships. hence, further study could collect longitudinal data through multiple rounds of surveys. furthermore, several previous studies have measured risk perception from a multi-dimensional perspective; it would therefore be meaningful to present risk perception as a multi-dimensional construct and develop a multi-item scale to improve reliability and validity.
moreover, this study considered the risk perception that drives social distancing from the standpoint of risk management. other factors, such as knowledge and beliefs about the covid-19 pandemic, mask-wearing, self-awareness in the prevention of covid-19, the number of confirmed covid-19 cases in a given region, the death rate in a given region, and the percentage of elderly population in a given region, could also be included in further research. finally, we considered the mediating and moderating effects of perceived understanding and safety climate; as contingent factors, these effects may interact with other factors, shifting the results obtained in the present study. in addition, several control variables associated with population demographics, such as gender, age, and education level, did not show a significant impact on the relationships among the latent variables; this subject, however, is worth exploring in further research. this study investigated the impact of risk perception on social distancing during the covid-19 pandemic. based on data collected from an online survey of 317 participants in china throughout may 2020, our analyses indicate that positive changes in social distancing behaviors are associated with increased risk perception, perceived understanding, and safety climate. an individual's perceived understanding partly plays a positive mediating role in the relationship between risk perception and social distancing behaviors. furthermore, the safety climate plays a negative role in the relationship between risk perception and social distancing, because the safety climate seems to mitigate the effect of risk perception on social distancing. hence, effective health promotion strategies directed at developing or increasing positive risk perception, perceived understanding, and safety climate should be conducted to encourage people to comply with the social distancing policies amid these unprecedented times.
finally, these results are expected to contribute to management guidelines at the level of individual perception and public opinion, as well as to assist with the effective implementation of social distancing policies in countries at high risk from the covid-19 pandemic.

references:
- pandemic is associated with antisocial behaviors in an online united states sample
- tracking changes in sars-cov-2 spike: evidence that d614g increases infectivity of the covid-19 virus
- early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia
- the reproductive number of covid-19 is higher compared to sars coronavirus
- does culture matter social distancing under the covid-19 pandemic?
- social distancing: how religion, culture and burial ceremony undermine the effort to curb covid-19 in south africa
- airborne or droplet precautions for health workers treating covid-19?
- covid-19 and the social distancing paradox: dangers and solutions
- physical distancing for coronavirus (covid-19). available online
- how to slow the spread of covid-19
- basic policies for novel coronavirus disease control by the government of japan
- staying alert and safe (social distancing)
- what is social distancing and how can it slow the spread of covid-19? available online
- covid-19) advice for the public
- detecting and visualizing emerging trends and transient patterns in scientific literature
- visualizing and exploring scientific literature with citespace: an introduction
- how many ways to use citespace? a study of user interactive events over 14 months
- effectiveness of workplace social distancing measures in reducing influenza transmission: a systematic review
- effectiveness of personal protective measures in reducing pandemic influenza transmission: a systematic review and meta-analysis
- on the role of governmental action and individual reaction on covid-19 dynamics in south africa: a mathematical modelling study
- social distancing simulation during the covid-19 health crisis
- impact of social distancing measures for preventing coronavirus disease 2019 [covid-19]: a systematic review and meta-analysis protocol
- covid-19 modelling: the effects of social distancing
- analysis of a mathematical model for covid-19 population dynamics in
- mental morbidity arising from social isolation during covid-19 outbreak
- reconceptualizing social distancing: teletherapy and social inequality during the covid-19 and loneliness pandemics
- social distancing as a health behavior: county-level movement in the united states during the covid-19 pandemic is associated with conventional health behaviors
- love in the time of covid-19: sexual function and quality of life analysis during the social distancing measures in a group of italian reproductive-age women
- a nation-wide survey on emotional and psychological impacts of covid-19 social distancing
- interaction effect of lockdown with economic and fiscal measures against covid-19 on social-distancing compliance: evidence from africa
- explaining compliance with social distancing norms during the covid-19 pandemic: the roles of cultural orientations, trust and self-conscious emotions in the us
- public perceptions and commitment to social distancing during covid-19 pandemic: a national survey in saudi arabia
- third-person effect and pandemic flu: the role of severity, self-efficacy method mentions, and message source
- community knowledge, attitudes, and behavior towards social distancing policy as prevention transmission of covid-19 in indonesia
- changes in contact patterns shape the dynamics of the covid-19 outbreak in china
- lockdown strategies, mobility patterns and covid-19
- the risk concept-historical and recent development trends
- risk assessment and risk management: review of recent advances on their foundation
- risk perception through the lens of politics in the time of the covid-19 pandemic
- risk perception in fire evacuation behavior revisited: definitions, related concepts, and empirical evidence
- effects of news media and interpersonal interactions on h1n1 risk perception and vaccination intent
- health care workers' risk perceptions and willingness to report for work during an influenza pandemic
- analyzing situational awareness through public opinion to predict adoption of social distancing amid pandemic covid-19
- nurses' use of situation awareness in decision-making: an integrative review
- relationships between psychological safety climate facets and safety behavior in the rail industry: a dominance analysis
- safety climate, perceived risk, and involvement in safety management
- safety climate dimensions as predictors for risk behavior
- trust in science increases conservative support for social distancing. osf 2020, cngq8
- an assessment of the generalizability of internet surveys
- the effects of risk perceptions related to particulate matter on outdoor activity satisfaction in south korea
- study on the development of an infectious disease-specific health literacy scale in the chinese population
- the impact of organizational climate on safety climate and individual behavior
- core dimensions of the construction safety climate for a standardized safety-climate measurement
- analytic thinking, rejection of coronavirus (covid-19) conspiracy theories, and compliance with mandated social-distancing: direct and indirect relationships in a nationally representative sample of adults in the united kingdom
- knowledge and beliefs towards universal safety precautions during the coronavirus disease (covid-19) pandemic among the indian public: a web-based cross-sectional survey
- coefficient alpha and the internal structure of tests
- introduction to psychometric theory
- applied multivariate data analysis: volume ii, categorical and multivariate methods
- testing mediation and suppression effects of latent variables: bootstrapping with structural equation models
- knowledge, attitudes, and practices towards covid-19 among chinese residents during the rapid rise period of the covid-19 outbreak: a quick online cross-sectional survey
- adoption of personal protective measures by ordinary citizens during the covid-19 outbreak in japan
- social distancing 2.0 with privacy-preserving contact tracing to avoid a second wave of covid-19

the authors declare no conflict of interest.

key: cord-024824-lor8tfe6 authors: asgary, ali; ozdemir, ali ihsan; özyürek, hale title: small and medium enterprises and global risks: evidence from manufacturing smes in turkey date: 2020-02-12 journal: int j disaster risk sci doi: 10.1007/s13753-020-00247-0 sha: doc_id: 24824 cord_uid: 24824

this study investigated how small and medium enterprises (smes) in a country perceive major global risks.
the aim was to explore how country attributes and circumstances affect sme assessments of the likelihood, impacts, and rankings of global risks, and to find out whether sme risk assessments and rankings differ from the global rankings. data were gathered using an online survey of manufacturing smes in turkey. the results show that global economic risks and geopolitical risks are of major concern for smes, while environmental risks are at the bottom of their ranking. among the economic risks, fiscal crises in key economies and high structural unemployment or underemployment were found to be the highest risks for the smes. failure of regional or global governance, failure of national governance, and interstate conflict with regional consequences were found to be among the top geopolitical risks for the smes. the smes considered the risk of large-scale cyber-attacks and massive incidents of data fraud/theft to be relatively higher than other global technological risks. profound social instability and failure of urban planning were among the top societal risks for the smes. although the global environmental and disaster risks were ranked lowest on the list, man-made environmental damage and disasters and major natural hazard-induced disasters were ranked the highest among this group of risks. overall, the results show that smes at the country level, for example in turkey, perceive global risks differently than the major global players. small and medium enterprises (smes) face many small and large internal and external risks. while they can better control most of the internal risks through risk management and treatment measures, they are more vulnerable to external risks because these risks are often beyond their control, influence, radar, and capacity to manage. the world economic forum (wef) has created, assessed, and monitored 30 global risks since 2005, using a survey of about 1000 major global stakeholders and players.
by the wef definition, a global risk is "an uncertain event or condition that, if it occurs, can cause significant negative impact for several countries or industries within the next 10 years" (world economic forum 2019, p. 100). small and medium enterprises play a vital role in local, national, and global economies and are very important for job and income generation (chowdhury 2011; oecd 2014; chatterjee et al. 2015). at least 90% of the firms in both developed and developing countries are smes (mbuyisa and leonard 2017). they account for 40-60% of gdp in developed and developing countries (igwe et al. 2018) and generate about 40% of global industrial production and 35% of the world's exports (sharma and bhagwat 2006; mbuyisa and leonard 2017). small and medium enterprises are the backbone of the european economy, accounting for more than 99.8% of all non-financial businesses, 58% of total value added, and 66.8% of total employment (briozzo and cardone-riportella 2012; european commission 2015). in japan, more than 99.7% of all firms are smes; they employ more than 70% of the workforce and create more than 50% of all value added in the manufacturing industry (yoshino and taghizadeh-hesary 2018). small and medium enterprises comprised 99.8% of the firms in turkey in 2014 and accounted for 55.1% of exports and 37.7% of imports (kaya and uzay 2017). considering their size and role in the national and global economies, and the fact that the enhancement of the private sector's resilience depends on risk reduction by smes (chatterjee et al. 2015), more studies are needed to better understand various aspects of sme risk management. small and medium enterprises, like large corporations, face a significant number of risks, and their survival and resilience are important for national and global economies. however, smes are less prepared to manage these risks, and the institutional support for them is rather weak (han and nigg 2011).
small and medium enterprises around the world, particularly in developing and emerging economies, do not have strong risk management, business continuity, and crisis management cultures and systems in place (asgary et al. 2013; yuwen et al. 2016; kaya and uzay 2017). most smes do not have the resources and expertise to focus on these activities and are therefore more vulnerable to internal and external risks and disruptive shocks (leopoulos et al. 2006; marks and thomalla 2017). to minimize the impacts, it is important that smes become more aware of global risks, as well as assess, monitor, and enhance their risk management and business continuity management capacities (güneş and teker 2010; brustbauer 2016; kaya and uzay 2017). the goals of this study were twofold: (1) to examine whether country attributes and circumstances affect sme assessments of the likelihood, impacts, and rankings of global risks; and (2) to find out whether sme risk assessments and rankings differ from the global rankings. manufacturing smes in an emerging economy with global footprints were selected because, unlike the wef, which takes its sample from large international players, the sampled smes are small individual players in the global economy, and it is important to see how they view the global risks. the 2019 global risk report by the world economic forum (wef 2019) examines 30 important global risks that are classified into five categories: economic, environmental, geopolitical, societal, and technological (table 1). these risks are evaluated annually based on the views of 1000 global players and stakeholders.
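the survey-based risk assessment described above can be sketched as a simple aggregation: respondents rate each risk's likelihood and impact, and risks are ranked by the product of the mean scores. the risk names, the 1-5 scales, and the likelihood×impact index below are illustrative assumptions, not the wef's or the study's actual method or data.

```python
import numpy as np

risks = ["fiscal crises in key economies", "large-scale cyber-attacks",
         "extreme weather events", "interstate conflict",
         "high structural unemployment"]

rng = np.random.default_rng(4)
n_respondents = 200
# simulated 1-5 ratings for each (respondent, risk) pair
likelihood = rng.uniform(1.0, 5.0, size=(n_respondents, len(risks)))
impact = rng.uniform(1.0, 5.0, size=(n_respondents, len(risks)))

score = likelihood.mean(axis=0) * impact.mean(axis=0)  # simple L x I index
order = np.argsort(score)[::-1]                        # highest risk first
ranking = [risks[i] for i in order]
```

comparing such a ranking computed from sme respondents against one computed from large global players is, in spirit, the comparison this study performs.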
according to the 2019 wef global risk report, extreme weather events, failure of climate-change mitigation and adaptation, natural disasters, data fraud or theft, cyber-attacks, man-made environmental damages and disasters, large-scale involuntary migration, biodiversity loss and ecosystem collapse, water crises, and asset bubbles in a major economy were ranked the top 10 global risks in terms of likelihood. weapons of mass destruction, failure of climate-change mitigation and adaptation, extreme weather events, water crises, natural disasters, biodiversity loss and ecosystem collapse, cyber-attacks, critical information infrastructure breakdown, and man-made environmental damages and disasters were ranked among the top global risks in terms of impact. global economic risks have significant implications for smes, particularly those in the manufacturing sector. asset bubbles in a major economy can increase production costs through inflation, wage increases and labor shortages, and access to financial resources, which will impact the global economy (zheng et al. 2010). global financial crises cause a substantial downturn in the formation of new smes, their performance, and their survival in the market. the 1997-1998 world financial and economic crisis severely impacted smes: as interest rates started to rise, many smes went bankrupt due to the credit crunch, tight monetary policies, and the decline in domestic and international demand (filardo 2011; wehinger 2014). the number of bankrupted smes in south korea, for example, particularly in the manufacturing sector, increased by nearly 100% from 1996 to 1998 (gregory et al. 2002). the 2008 economic crisis induced severe socioeconomic impacts worldwide and affected smes in almost every economy, far beyond expectations, through fast domino effects that caused massive sme closures and downsizing and reduced the number of new ventures (chowdhury 2011; sannajust 2014).
small and medium enterprises were under extreme pressure and experienced devastating decreases in demand and revenues, increased lay-offs, and stressful working environments (kossyva et al. 2014). close to 50% of the smes in belgium and the netherlands, for example, experienced extended delays in their receivables (kossyva et al. 2014). small and medium enterprises in the united states lost 2.8 million jobs (gagliardi et al. 2013). during this global turmoil, turkish smes were also impacted heavily (karadag 2016). during an economic crisis, smes are more vulnerable because of weak cash flow and financial structures, low equity reserves, limited adaptation potential and flexibility for downsizing, liquidation problems, excessive dependency on external financial resources, tightened credit lines, payment delays on receivables, lack of resources, and lack of the necessary skills to adapt or make the necessary strategic decisions (ates et al. 2013; sannajust 2014; wehinger 2014; karadag 2016). failure of aging and insecure energy, transportation, and communications infrastructure can pose major short- and long-term risks to sme performance and competitiveness. high structural unemployment lowers demand for goods and services and impacts smes significantly (alegre and chiva 2013). illicit trade reduces sme competitiveness in the global market. in countries with higher levels of economic risk, smes have less of a chance to flourish (mekinc et al. 2013). energy is an important input for sme production and logistics. if energy prices are not manageable or controllable, smes face major uncertainties about energy costs and availability (mulhall and bryson 2014). energy price shocks raise sme production costs (kilian 2008) and compromise smes' individual and collective competitiveness in the global economy.
mainly because smes are usually less flexible with respect to their energy sources, and because manufacturing smes are very energy intensive, unpredicted fluctuations in energy prices affect them extensively. energy price shock events have become more frequent and a consistent feature of the energy markets in recent years (mulhall and bryson 2014). as the global demand for energy increases, more energy price shock events are expected. finally, unmanageably high inflation rates at national and global levels pose risks to smes through higher interest rates (cefis and marsili 2006; gül et al. 2010). small and medium enterprises around the globe, particularly those that are part of global supply chains, are exposed to various types of global environmental and disaster risks that can have devastating impacts (auzzir et al. 2018). these enterprises are highly vulnerable to, and not well prepared for, most of the global environmental and disaster risks (crichton 2006; schaefer et al. 2011). they are vulnerable to environmental disaster risks on four fronts: capital, labor, logistics, and markets (ballesteros and sonny 2015). environmental and disaster risk events can damage and disrupt the supply chain networks in which many smes are embedded. they can also damage sme assets, premises, and inventories, disrupt their operations, increase their production costs, and reduce their revenues and long-term growth potential (snyder and shen 2006; griffiths 2010, 2012; asgary et al. 2012). small and medium enterprises have limited capabilities to recover from these events and bring their operations, revenue, and profit back to pre-event conditions (asgary et al. 2013). considering the links that exist between climate change and extreme events, it is expected that these events will increase in the future (ipcc 2013). small and medium enterprises face significant climate change-related environmental and regulatory risks (schaefer et al. 
2011). major costly floods, severe heat and cold waves, heavy rains, and extreme storms with higher frequency and intensity are observed globally. extreme events not only cause disruptions and destruction to smes, but also create major challenges for their continuity of operations and future planning (gunawansa and kua 2014; gasbarro et al. 2018). studies show that overall about 25% of smes do not reopen following a major disaster (ballesteros and sonny 2015). of the us companies that experience disasters, for example, 43% never reopen, and another 29% close within 2 years (weinhofer and busch 2013; ballesteros and sonny 2015). small and medium enterprises are worse off after disaster events compared to before because they are relatively resource constrained, less resilient, are often informal, sometimes do not fully comply with or are not required to follow standards and codes, lack necessary insurance, do not carry out risk assessments, and are often without business continuity plans (ye and abe 2012; undp 2013; ballesteros and sonny 2015; halkos et al. 2018). being prone to multiple natural hazards such as flooding, earthquakes, and drought, turkey has seen its smes affected by natural hazards and disasters as well. the 1999 earthquake had significant economic impacts on the enterprise sector, ranging from usd 1.1 to 4.5 billion in damages (oecd 2000), most of it from losses in manufacturing (usd 600 to 700 million). about 63.2% of the total manufacturing industry was damaged in five provinces, and 31,000 smes suffered heavy physical damage. ezgi (2014) reported that the vast majority of smes had little preparedness before the earthquake and only 30% of them had invested in insurance beforehand. a world of geopolitical instability and uncertainty is a major concern for all sectors and businesses, but more so for smes. many of these risks are cross-border, with global consequences. 
while existing international political and economic agreements such as those of the world trade organization (wto) are weakened by unilateralism, there is little evidence that new and better multilateral agreements are replacing them (pascual-ramsay 2015; asgary and ozdemir 2019). rather, these agreements are being replaced by fragmentation, bilateralism, regionalism, as well as local and short-term interests (pascual-ramsay 2015; asgary and ozdemir 2019). the international economy and its key players, including smes, are becoming more exposed and vulnerable to existing and emerging geopolitical risks and uncertainties (pascual-ramsay 2015). studies show that terrorist attacks, for example, even though they are very small in terms of direct physical impact zones, have economic impacts that are often substantial and very extensive. repeated terrorist attacks in one country not only affect the economy of that country but create spillover impacts for neighboring countries and the global economy. terrorist attacks discourage foreign investments and capital inflows and cause significant losses of economic activity and international trade (abadie and gardeazabal 2008; araz-takay et al. 2009). these risks can also increase insurance, transaction, transportation, and security costs for smes. turkey, as an emerging economy located in a geopolitically complex region (the middle east and north africa), with several potentially failing neighboring states, and as a member of various types of regional agreements, is in a unique situation in terms of geopolitical risks. turkey has been suffering from terrorism and dealing with regional conflicts, both of which have had various impacts on its smes. the presence of terrorist activities has affected the emergence and growth of smes and the overall economic performance of the country. bilgel and karahasan (2017) found that after the rise of terrorism, per capita real gdp in eastern and southeastern anatolia declined by about 6.6%. 
other studies have also found that terrorism has a major negative impact on foreign direct investment in turkey (omay et al. 2013). global societal risks have specific implications for smes. failure of urban planning leads to declining cities, informal urban growth or sprawl, and poor and fragile infrastructure with significant social, environmental, and health issues (asgary and ozdemir 2019). such urban environments are not able to adequately support entrepreneurship activities that can compete at national and global levels. cities without efficient and interconnected transportation systems, with significant air pollution, and with unaffordable land and housing prices are not attractive for entrepreneurship growth (tursab and tuader 2017). but sme engagement in risk management and critical infrastructure protection is an effective way to reduce the impact of future disasters in urban areas (chatterjee et al. 2015; chatterjee et al. 2016). food and water crises are other important global risks that can affect smes in several ways, particularly those in the agri-food business and those in water-intensive manufacturing sectors. social instability, another global risk, is not healthy for sme growth and competitiveness. global pandemics such as the 2003 severe acute respiratory syndrome (sars) pandemic and the 2009 h1n1 pandemic can have immediate direct and indirect impacts on smes. for example, sars had major impacts on smes, particularly those in the tourism and hospitality sector in heavily affected countries such as china, canada, thailand, and hong kong (kuo et al. 2008). studies have found that many smes do not recognize pandemics as a meaningful risk. although governments have tried to raise awareness and provide resources to enhance pandemic preparedness by smes, awareness, concern, and actual preparedness have not changed much, and most smes do not have appropriate preparedness and continuity plans for future pandemics (watkins et al. 2008). 
armed conflicts, interstate wars, natural hazards and disasters, and climate change are creating widespread involuntary and forced displacement around the globe. population displacements have a range of economic, social, and political impacts on both source and host countries (tumen 2016; salgado-gálvez 2018). the impacts of forced migration on smes have not yet been studied, but they may be both positive and negative. at the least, smes can be considered a partial solution to some of these problems by providing job opportunities for displaced people. turkey has received more than 4 million displaced people from syria since the start of the conflict in 2012 (onur 2018). the adverse consequences of technological advances could be very diverse and consequential for smes, especially those in the manufacturing sector. new technologies such as robotics, autonomous vehicles and drones, automation, smart phones, artificial intelligence, 3-d printing, cloud computing and big data, and new materials are among those that can have unintended consequences and risks for manufacturing smes. these technologies have the potential to reduce outsourcing. studies predict that 47% of the jobs in the united states (many of them in smes) are at high risk of being automated over the next 20 years, especially in manufacturing, logistics, and administrative support (pascual-ramsay 2015). these advances will possibly reduce employment opportunities for workers in manufacturing smes and will challenge smes' survival. while information technology brings significant growth opportunities for smes through knowledge and information availability, business communication, cost savings and efficiency, improved decision making, responsiveness, and overall flexibility (mbuyisa and leonard 2017), technology also introduces risks, including data theft, disruptions, and cyber-attacks (chacko and harris 2006). 
like other institutions, smes are dependent on the internet and information technology, and a substantial share of their sales and orders are handled through cyberspace and networks. any major failure or disruption of the national and global information infrastructure and networks due to large-scale disaster events can have significant negative impacts on smes. such disruptions can have severe consequences for smes, which are very vulnerable and often without adequate protection. small and medium enterprises use these technologies in production and service delivery, distribution, sales, and marketing. data breaches, cyber security incidents, and intentional or accidental technological failures can disrupt or significantly damage the short- and long-term operation as well as the very existence of smes. following the wef (2019), this study uses a qualitative risk assessment (qra) approach. this allows the results of the study to be compared with the global risk report results. qualitative risk assessment is one of the most widely used risk assessment approaches because it is low cost, easy to use, and quick to perform (modarres 2006). in qra, potential likelihoods and consequences are assessed using qualitative scales such as low, medium, and high. qualitative risk assessment uses subjective likelihood and consequence values collected from experts and decision makers and, as such, they are not always perfect estimates and are subject to biases and heuristics (talbot 2011). the assessed likelihoods and consequences for the selected risks are then plotted in a two-dimensional space to generate a risk matrix. various risk matrix forms and sizes have been reported in risk assessment reports. a risk matrix is used to visualize, compare, and rank different risks based on their locations in the matrix. color coding is commonly used to show the importance of each risk. 
the risk matrix approach is also used for indicating possible risk control measures and for recording the inherent, current, and target levels of risk (hopkin 2012). a risk matrix provides some basis for risk treatment and management. risks that are located in the top right-hand corner of the risk matrix (often colored red) have higher likelihoods and impacts. these risks are very critical and need to be controlled. risks that are in the lower part (colored green) and middle part (colored orange or yellow) of the matrix should be monitored and checked regularly. although the risk matrix method has been criticized by scholars and professionals (cox 2008; ni et al. 2010; bao et al. 2017), it is an invaluable tool for fast, effective, and practical risk assessment (talbot 2011). data were collected from a sample of manufacturing smes in turkey. small and medium enterprises in turkey are categorized into three groups of micro, small, and medium-sized enterprises based on their employee numbers and annual revenues. micro firms are those with fewer than 10 employees and less than usd 430,000 annual turnover. small firms are those with fewer than 50 employees and less than usd 3.4 million annual turnover, and medium-sized firms are those with fewer than 250 employees and less than usd 17.2 million annual turnover (karadag 2016). to assess and evaluate the risks, a questionnaire survey including 19 questions was developed. several questions collected general information about the production type, years in operation, city of operation, position of the respondent in the business, percentage of production for export, percentage of imported production materials, and export countries. in two sets of questions, sme representatives provided their opinion about the consequences and likelihoods of the global risks. samples of a risk likelihood question and a risk consequences question are: 6. 
review the following global economic risks and give your opinion on the likelihood of these risks occurring in the manufacturing sector in turkey over the next 10 years.
• critical infrastructure failure: • very unlikely • unlikely • somewhat likely • likely • very likely
7. please review the following global economic risks and give your opinion about the potential impacts/consequences of these risks on the manufacturing sector in turkey over the next 10 years.
• critical infrastructure failure:
the questionnaire was designed and distributed using google forms. small and medium enterprises operating in the manufacturing sector (nace rev. 2, section c, divisions 10-33) were included in the population framework. these are smes in the nace classes that are registered with kosgeb (small and medium industry development organization, turkey) and had an approved kobi̇ (sme) certificate in 2017. the survey link was emailed to about 40,000 smes on 19 april 2019. potential respondents were asked to complete the online survey by 3 may 2019. by the deadline, 217 completed responses had been received. the sample covers smes in different manufacturing areas. after the unspecified ''other'' manufacturing subgroup (39), smes in food products (22), textiles (22), machinery and equipment (22), furniture (20), fabricated metal (13), basic metal (9), wood products (9), rubber and plastic (9), electrical equipment (8), and chemical products (8) had the highest numbers of participants in this study. the questionnaire was completed by various individuals within the sample businesses, including managers (36), owners (20), accounting managers (13), financial managers (9), business partners (10), board members (3), engineers (6), and other employees (9). the sample smes operate in 50 different cities and 7 geographic regions of turkey, including marmara (72), central anatolia (47), aegean (29), black sea (24), mediterranean (23), eastern anatolia (11), and southeastern anatolia (11). 
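responses on the five-point scales shown above are typically coded numerically before likelihoods and impacts are averaged. a minimal sketch of such coding (the 1-5 numeric mapping and the example answers are assumptions for illustration, not details stated in the paper):

```python
# map the questionnaire's five-point likelihood labels to 1-5 scores and
# average them; the numeric coding is an assumed convention, not stated
# explicitly in the study.
LIKERT = {
    "very unlikely": 1,
    "unlikely": 2,
    "somewhat likely": 3,
    "likely": 4,
    "very likely": 5,
}

def mean_score(responses):
    """average the coded scores of a list of survey responses."""
    scores = [LIKERT[r] for r in responses]
    return sum(scores) / len(scores)

# e.g., three hypothetical answers to the critical infrastructure failure item
print(mean_score(["likely", "very likely", "somewhat likely"]))  # -> 4.0
```

the same mapping would apply to the impact/consequence items, giving each risk a mean likelihood and a mean impact on the 1-5 scale.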
the majority of the sample businesses (132) have been in operation for less than 10 years; only 34 have been in operation for 11 to 20 years, 29 for between 21 and 30 years, and the rest (22) have been in business for more than 31 years. about 39.6% of the sample businesses were micro businesses, 37.8% small businesses, and about 22.6% medium-sized enterprises. more than 60% of the smes export their products to varying degrees. they export to a large list of countries, neighboring and european countries in particular. the sample smes also import some of their raw materials and equipment, and about 85% use imported products in their production. applying the methodology to the collected data, the respondents' perceived likelihoods of the global risks and their impacts were identified, risk values were calculated, and a risk matrix was generated from these values. this section presents the key findings. almost all global economic risks are perceived to have very high or high likelihoods by the sample turkish smes (fig. 1a). however, fiscal crises in key economies, high structural unemployment or underemployment, and severe energy price shocks are among the most likely risks according to the sample enterprises. a majority of the smes thought that catastrophic and severe impacts can be expected from the economic risks, particularly unmanageable inflation, high structural unemployment or underemployment, and fiscal crises in key economies (fig. 1b). global environmental risks seem to have relatively lower likelihoods for the sample smes compared with the global economic risks (fig. 2a). man-made environmental damage caused by humans and major natural hazards and disasters show higher perceived likelihoods. the perceived impacts of these risks were also scored lower. among these risks, environmental damage caused by humans is perceived to have slightly higher impacts for the sample businesses (fig. 2b). 
figure 3a and b show the sample sme respondents' opinions about the likelihoods and the consequences of the global geopolitical risks. failure of national governance, failure of regional or global governance, and large-scale terrorist attacks have the highest average perceived likelihoods in this risk category. however, the impacts are assessed to be higher for interstate conflicts with regional consequences, followed by failure of national governance and failure of regional or global governance. among global societal risks, failure of urban planning and profound social instability were perceived to have the highest likelihood and impact averages among the sample businesses (fig. 4a, b). figure 5a and b present the stated likelihoods and consequences of the global technological risks. while the likelihood of all these risks is perceived to be high, large-scale cyber-attacks and large-scale data fraud are among the top in this risk group. although the means of the impacts are lower for most of these risks, except for the negative consequences of technological developments, more smes stated that the consequences of large-scale cyber-attacks and large-scale data fraud are expected to be severe and catastrophic. risks can be calculated as the multiplication of likelihood by impact (table 2). using the mean values of each risk category, economic and technological risks are perceived to have the highest likelihood levels, followed by geopolitical risks (fig. 6a). in terms of impacts, however, economic risks and geopolitical risks take the first and second ranks, followed by technological risks. societal and environmental risks are considered to have lower impacts (fig. 6b). in terms of overall risk, the results show that economic risks and geopolitical risks take first and second place, followed by technological risks (fig. 6c). using the qualitative risk analysis methodology and the perceived likelihood and impact data, a risk matrix was generated. 
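the risk calculation described above (risk = likelihood × impact, then averaged by risk category) can be sketched as follows; the numeric values below are hypothetical illustrations, not the study's data from table 2:

```python
# compute risk = likelihood x impact for each risk and average by category;
# the category names mirror the study, but the scores are hypothetical.
from collections import defaultdict

risks = [
    {"category": "economic",      "likelihood": 4.2, "impact": 4.0},
    {"category": "economic",      "likelihood": 4.0, "impact": 3.8},
    {"category": "environmental", "likelihood": 3.1, "impact": 2.9},
]

totals = defaultdict(lambda: {"risk": 0.0, "n": 0})
for r in risks:
    cat = totals[r["category"]]
    cat["risk"] += r["likelihood"] * r["impact"]  # risk = likelihood x impact
    cat["n"] += 1

mean_risk = {c: round(t["risk"] / t["n"], 2) for c, t in totals.items()}
print(mean_risk)  # economic risks rank above environmental risks
```

with this kind of aggregation, each category receives a single mean risk value, which is what allows the categories to be ranked and the individual risks to be placed on the two-dimensional matrix.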
although the horizontal and vertical axes take values from 1 to 5, the matrix axes have been rescaled for better visualization. this risk matrix displays the means of the stated likelihoods and consequences for each risk. risks at the top right, in the red colored area, are risks with higher than average likelihoods and consequences. risks in the lower left part of the matrix, colored green, are considered to be low. figure 7 presents the resulting risk matrix for the 217 businesses and the 30 risks. most economic risks are in the upper part of the risk matrix, followed by some geopolitical risks. large-scale data fraud is the only technological risk that falls into the same area. the majority of environmental and societal risks, although scattered diagonally across the risk matrix, are in its lower part. this study examined the global risks from the perspective of manufacturing smes with global footprints in the emerging economy of turkey. the main aim was to understand whether and to what extent country- and industry-specific contexts and conditions affect smes' perceptions of the global risks. the key findings are discussed here. first, overall the results suggest that regardless of the ranking, the global risks are of high concern for smes in turkey. the average likelihood for all global risks is 3.56 and the average impact is 3.1. the minimum perceived likelihood (infectious disease) is 3.05 and the minimum perceived impact (severe weather events) is 2.7. these figures confirm that all global risks are of concern for the smes and have significant implications for them, particularly those in the manufacturing sector (zheng et al. 2010). second, the findings indicate that the smes' perceived risks at the country level (turkey) varied significantly from those perceived by the global companies in the global risk report (world economic forum 2019) (table 3). 
while this study does not examine the underlying causes of these differences, it is evident that the smes' major concerns are global economic and geopolitical risks, both in terms of likelihoods and impacts. individual smes in turkey have been exposed to and impacted by the global economic risks more than other risks. our findings are consistent with several different but related studies conducted by gül et al. (2010), topçu (2013), and deloitte (2017), which found that economic and financial risks such as devaluation of the turkish lira, interest rate risk, breakdown in cash flow or liquidity risk, credit risk, and increases in input prices were the key risks that businesses face in turkey. third, it is not surprising that smes' highest perceived risks are economic and geopolitical risks. studies demonstrate that financial and economic crises cause substantial harm to smes (gregory et al. 2002; zheng et al. 2010; chowdhury 2011; filardo 2011; kossyva et al. 2014). small and medium enterprises are very vulnerable to economic and financial crises, as they are forced to close, downsize, and reduce the number of new ventures due to sharp decreases in demand and revenues (ates et al. 2013; sannajust 2014; wehinger 2014). in today's global economy, turkish smes are not exempt from this, and they have been frequently impacted by such risks over the past two decades as well (karadag 2016). moreover, the high likelihood and high impact values given for financial crises and other economic risks can be explained by the fact that the turkish economy has a deficit in international trade (abbasoglu et al. 2019) and relies heavily on external energy sources such as oil and natural gas. fourth, failure of regional or global governance and failure of national governance are the geopolitical risks that are among the top perceived risks by the smes in this study. these risks have been felt strongly by turkish smes in recent years. 
turkey has been in close proximity to a number of regional conflicts with potential impacts on the smes (omay et al. 2013; bilgel and karahasan 2017), and because of their vulnerability (pascual-ramsay 2015) and awareness of these risks, such risks are rated highly both in terms of likelihoods and impacts. fifth, the sample smes also consider the likelihood of large-scale data fraud/theft and large-scale cyber-attacks to be high. this is possibly due to the increasing dependency of the smes on the internet and the increasing number of cyber-attacks and data thefts in recent years (mbuyisa and leonard 2017). while smes do not consider the impacts of these risks to be as high as their likelihoods, these risks can still cause disruptions and severe consequences for them, particularly because they are not well equipped to manage these risks. a recent report published by allianz (2019) confirms that smes in turkey increasingly recognize their cyber vulnerability and risks. finally, the relatively lower perceived likelihoods and impacts of the global environmental risks among the sample smes can be attributed to the fact that small businesses may not be directly and highly impacted by distant environmental risks such as major natural hazard-induced disasters, and that awareness about some of the environmental risks among the smes may be lower than for other risks. moreover, turkey has not experienced a major natural hazard-induced disaster in the past 20 years, and major weather events have been very local. the results of this study indicate the importance of global risk assessments by smes. as more and more smes become connected with the national and global economies, their awareness of these risks and the impacts they could have will increase. this awareness can help smes take these risks into consideration and prepare themselves for such risks. 
this study highlighted that smes' perceptions of the global risks differ from those of businesses that operate at a large scale at the global level. it also demonstrated that a country's circumstances can affect smes' assessments of the likelihood, impacts, and rankings of global risks. it showed that smes are more concerned about economic risks and risks that directly impact economic systems and variables, particularly geopolitical risks. environmental risks, while important, are not at the top of the list for smes. considering the significant role that smes play in local and national economies, and the fact that they are most concerned about global economic and geopolitical risks, it can be argued that efforts towards lowering global economic and geopolitical risks can significantly benefit smes. since turkey's smes have been in a relatively unique situation in the past two decades with respect to some of the major global risks, similar studies in countries in other parts of the world may shed more light on how country contexts and the type and size of businesses shape smes' perceptions of global risks. it was beyond the scope of this study to examine the smes' risk management and business continuity actions taken to manage and mitigate the risks. future studies could also investigate whether and how smes prepare themselves for global risks. open access this article is licensed under a creative commons attribution 4.0 international license, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons licence, and indicate if changes were made. the images or other third party material in this article are included in the article's creative commons licence, unless indicated otherwise in a credit line to the material. 
if material is not included in the article's creative commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. to view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/. terrorism and the world economy the turkish current account deficit linking entrepreneurial orientation and firm performance: the role of organizational learning capability and innovation performance allianz risk baromter 2019 sme business risks the endogenous and non-linear relationship between terrorism and economic performance: turkish evidence global risks and tourism industry in turkey disaster recovery and business continuity after the 2010 flood in pakistan: case of small businesses measuring small businesses disaster resiliency: case of small businesses impacted by the 2010 flood in pakistan the development of sme managerial practice for effective performance management impacts of disaster to smes in malaysia building philippine smes resilience to natural disasters. pids discussion paper series comparison of different methods to design risk matrices from the perspective of applicability the economic costs of separatist terrorism in turkey evaluating the impact of public programs of financial aid to smes during times of crisis: the spanish experience enterprise risk management in smes: towards a structural model survivor: the role of innovation in firms' survival information and communication technology and small, medium, and micro enterprises in asia-pacific-size does matter bangkok to sendai and beyond: implications for disaster risk reduction in asia identifying priorities of asian small-and medium-scale enterprises for building disaster resilience impact of global crisis on small and medium enterprises what's wrong with risk matrices climate change and its effects on small businesses in the uk. 
key: cord-033655-16hj7sev authors: miroudot, sébastien title: reshaping the policy debate on the implications of covid-19 for global supply chains date: 2020-10-12 journal: j int bus policy doi: 10.1057/s42214-020-00074-6 sha: doc_id: 33655 cord_uid: 16hj7sev disruptions in global supply chains in the context of the covid-19 pandemic have re-opened the debate on the vulnerabilities associated with production in complex international production networks. to build resilience in supply chains, several authors suggest making them shorter, more domestic, and more diversified. this paper argues that before redesigning global supply chains, one needs to identify the concrete issues faced by firms during the crisis and the policies that can solve them. it highlights that the solutions that have been proposed tend to be disconnected from the conclusions of the supply chain literature, where reshoring does not lead to resilience, and could further benefit from the insights of international business and global value chain scholars. lastly, the paper discusses the policies that can build resilience at the firm and global levels and the narrative that could replace the current one to reshape the debate on the policy implications of covid-19 for global supply chains. with covid-19, the debate has re-emerged on the vulnerabilities of an interconnected world where goods are produced in complex value chains that span across borders. international production and supply chains were criticized because of the economic disruptions they allegedly created when a pandemic interrupted trade and the movement of people across countries, adding to existing fears and concerns about globalization (kobrin, 2020) . 
reshaping global supply chains, and possibly making them shorter, more domestic, or more diversified, was therefore proposed to bring some resilience into production networks (coveri, cozza, nascia, & zanfei, 2020; javorcik, 2020; lin & lanng, 2020; o'leary, 2020; o'neil, 2020; shih, 2020). this debate builds on several concepts used in supply chain risk management, starting with 'resilience'. however, some of the solutions proposed, such as reshoring or diversifying production away from china, may be motivated by a different policy agenda than risk mitigation (evenett, 2020). in this paper, i argue that before reshaping global supply chains, the debate itself needs to be reframed and more solidly grounded in business reality and in lessons from the literature. there is an important corpus of knowledge in the supply chain and risk management literature that tells firms what to do to improve the resilience of their own production networks. however, there are fewer answers on what resiliency means at the country or global level and what global value chain-oriented policies can be adopted to strengthen it. this is where the international business (ib) and global value chain (gvc) literature can provide further insights. for the ib community, covid-19 can be seen as an opportunity to bring to policy circles the knowledge on firms and the organization of multinational enterprises (mnes) that can help to shape the debate on the resilience of supply chains, in line with the ambition of the journal of international business policy (van assche, 2018; van assche & lundan, 2020). as noted by lorenzen, mudambi and schotter (2020), studying mne risk mitigation strategies in the context of covid-19 can be a fruitful avenue for ib research. strange (2020) already provides some interesting thoughts about how gvcs may be reorganized once the crisis is over. the concept of resilience is not new in the gvc literature.
it was used for example to highlight the recovery of trade networks after the great financial crisis in 2008 (cattaneo, gereffi, & staritz, 2010) . more recently, gereffi (2020) addresses the issue of the resilience of medical supply gvcs. however, as policymakers now seem to associate resilience with a specific type of organization of gvcs where mnes produce mostly through more localized or shorter supply chains, new questions arise on the type of governance that would allow such organization and on the way policymakers could influence the design of gvcs. the main risk with the current debate on the economic policy implications of covid-19 is that it can lead to the use of supply chain concepts by policymakers and international organizations in a way that departs from business reality, thus leading to wrong policy choices. the idea that reshoring unambiguously improves the resilience of supply chains, for example, is not supported by academic research. if there is a case for linking reshoring to higher resiliency, it should be brought up based on evidence, on a deeper discussion of the specific circumstances where it might be a strategy mitigating risks, and one would also need to disentangle the different policy rationales (e.g., bringing jobs back home versus creating more resilient supply chains). what is at stake in this debate are three decades of productivity gains and innovation driven by the internationalization of production, as well as higher levels of income in many emerging economies (world bank, 2019). building more resilient supply chains should not lead to the dismantlement of gvcs. it should also not replace the risks related to covid-19 by new policy hazards and a higher level of uncertainty for companies. against this backdrop, the paper suggests that the debate on the policy implications of covid-19 for international supply chains can be improved in three ways. 
first, there is a need to better understand the 'vulnerabilities' of global supply chains during covid-19. that is, the primary step in reshaping the debate is to identify what went wrong during covid-19. second, one needs to compare the current policy proposals to established insights from the business literature. i illustrate this with a discussion of the effects of building redundancy in suppliers, just-in-case management, and domestic supply chains. the literature indicates that these strategies are not the best suited to boost resilience in gvcs. still, their analysis is useful to point to better and more realistic policy options and to see where ib research can help. lastly, as the literature suggests that it is at the firm level that resilience is built (or at the level of mnes or lead firms in gvcs), the question is what resilience means at the country or global level and what governments can do to strengthen it. answering these questions can set the stage for a new narrative on the policy implications of covid-19 for global supply chains.

prescribing the solution

unlike the great financial crisis of 2008-2009 that provoked a collapse of trade financing, covid-19 has prompted an economic crisis that is not specifically a trade crisis. the most affected industries are services that do not rely on long and complex value chains but involve movements of people (benz, gonzales, & mourougane, 2020). as china was the first country to put a lockdown into effect in january, there was initially the fear that many manufacturing gvcs would be disrupted because key inputs from china would not be delivered. this immediately triggered a series of papers warning about the vulnerabilities of international supply chains and the risk of producing in china (braw, 2020; gertz, 2020; linton & vakil, 2020, among others).
many manufacturing value chains indeed rely on inputs produced in china, and calculations with international input-output tables suggest that not having access to chinese inputs can have a high economic impact on the rest of the world (baldwin & freeman, 2020). however, there is little evidence at this stage about how serious the disruptions related to the (partial) lockdown of the chinese economy actually were. the reason is that large parts of the world also implemented lockdowns a few weeks later, and demand for most manufacturing goods started to fall at the same time. the chinese lockdown was also relatively short, and china was the first country to restart its economy. macro calculations can give an indication of how important a country is as a supplier of inputs for others. however, since companies have risk management strategies and inventories, the actual impact of china temporarily shutting down its exports is not known. figure 1 compares the projected fall in gdp in 2020 according to the latest oecd economic outlook (oecd, 2020a) with the import intensity of production in g20 economies (an indicator of the reliance on imported inputs all along the value chain). there is no apparent correlation between the two. the country that is the most dependent on gvcs is korea, which happens to be the economy with the lowest projected fall in gdp for 2020. at the opposite end of fig. 1 are eu economies that are dependent on their regional supply chains (but not so much on china) and were severely hit by covid-19. the idea that dependence on china or some other country creates supply chain vulnerabilities and that covid-19 has somehow materialized this fear would need to be substantiated by strong quantitative evidence, and this evidence would have to point to the size of economic losses rather than just the existence of disruptions. when analyzing the evidence, it is also important to propose some counterfactuals.
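the kind of macro calculation referred to above can be sketched with a toy two-country leontief model, using a crude 'hypothetical extraction' of one import linkage. all coefficients and demand figures below are invented for illustration; real exercises of this kind use full international input-output tables:

```python
import numpy as np

# Toy Leontief model: gross output x solves x = Ax + d, i.e. x = (I - A)^-1 d,
# where A[i][j] is the input from country i needed per unit of output in country j.
A = np.array([[0.20, 0.15],   # inputs supplied by country 0 (invented coefficients)
              [0.10, 0.25]])  # inputs supplied by country 1
d = np.array([100.0, 80.0])   # final demand in each country (invented)

x_base = np.linalg.solve(np.eye(2) - A, d)

# Crude 'hypothetical extraction': cut the input linkage from country 1 to
# country 0 and recompute gross output with the same final demand.
A_cut = A.copy()
A_cut[1, 0] = 0.0
x_cut = np.linalg.solve(np.eye(2) - A_cut, d)

# The gap x_base - x_cut measures the gross output supported by that linkage.
print("baseline:", x_base.round(1), "after extraction:", x_cut.round(1))
```

as the text notes, such calculations only bound the exposure to a supplier country; they say nothing about the mitigation provided by inventories and risk management strategies.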
using a quantitative model of production and trade, bonadio, huo, levchenko and pandalai-nayar (2020) find, for example, that there is a large drop in world gdp due to the transmission of the covid-19 shock through gvcs. however, the drop in gdp is higher under the scenario of a 'renationalization' of global supply chains. also using a quantitative model, oecd (2020b) highlights that in addition to higher costs, the re-localization of supply chains also leads to higher volatility in output, as there are fewer channels for economic adjustments. therefore, the right question might not be whether there are vulnerabilities associated with international sourcing but whether these vulnerabilities are higher than if production were concentrated domestically. while the jury is still out on the actual vulnerabilities of gvcs during covid-19, there are nonetheless three types of concrete issues that have been highlighted and that deserve to be addressed from a policy perspective. the chinese supply shock at the end of january and beginning of february 2020 is an example of international supply chain risk. whether it is a pandemic or a natural disaster, production can suddenly halt in a region of the world and induce a contagion effect to other regions through international supply chains. this was observed in 2011 with the tōhoku earthquake and tsunami in japan and the chao phraya floods in thailand, or in 2005 with hurricane katrina in the united states. many companies have learned to deal with such country-specific shocks by reinforcing their risk management strategies to mitigate the impact on their production processes. if building more resilient supply chains simply means improving the capacity of firms to face country- or region-specific supply chain risks, there is already abundant literature that indicates how this can be done (christopher & peck, 2004; sheffi, 2005a; manuj & mentzer, 2008; pettit, fiksel, & croxton, 2010; kamalahmadi & parast, 2016).
from the experience of firms at the beginning of 2020, it might be possible to revisit this literature and to draw new conclusions from additional case studies, as the field is always evolving (pettit, croxton, & fiksel, 2019). such case studies would be particularly useful to give insights on what exactly went wrong with global supply chains beyond the supply and demand shocks that have affected all firms during covid-19. the second type of disruption that has triggered the debate on gvcs concerns the production of medical supplies, and more particularly personal protective equipment (ppe). the shortage of face masks, a key product to fight the coronavirus, was also quickly turned into an international supply chain issue. however, the story behind face mask shortages was an exceptional surge in demand (oecd, 2020c; gereffi, 2020). the country concentrating half of the world's production of face masks (china) also faced a shortage, suggesting that, domestic production or not, the way to deal with a surge in demand is not to ask where production takes place but how production capacity can be rapidly ramped up. the shortage was also exacerbated by export restrictions put in place by some countries and by the fierce competition between governments to get access to existing stocks of face masks (fiorini, hoekman, & yildirim, 2020), highlighting that the problem is not limited to the organization of supply chains. china and gvcs provided the solution to the shortage with massive exports from china to the rest of the world in april-may, and one can wonder whether what seemed to be the issue (international sourcing from china) was not retrospectively the solution.
if building more resilient supply chains means preventing future shortages of essential products, the answer might lie in a discussion of stockpiling strategies, contingency plans and public-private partnerships, as well as addressing export restrictions put in place by governments (oecd, 2020c). like companies, governments need to assess risks and have risk management strategies that include plans for the production of essential goods (dasaklis, pappis, & rachaniotis, 2012). a closer look at supply chains from an ib perspective could nonetheless bring additional insight on how companies themselves can be prepared for a surge in demand and, more generally, for volatility in demand (as the surge in demand is followed by a fall when the crisis is over, leaving many companies with excess production capacity). supply chain risk and the issue of volatility in demand are not new and may not require the 'world after' to be radically different from the world of yesterday. although not necessarily making the headlines when they are unrelated to major natural disasters or a pandemic, disruptions in value chains are frequent (logistics incidents, fire in a warehouse, bankruptcy of a supplier, etc.). not all companies are well prepared to face risks (mckinsey global institute, 2020), but some are (sheffi, 2015), and building on advances in supply chain and risk management, companies should come up with sensible answers to the question of the resilience of their supply chains. new advances in big data analytics and the internet of things (iot) are also likely to provide new answers (birkel & hartmann, 2020). the third type of disruption, and maybe the most prevalent during covid-19, concerns the functioning of international trade networks.
trade did not come to a halt at the height of the crisis, but it was definitely more complicated (and more costly) for firms to export and to import because of tensions in transport services and issues with border controls (oecd, 2020d; wto, 2020). with travel bans, the supply of air cargo services was reduced, as half of air cargo shipments are on passenger flights. longer delays at the border were observed for customs procedures due to new health regulations and tighter controls, also affecting maritime and land transport. new port procedures and rules on the disembarkation of crews were also responsible for reduced capacity in the shipping industry. while transport companies are the ones that can mitigate the impact of such disruptions, it should be noted that these disruptions are the consequence of measures put in place by governments and that making border processes faster and safer is the basis of trade facilitation policies. addressing these types of issues does not require a reorganization of gvcs. once the problems faced by firms are clearly identified, it becomes easier to move to an evidence-based policy discussion and to see what type of policies or cooperation across countries could actually bring answers to supply chain risks or volatility in demand.

redundancy, 'just-in-case', and domestic supply

in addition to not having properly identified the issues to be solved, the debate on covid-19 and global supply chains has started with strong statements about solutions, relying on concepts from the business literature. these concepts tend to be used without actually referring to academic work, which can be explained by the fact that what the literature has to say is quite different from the recommendations made. this can be illustrated with three concepts used in the papers telling us about the new normal of supply chains in the post-covid-19 world: redundancy, just-in-case inventories, and domestic supply chains.
a new word that has appeared in the supply chain vocabulary of policymakers is 'redundancy'. in order to build more robust value chains, there should be some redundancy in suppliers (or supplier diversification), so that in case one fails, others can step in and provide the required inputs. redundancy is part of the toolkit of risk management strategies and can be applied not only to suppliers but also to inventories or production capacity (kamalahmadi & parast, 2016). however, it is generally regarded as a costly solution to mitigate risk. as summarized by yossi sheffi, one of the leading experts in organizational resilience: "companies can develop resilience in three main ways: increasing redundancy, building flexibility, and changing the corporate culture. the first has limited utility; the others are essential." (sheffi, 2005b). in an empirical study looking at 4000 us firms, jain, girotra and netessine (2016) found that supply chains with more diversified sourcing (i.e., the same products are sourced from different suppliers) have slower recovery after a disruption than supply chains relying on single sourcing. one of the reasons for this is that single sourcing is associated with long-term relationships with suppliers. these long-term relationships ensure faster recovery because suppliers are more committed to mitigating risks, are ready to go beyond their contractual obligations to address disruptions, and are more integrated in the production processes of the firm, with more information sharing. redundancy also means having some extra inventory or additional production capacity to face crises. however, the cost of holding a large inventory or maintaining spare production capacity often outweighs the gains from mitigating risks, particularly in the case of low-probability events.
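the cost-benefit logic of that last point can be made explicit with a one-line expected-value test; the probabilities and dollar figures below are invented for illustration:

```python
def redundancy_worth_it(p_disruption_per_year, avoided_loss, annual_carrying_cost):
    """Keep spare capacity/inventory only if the expected avoided loss
    exceeds the yearly cost of maintaining the redundancy."""
    return p_disruption_per_year * avoided_loss > annual_carrying_cost

# Frequent shock (e.g., a hurricane-prone region): redundancy pays off.
frequent = redundancy_worth_it(0.25, 10_000_000, 1_500_000)

# Rare shock (a once-in-a-century pandemic): the carrying cost dominates.
rare = redundancy_worth_it(0.01, 50_000_000, 1_500_000)

print(frequent, rare)  # True False
```

with a frequent shock the expected avoided loss covers the carrying cost; for a rare pandemic-scale event it does not, which is why permanent duplication is hard to justify for firms.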
for companies that regularly face hurricanes or adverse climate conditions, for example, redundancy can make sense (sheffi, 2015), but one cannot expect companies to invest in extra production capacity and inventories for a once-in-a-century pandemic. the issue of redundancy is clearly one where ib research can help to shape the policy debate. the multinational enterprise has been analyzed as a network of subsidiaries operating in different countries with the objective of managing risks, such as exchange rate volatility or policy risk (kogut & kulatilaka, 1994). even if there are switching costs, covid-19 has illustrated that some companies used their network to reallocate production, such as samsung temporarily moving the production of its high-end mobile phones from korea to vietnam when its factory was threatened by the coronavirus (financial times, 2020). this type of redundancy is more related to flexibility and does not imply duplicating capacity or multiplying the number of suppliers on a permanent basis. however, it is one advantage that mnes have over companies operating in a single market. explaining what type of redundancy is useful to build resilience in gvcs and how this redundancy is related to international production could improve the terms of the debate. the discussion on inventories is related to 'just-in-time' strategies that have contributed to reducing the size of buffer stocks. just-in-time (jit) inventory management was introduced in the 1970s by toyota and was quickly adopted by many manufacturing companies in the world as an effective strategy to reduce costs, shorten lead times, and improve the quality of production (keller & kazazi, 1993). jit is part of lean manufacturing strategies aimed at reducing all costs and waste in the production process (bhamu & singh sangwan, 2014). now the idea would be to switch to 'just-in-case' management, where a loss in economic efficiency would be traded off for increased security in the supply of inputs.
but what exactly is the underlying management strategy? 'just-in-case' is an expression used in the literature either to describe inventory management as it was practiced before jit, or, in the risk management literature, to discuss whether higher inventories are needed (srinidhi & tayi, 2004). but it is not a specific type of management model that could be mainstreamed to make supply chains more resilient, unless the idea is to come back to the management of inventories as it was before the ict revolution and modern logistics. 'just in case' is a very vague proposal that maybe only suggests adjusting jit to better take into account risk management. however, this is already the case, as risk management strategies and jit generally go together. firms that invest in reducing inventories and making their production process as efficient as possible all along the value chain are also the ones investing in the monitoring and management of risks. this can be illustrated with cisco's supply chain risk management, which is often mentioned as an example of best practices. cisco aims at identifying the right level of inventories to achieve both resilience and efficiency (miklovic & witty, 2010). in may, in the middle of the covid-19 crisis, 3m, one of the main manufacturers of face masks, announced that it plans to reduce the cost of its inventories by usd 500 million in the coming years in order to operate its supply chains more efficiently (supply management, 2020). this highlights that companies producing essential goods are also looking for lean inventories and do not see this as contradicting their risk management and business continuity objectives. a point made by pisch (2020) is also that jit companies have lower costs for inventories. therefore, if there is a need to increase inventories to reduce risks, they are also better placed to do it in a more competitive way.
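the inventory trade-off weighed in such programs can be sketched with the textbook safety-stock formula, ss = z * sigma_daily * sqrt(lead_time); the demand and cost figures below are invented and are not taken from cisco or 3m:

```python
import math

def safety_stock(z, sigma_daily, lead_time_days):
    """Classic safety-stock approximation: z * sigma * sqrt(L)."""
    return z * sigma_daily * math.sqrt(lead_time_days)

sigma_daily = 40.0           # std dev of daily demand, in units (invented)
lead_time_days = 14          # supplier lead time (invented)
holding_cost_per_unit = 2.5  # annual holding cost per unit (invented)

# z-scores are standard normal quantiles for the target service level.
for label, z in [("95% service (lean/jit-style)", 1.645),
                 ("99.9% service (just-in-case)", 3.090)]:
    ss = safety_stock(z, sigma_daily, lead_time_days)
    print(f"{label}: {ss:.0f} units, ~${ss * holding_cost_per_unit:,.0f}/year to hold")
```

pushing the service level from 95% to 99.9% nearly doubles the buffer, which is the cost side of 'just-in-case' that the lean firms described above are better placed to absorb.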
a related consideration is that when there is a fall in demand, as in the current covid-19 crisis, companies with low inventories have smaller losses than those with high inventories. the paradox is that if 'just-in-case' were currently the predominant strategy of firms, more of them could go bankrupt during the covid-19 crisis. finally, it should be noted that the manufacturing paradigm has also recently shifted from 'lean manufacturing' to 'agile manufacturing' (potdar, routroy, & behera, 2017). while some firms may still follow jit and lean production, new business models put more emphasis on the capacity of firms to adapt to change and to produce in uncertain environments. some ib scholars suggest that the international business environment is now characterized by volatility, uncertainty, complexity, and ambiguity (vuca) and that, in this new vuca world, firms need to develop dynamic capabilities to remain competitive (bennett & lemoine, 2014; teece, 2014; van tulder, verbeke, & jankowska, 2019). what authors reacting to covid-19 are calling for is already under way (and in a form where efficiency does not have to be sacrificed to achieve resilience). further insights on the new paradigms of firms and what they do and intend to do as a consequence of the covid-19 pandemic could also help to bring the policy debate closer to the decisions of firms. the idea that domestic production is more resilient than international production is also not something found in the risk management literature. the main reason for this is that there are many risks in the domestic economy as well, and this literature does not try to identify where the safest place to produce is, but what strategies companies can put in place to mitigate risks. for example, companies producing in japan will always face a high risk of earthquakes. some countries may be less exposed to natural disasters but will face other risks such as exchange rate volatility, strikes, social unrest, or a pandemic.
there are also risks that are not related to the location of production, such as bankruptcy. if a supplier goes bankrupt (which is a high risk during the covid-19 recession), it does not matter whether it produces in the domestic economy or not. inputs will no longer be supplied. another type of risk that has recently gained attention is cyber risk (ghadge, weiß, caldwell, & wilding, 2019). as supply chains increasingly rely on information and communication technologies, they are more vulnerable than before to cyber-attacks and it failures, a risk that is not lower when production is domestic (and potentially higher if domestic firms all use the same it infrastructure). risk management is about looking at the whole portfolio of risks, which can lead to different decisions in terms of the location of production. domestic production might indeed be the strategy in some cases, but it would be the result of a decision integrating a variety of risks, and risk is only one determinant of the location of production among others. until recently, the concept of reshoring could not be found in the business literature and was mentioned more as a hypothetical case when discussing offshoring. some anecdotal evidence on companies actually reshoring their activities has prompted new research. however, the literature does not regard supply chain risk as one of the main determinants of reshoring (wiesmann, snoei, hilletofth, & eriksson, 2017). minimizing disruptions in supply chains and reducing delivery times might be a driver, but studies generally emphasize the limits of reshoring (bailey & de propris, 2014). some further insights that could be brought from the ib literature are what firms have to lose (or to change) when being disconnected from the most efficient suppliers or from international knowledge networks. there are advantages in producing locally, such as not bearing all the additional costs related to cross-border transactions and managing activities abroad.
the question is what kind of location advantages from offshored places are traded for these domestic location advantages. more generally, there are important insights from the ib literature that can help policymakers to develop a better understanding of the relationship between the organization of global supply chains and risk. risk is part of the location advantages in dunning's eclectic theory (dunning, 1980). the policy or institutional risk in the host country of mnes has always been regarded as an important determinant of fdi, with heterogeneous responses (buckley, chen, clegg, & voss, 2018). volatility in real exchange rates is another risk specific to international production that can lead firms to look for options and the flexibility of switching production across countries (kogut & kulatilaka, 1994). real options theory provides a theoretical basis for this type of diversification strategy and can be applied to a variety of other risks beyond exchange rates (chi, trigeorgis, & tsekrekos, 2019). internalization theory can also potentially address the issue of risks in supply chains, with an answer not limited to the geographical location of activities but also covering the boundaries of firms and what they decide to outsource or not (strange, 2020). the geography of mnes is the result of complex strategic decisions (mudambi et al., 2018). a new imperative related to the mitigation of supply chain risks can affect these decisions and change the geography and boundaries of firms, but the idea of reshoring seems simplistic as compared to the sophisticated location decisions described in the ib literature and the constraints faced by firms to remain competitive.

three questions that can help to know what to do

now that it seems accepted that covid-19 has revealed the vulnerabilities of international supply chains, governments are under pressure to show that they are taking some action to fix gvcs.
this is why it is dangerous to leave an analytical vacuum where the solutions proposed would only be the ones analyzed in the previous section. reshaping the debate and introducing a different set of answers derived from the business literature and supported by empirical evidence requires addressing three questions. one is a matter of communication, and the two others are more fundamental. while there is some convergence in the risk management literature on what can improve the resilience of supply chains, the first question is how to communicate the results of this research to policymakers. one issue is the diversity in the concepts used to describe what firms need to achieve. under the list of 'capabilities' to be developed are the concepts of flexibility, agility, visibility, adaptability, and collaboration (kamalahmadi & parast, 2016) . each concept casts light on a different aspect of what makes firms able to quickly react to a crisis and mitigate its impact, but there is also some overlap between them. as concepts, they also carry some level of abstraction and both businesses and policymakers might regard them as a bit disconnected from their daily work. it would be useful to simplify the message and to synthesize these different aspects. one reason why the concept of 'resilience' is successful (while not always properly used) is that it sounds like a reasonable and simple objective. as the pendulum seems to be right now more on the side of limiting international supply, increasing inventories and diversifying suppliers, there is a need to move it more in the direction of flexibility and agility where firms do not have to become less efficient to mitigate risks. the role of collaboration (scholten & schilder, 2015) , which is related to visibility, might also be interesting to emphasize from a policy perspective (having in mind governments as potential actors in this collaboration). 
the second question is whether solutions are at the firm level or the gvc level. the risk management literature focuses on making firms resilient, and this can be measured by the time they take to recover from a disruption. according to martins de sá et al. (2020) , resilience in the value chain does not depend on the organizational features of the supply chain but rather on efficient risk management strategies in firms that are able to reconfigure the value chain to mitigate the disruptions. here it might be useful to refer to gvc analysis and to the different models of governance of supply chains (gereffi, humphrey, & sturgeon, 2005) . if a lead firm controls the whole value chain (captive value chain or vertically integrated value chain), ensuring that the lead firm has the capabilities needed for effective risk mitigation might be enough to create resilience all along the supply chain. it might also be the case in some relational value chains where the same is achieved through collaboration. in the case of market linkages and modular value chains, one may have to distinguish the resilience of the supply chain from the resilience of specific firms. moving from policy proposals focusing mainly on the design of supply chains (e.g., to make them shorter, more domestic, and more diversified) to proposals enhancing the capabilities of firms (e.g., to help them to develop flexibility and agility, as well as visibility in their supply chains) requires clarifying the intersection between firms and gvcs (pananond, gereffi and pedersen, 2020) . on the one hand, the concept of resilience at the gvc level (the way policymakers understand it, i.e., value chains for a large range of final producers belonging to the same broad industry, such as medical supplies or food) is more difficult to define, as different firms will recover at a different pace after a disruption (and not all of them might be affected in the first place). 
in theory, the production of final goods can only resume when production all along the value chain starts again, but this can still leave some final producers and input suppliers at different stages of recovery when considering the gvcs of a whole industry. on the other hand, the type of resilience that is discussed by policymakers (which is more about reducing and diversifying risks than about shortening the time to recover from a disruption) might be easier to achieve at the gvc level. for example, reducing the dependence on inputs from a specific partner country can be the result of different firms sourcing from different countries without asking each firm to diversify its suppliers (ferrarini & hummels, 2014). the third question is how governments can influence the sourcing decisions or capabilities of firms, as well as the organization of gvcs. this question is the same whether one is promoting reshoring or agility. it is a traditional question in the literature in relation to the design of gvc-oriented industrial policy (gereffi & sturgeon, 2013), the public/private governance of value chains (bair, 2017), the role of the state in gvcs (horner & alford, 2019), and the impact of investment and trade regimes on the decisions of mnes (buckley, 2018; rugman & verbeke, 2017). it may receive different answers depending on whether governments want to encourage or constrain firms in adopting specific strategies. leaving aside the option where governments themselves become actors in gvcs (through government ownership), constraints on firms (e.g., tariffs, taxes) or incentives for them (e.g., subsidies, tax breaks) inevitably lead to economic distortions. it would be a paradox to resort to such mechanisms when the origin of current trade tensions and policy uncertainties for investment lies in market-distorting government support, and when several countries highlight the need for levelling the playing field (oecd, 2019).
some governments might still follow this path, but a more coherent policy framework, one that does not build resilience by increasing policy risks and costs for companies, would have to rely on a two-pronged approach. first, as highlighted before, there is a series of policies, such as trade facilitation or the regulation of transport and infrastructure services, where the government is directly in charge of setting the rules and can create the conditions for firms to mitigate risks and increase their agility (e.g., by eliminating red tape or creating emergency certification procedures). the reduction of policy uncertainties, including at the global level, is also in the hands of governments, although it depends more on the success of international cooperation (which is not guaranteed in the current geopolitical environment). second, some dialogue with the private sector, and possibly the organization of public-private platforms at the level of gvcs (hoekman, 2014), can allow governments to encourage firms to put more emphasis on resilience without introducing financial constraints or incentives. different forms of encouragement can be provided, such as the organization of 'stress tests' that would put companies in the position of proving that they have taken the necessary steps to be resilient, particularly in the context of the production of essential goods (simchi-levi & simchi-levi, 2020). such stress tests could also provide information that helps governments organize their own risk management strategies (e.g., the right level of national stockpiling to meet a surge in demand beyond the capacity of firms to ramp up their production) and improve their policies (e.g., information on policy-related costs encountered by firms in their operations). a gvc-level dialogue would also allow firms to cooperate among themselves to be better prepared for risks.
the basis for developing and promoting such policy proposals would be a new narrative with the following elements:

(1) covid-19 has confirmed the interdependencies between economies. there are risks inherent to these interdependencies, but they are also a source of growth and development. at this stage, there is no reason to believe that reducing interdependencies would reduce the exposure of economies to risks. on the contrary, simulations suggest that the income of countries would be not only lower but also more volatile.

(2) there are concrete issues that can be addressed by policymakers at the gvc level for economies to be better prepared for future risks. these issues do not require a new paradigm for gvcs, but they may involve a restructuring driven by companies and tailored to the specific conditions in which they operate. three of them were discussed in the first section of the paper: international supply chain risks and contagion effects, surges in demand for essential goods, and disruptions in trade and transport networks. in these three areas, a gvc perspective makes sense, and a combination of actions by firms and governments can mitigate the impact of the next crises.

(3) there is no trade-off between efficiency and lower risk. there are trade-offs between different types of risks, and firms have to balance the costs and benefits of risk management. however, the most efficient firms are also the ones that are best at mitigating risks (sheffi, 2015). promoting agility and flexibility is an agenda that can serve both the objective of resilience and economic recovery after covid-19.

(4) the location of production is a complex issue in which risk is one determinant among others. there is no rationale for prescribing a specific organization of gvcs in the name of resilience, but one type of risk that governments can control is policy risk. they can diminish uncertainties within their domestic economy and rely on international cooperation to reduce international policy risks and trade tensions.

conclusion

in its 2020 world investment report, unctad (2020) is already predicting that reshoring, diversification, and regionalization will drive the restructuring of gvcs in the coming years. this might be premature, as these strategies have been proposed in columns and opinion pieces and are not grounded in business experience, research, or analytical work. calls for more resilient supply chains have been heard before: after 2001, when the emphasis was on risks related to terrorism, and after 2011, when the emphasis was on natural disasters. businesses that are nowadays described as focused too much on efficiency and insufficiently prepared for the risks of hyper-globalization have already been through many crises that prompted them to act. still, covid-19 is an unprecedented crisis, and its global scale might lead more companies to rethink their strategies and put more emphasis on risk management. companies that have been through this process in the past have not resorted to reshoring or regionalization and have not significantly diversified their supply chains. what could be different this time is that firms also have to adjust to deep changes in their environment, such as the digital transformation, climate change, and rising protectionism and trade tensions. we will therefore see some structural shifts in the organization of global supply chains, and covid-19 might be an accelerator of these shifts in some cases. it is too early, however, to predict what solutions will lead businesses to thrive in this uncertain environment. still, the current debate on the policy implications of covid-19 for global supply chains is useful. first, this debate can prevent governments from making the wrong policy choices in the future.
that is, there is an opportunity for researchers to convey to policymakers relevant knowledge that will improve their policies or prevent them from making mistakes. second, the overlap between the debate on the resilience of supply chains and the debate on protectionism and economic nationalism can also offer new ways of addressing concerns about globalization. while building more resilient gvcs could be used as a pretext for protectionist policies, it is a double-edged sword. demonstrating that domestic value chains increase certain types of risks, or that international sourcing can improve access to essential goods, would not only reduce the appeal of protectionist policies but do so on different grounds than just pointing at a welfare loss. third, new research on global supply chains and risk mitigation during covid-19 could provide novel insights, as well as new policy recommendations. different questions could also be examined, unconstrained by the initial emphasis on reshoring and redundancy. for example, the reshoring debate focuses on resilient value chains for developed countries and does not take into account developing and emerging economies. these countries would not only lose some economic activity if reshoring became the new normal, but would also face more difficult access to essential goods when those goods are produced by mnes from developed countries.

acknowledgments

the author is writing in a strictly personal capacity. the views expressed do not reflect those of the oecd secretariat or the member countries of the oecd. the author is grateful to the editor, ari van assche, and to two anonymous referees for their many helpful comments and suggestions.

notes

1. in the risk management literature, resilience is defined as "the ability of a system to return to its original state or move to a new more desirable state after being disturbed" (christopher & peck, 2004). in the supply chain, resilience is about reducing the time it takes for companies to resume normal production once a disruption has occurred. it is different from 'robustness', which is the ability of supply chains to maintain their function despite internal or external disruptions (brandon-jones, squire, autry, & petersen, 2014). authors who call for more resilient supply chains in the context of covid-19 are often mistaking resilience for robustness. they focus on describing the disruptions but do not report how quickly international supply chains have generally adjusted, which is the sign of their resilience. on the policy implications of robustness versus resilience in gvcs, see miroudot (2020).

2. the debate on china overlaps with another type of risk that is related not to covid-19 but to trade tensions between the united states and china. there is evidence that an increasing number of companies are moving their production out of china to avoid trade barriers imposed on chinese exports or potential political pressures (baker mckenzie, 2020). the fact that reshoring and domestic production are suggested as ways to build more resilient supply chains is likely linked to economic nationalism and anti-globalization sentiment. the issue of concentration of production, which is relevant for supply chain disruptions, is also generally analyzed only in relation to china.

3. see timmer et al. (2016) for the calculation of the import intensity of production. the ratio indicates, for each dollar of output, the value of all intermediate inputs traded upstream in the value chain. it was calculated for 2015 (the latest year available) with data from the oecd trade in value added (tiva) database. data for the eu are based on the euro area only.

4. business surveys generally indicate high rates of disruptions but give no indication of the consequences of these disruptions (e.g., whether firms have stopped producing or not). see, for example, the surveys conducted by the institute for supply management (www.instituteforsupplymanagement.org), the data collected on firmlevelrisk.com, and mckinsey global institute (2020).

5. the expression 'supplier diversification' is only partially a synonym for redundancy. redundancy suggests that there are (at least) two suppliers for the same input (in different locations). supplier diversification can also be understood as diversifying the sources of supply by working with different suppliers in different countries, each of them providing different inputs (i.e., maintaining single sourcing). it can spread the supply chain risk but does not offer the same level of business continuity when one of these suppliers fails to provide the inputs.

6. the argument is also about shorter supply chains, which can be understood as a regionalization of production rather than relying only on domestic suppliers. the relationship between distance and risk is linked to regional integration and economic cooperation among countries, which can reduce policy and institutional risk. cultural factors could also play a role, with lower transaction costs and easier cooperation between firms when there is some cultural proximity. see, e.g., shenkar (2012) for a discussion of cultural distance.
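note 3 above describes the import intensity of production: the value of all intermediate inputs traded upstream per dollar of output. a minimal sketch of how such a ratio can be computed from an input-output table is shown below. the 2-sector coefficients are hypothetical; the actual figures referenced in the note come from the oecd tiva database (timmer et al., 2016).

```python
# illustrative computation of the "import intensity of production" ratio:
# imported intermediates used directly and indirectly per dollar of output.
# all numbers below are hypothetical.

# domestic intermediate input coefficients a_d[i][j]: dollars of domestic
# input from sector i used per dollar of output of sector j
A_d = [[0.10, 0.05],
       [0.20, 0.15]]
# imported intermediates used directly per dollar of output of each sector
m = [0.08, 0.12]

# leontief inverse (i - a_d)^(-1) for the 2x2 case, capturing all upstream
# rounds of domestic production
a, b = 1 - A_d[0][0], -A_d[0][1]
c, d = -A_d[1][0], 1 - A_d[1][1]
det = a * d - b * c
L = [[d / det, -b / det],
     [-c / det, a / det]]

# import intensity of sector j: imports embodied in one dollar of its output
import_intensity = [sum(m[i] * L[i][j] for i in range(2)) for j in range(2)]
```

because upstream domestic suppliers themselves use imported inputs, the ratio for each sector exceeds its direct import share m.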
references

- supply chains reimagined: recovery and renewal in asia pacific and beyond
- manufacturing reshoring and its limits: the uk automotive case
- contextualising compliance: hybrid governance in global value chains
- supply chain contagion waves: thinking ahead on manufacturing 'contagion and reinfection' from the covid concussion
- what a difference a word makes: understanding threats to performance in a vuca world
- the impact of covid-19 international travel restrictions on services-trade costs: some illustrative scenarios
- lean manufacturing: literature review and research issues
- internet of things - the future of managing supply chain risks (supply chain management)
- global supply chains in the pandemic
- a contingent resource-based perspective of supply chain resilience and robustness
- blindsided on the supply side
- towards a theoretically-based global foreign direct investment policy regime
- risk propensity in the foreign direct investment location decision of emerging multinationals
- global value chains in a post-crisis world: a development perspective
- real options theory in international business
- building the resilient supply chain
- supply chain contagion and the role of industrial policy
- toward an eclectic theory of international production: some empirical tests
- chinese whispers: covid-19, supply chains in essential goods, and public policy
- asia and global production networks: implications for trade, incomes and economic vulnerability
- covid-19: expanding access to essential supplies in a value chain world
- inside samsung's fight to keep its global supply chain running
- what does the covid-19 pandemic teach us about global value chains? the case of medical supplies
- the governance of global value chains
- global value chain-oriented industrial policy: the role of emerging economies
- the coronavirus will reveal hidden vulnerabilities in complex global supply chains
- managing cyber risk in supply chains: a review and research agenda
- supply chains, mega-regionals and multilateralism: a roadmap for the wto
- the roles of the state in global value chains
- recovering from supply interruptions: the role of sourcing strategies
- global supply chains will not be the same in the post-covid-19 world
- a review of the literature on the principles of enterprise and supply chain resilience: major findings and directions for future research
- 'just-in-time' manufacturing systems: a literature review
- how globalization became a thing that goes bump in the night
- operating flexibility, global manufacturing, and the option value of a multinational network
- here's how global supply chains will change after covid-19
- coronavirus is proving we need more resilient supply chains
- international connectedness and local disconnectedness: mne strategy, city-regions and disruption
- global supply chain risk management
- supply chain resilience: the whole is not the sum of the parts
- risk, resilience, and rebalancing in global value chains
- case study: cisco addresses supply chain risk management
- resilience versus robustness in global value chains: some policy implications
- zoom in, zoom out: geographic scale and multinational activity
- the modern supply chain is snapping (the atlantic)
- how to pandemic-proof globalization: redundancy, not re-shoring, is the key to supply chain security
- oecd. 2020a. oecd economic outlook 2020
- oecd. 2020b. shocks, risks and global value chains: insights from the oecd metro model
- the face mask global value chain in the covid-19 outbreak: evidence and policy lessons
- trade facilitation and the covid-19 pandemic
- levelling the playing field
- an integrative typology of global strategy and global value chains: the management and organization of cross-border activities
- the evolution of resilience in supply chain management: a retrospective on ensuring supply chain resilience
- ensuring supply chain resilience: development of a conceptual framework
- managing global production: theory and evidence from just-in-time supply chains
- agile manufacturing: a systematic review of literature and implications for future research
- global corporate strategy and trade policy
- the resilient enterprise: overcoming vulnerability for competitive advantage
- building a resilient supply chain
- the power of resilience: how the best companies manage the unexpected
- cultural distance revisited: towards a more rigorous conceptualization and measurement of cultural differences
- is it time to rethink globalized supply chains? (sloan management review)
- collaboration in supply chain resilience
- we need a stress test for critical supply chains
- just in time or just in case? an explanatory model with informational and incentive effects
- the 2020 covid-19 pandemic and global value chains
- 3m cuts inventory by $370m
- a dynamic capabilities-based entrepreneurial theory of the multinational enterprise
- an anatomy of the global trade slowdown based on the wiod 2016 release
- world investment report, 2020 - international production beyond the pandemic
- from the editor: steering a policy turn in international business - opportunities and challenges
- from the editor: covid-19 and international business policy
- international business in a vuca world: the changing role of states and firms
- drivers and barriers to reshoring: a literature review on offshoring in reverse
- world development report 2020 - global value chains: trading for development
- trade in services in the context of covid-19

about the author

sébastien miroudot is senior trade policy analyst in the trade in services division of the oecd trade and agriculture directorate. he was previously a research assistant at groupe d'economie mondiale and taught in the master's degree programme at sciencespo, paris. during 2016-2017, he was visiting professor at the graduate school of international studies of seoul national university. at the oecd, his current work is on the measurement of trade in value-added terms, the relationship between trade and investment, and the trade policy implications of global value chains. he holds a phd in international economics from sciencespo.

key: cord-001781-afg1nmib title: evidence for the convergence model: the emergence of highly pathogenic avian influenza (h5n1) in viet nam authors: saksena, sumeet; fox, jefferson; epprecht, michael; tran, chinh c.; nong, duong h.; spencer, james h.; nguyen, lam; finucane, melissa l.; tran, vien d.; wilcox, bruce a. date: 2015-09-23 journal: plos one doi: 10.1371/journal.pone.0138138 doc_id: 1781 cord_uid: afg1nmib

abstract: building on a series of ground-breaking reviews that first defined and drew attention to emerging infectious diseases (eid), the 'convergence model' was proposed to explain the multifactorial causality of disease emergence.
the model broadly hypothesizes that disease emergence is driven by the coincidence of genetic, physical environmental, ecological, and social factors. we developed and tested a model of the emergence of highly pathogenic avian influenza (hpai) h5n1 based on suspected convergence factors that are mainly associated with land-use change. building on previous geospatial statistical studies that identified natural and human risk factors associated with urbanization, we added new factors to test whether causal mechanisms and pathogenic landscapes could be more specifically identified. our findings suggest that urbanization spatially combines risk factors to produce particular types of peri-urban landscapes with significantly higher hpai h5n1 emergence risk. the work highlights that peri-urban areas of viet nam have higher levels of chicken density, duck and geese flock-size diversity, and fraction of land under rice or aquaculture than rural and urban areas. we also found that land-use diversity, a surrogate measure for potential mixing of host populations and other factors that likely influence viral transmission, significantly improves the model's predictability. similarly, landscapes where intensive and extensive forms of poultry production overlap were found to be at greater risk. these results support the convergence hypothesis in general and demonstrate the potential to improve eid prevention and control by combining geospatial monitoring of these factors with pathogen surveillance programs.

two decades after the institute of medicine's seminal report [1] recognized novel and reemerging diseases as a new category of microbial threats, the perpetual and unexpected nature of the emergence of infectious diseases remains a challenge in spite of significant clinical and biomedical research advances [2]. highly pathogenic avian influenza (hpai) (subtype h5n1) is the most significant newly emerging pandemic disease since hiv/aids.
its eruption in southeast asia in 2003-04 and subsequent spread globally to more than 60 countries fits the complex-systems definition of "surprise" [3]. in the same year that the iom published its final report on microbial threats, which had highlighted h5n1's successful containment in hong kong in 1997 [4], massive outbreaks occurred in southeast asia, where the disease remains endemic, along with egypt's nile delta. since 2003, hpai h5n1 has killed millions of poultry in countries throughout asia, europe, and africa, and 402 humans had died from it in sixteen countries according to who data as of january 2015. the threat of a pandemic resulting in millions of human cases worldwide remains a possibility [5]. lederberg et al. [1] first pointed to the multiplicity of factors driving disease emergence, which were later elaborated and described in terms of 'the convergence model' [6]. the model proposes that emergence events are precipitated by the intensifying of biological, environmental, ecological, and socioeconomic drivers. microbial "adaptation and change," along with "changing ecosystems" and "economic development and land use," form major themes. joshua lederberg, the major intellectual force behind the studies, summed up: "ecological instabilities arise from the ways we alter the physical and biological environment, the microbial and animal tenants (humans included) of these environments, and our interactions (including hygienic and therapeutic interventions) with the parasites" [6]. combining such disparate factors and associated concepts from biomedicine, ecology, and social sciences in a single framework remains elusive. one approach that has been suggested is to employ social-ecological systems theory, which attempts to capture the behavior of so-called 'coupled natural-human systems', including the inevitable unexpected appearance of new diseases, themselves one of the "emerging properties" of complex adaptive systems (cas) [7, 8].
the convergence model can be so adapted by incorporating the dynamics of urban, agricultural, and natural ecosystem transformations proposed within this framework. these multifaceted interactions, including feedbacks that affect ecological communities, hosts, and pathogen populations, are the proximate drivers of disease emergence. the initial hpai h5n1 outbreaks in viet nam represent an ideal opportunity to adapt and test a cas-convergence model. emergence risk should be highest in the most rapidly transforming urban areas: peri-urban zones where mixes of urban-rural and modern-traditional land uses and poultry husbandry coincide most intensely. specifically, we hypothesized a positive association between the presence of hpai outbreaks in poultry at the commune level and: 1) peri-urban areas, as defined by saksena et al. [9], 2) land-use diversity, and 3) co-location of intensive and extensive systems of poultry. we used the presence or absence of hpai h5n1 outbreaks in poultry at the commune level as the dependent variable. viet nam experienced its first hpai h5n1 outbreak in late 2003; since then, five waves and sporadic outbreaks have been recorded [10, 11]. we chose to study the first wave (wave 1), which ended in february 2004, and the second wave (wave 2), which occurred between december 2004 and april 2005. we used data from the viet nam 2006 agricultural census to develop an urbanicity classification that used data collected at a single point in time (2006) but across space (10,820 communes) to infer processes of change (urbanization, land-use diversification, and poultry intensification) [9]. the 58 provinces in viet nam (not counting the 5 urban provinces that are governed centrally) are divided into rural districts, provincial towns, and provincial cities. rural districts are further divided into communes (rural areas) and towns, and provincial towns and cities are divided into wards (urban subdistricts) and communes.
a commune in viet nam is thus the third-level administrative subdivision, consisting of villages/hamlets. for the purpose of simplicity we will henceforth use the term "commune" to refer to the smallest administrative unit, whether it is a commune, town, or ward. we included risk factors documented in previous work. we also aimed to understand the differences, if any, in risk dynamics at different scales, comparing risks at the national scale to those in two sub-national agro-ecological zones. for this purpose we chose to study the red river and mekong river deltas, well-known hot spots of the disease. hence we conducted two sets of analyses (waves 1 and 2) for three places (nation, red river delta, and mekong delta), producing a total of six wave-place analyses. data on outbreaks were obtained from the publicly available database of viet nam's department of animal health. given the highly complex dynamics of the epidemics, and in keeping with recent methodological trends, we used multiple modeling approaches, parametric and non-parametric, with a focus on spatial analysis. we used both 'place'-oriented models, which can take into account variations in factors such as policies and administration, and 'space'-oriented models, which recognize the importance of physical proximity in natural phenomena [12]. very few empirical studies have attempted to determine whether urbanization is related to eid outbreaks or whether urbanization is associated primarily with other factors related to eid outbreaks. one immediate problem researchers face is defining what is rural, urban, and transitional (i.e., peri-urban). some studies have used official administrative definitions of urban and rural areas, but this approach is limited in its bluntness [13].
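the excerpt names parametric and non-parametric approaches without specifying them. as one illustrative baseline for a binary presence/absence outcome of this kind, a logistic regression can be fit on commune-level covariates; the sketch below is an assumption, not the study's actual model, and the feature values are hypothetical, not the study's data.

```python
import math

# minimal logistic-regression sketch for a commune-level presence/absence
# outcome, fit by stochastic gradient descent. illustrative only; the paper
# used several parametric and non-parametric (spatial) models.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """fit weights w and intercept b by minimizing log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log loss w.r.t. the linear predictor
            for j in range(len(w)):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

# hypothetical commune records: [scaled chicken density, land-use diversity]
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.7], [0.9, 0.8], [0.1, 0.3], [0.7, 0.9]]
y = [0, 0, 1, 1, 0, 1]  # 1 = hpai outbreak recorded in the commune

w, b = fit_logistic(X, y)
# predicted outbreak probability for a new high-density, high-diversity commune
risk = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.85, 0.75])) + b)
```

in practice a spatial model would also account for autocorrelation between neighboring communes, which plain logistic regression ignores.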
other studies prioritized human population density as a satisfactory surrogate [11, 14-20], but this approach ignores the important fact that density is not a risk factor if it is accompanied by sufficient infrastructure to handle the population. spencer [21] examined urbanization as a non-linear characteristic, using household-level variables such as water and sanitation services. he found evidence that increased diversity in water supply sources and sanitation infrastructure was associated with higher incidences of hpai. these studies employed a limited definition of urbanization that lacked a well-defined characterization of peri-urbanization. still other studies have mapped the relative urban nature of a place, a broad concept that is often referred to as 'urbanicity' [22-25]. while these studies show differences in the rural/urban nature of communities across space and time, they have been limited to small- to medium-scale observational studies, and they have failed to distinguish between different levels of "ruralness". perhaps the best-known model of peri-urbanization is mcgee's concept of desakota (indonesian for "village-town") [26]. mcgee identified six characteristics of desakota regions: 1) a large population of smallholder cultivators; 2) an increase in non-agricultural activities; 3) extreme fluidity and mobility of population; 4) a mixture of land uses: agriculture, cottage industries, suburban development; 5) increased participation of the female labor force; and 6) "grey zones", where informal and illegal activities cluster [26]. saksena et al. [9] built on mcgee's desakota concepts and data from the 2006 viet nam agricultural census to establish an urbanicity classification. that study identified and mapped the 10,820 communes, the smallest administrative unit for which data are collected, as being rural, peri-urban, urban, or urban core.
this project used the saksena classification to assess associations between urbanicity classes, other risks factors, and hpai outbreaks. researchers have estimated that almost 75% of zoonotic diseases are associated with landcover and land-use changes (lcluc) [27, 28] . lcluc such as peri-urbanization and agricultural diversification frequently result in more diverse and fragmented landscapes (number of land covers or land uses per unit of land). the importance of landscape pattern, including diversity and associated processes, which equate to host species' habitat size and distribution, and thus pathogen transmission dynamics is axiomatic though the specific mechanisms depend on the disease [29, 30] . landscape fragmentation produces ecotones, defined as abrupt edges or transitions zones between different ecological systems, thought to facilitate disease emergence by increasing the intensity and frequency of contact between host species [31] furthermore, fragmentation of natural habitat tends to interrupt and degrade natural processes, including interspecies interactions that regulate densities of otherwise opportunistic species that may serve as competent hosts [32] , although it is not clear if reduced species diversity necessarily increases pathogen transmission [33] . rarely has research connected land-use diversification to final health endpoints in humans or livestock; this study attempts to link land-use diversity with hpai h5n1 outbreaks. human populations in the rapidly urbanizing cities of the developing world require access to vegetables, fruits, meat, etc. typically produced elsewhere. as theorized by von thünen in 1826 [34] , much of this demand is met by farms near cities [35] , many in areas undergoing processes of peri-urbanization [26] . due to the globalization of poultry trade, large-scale chicken farms raising thousands of birds have expanded rapidly in southeast asia and compete with existing small backyard farmers [36] . 
large, enterprise-scale (15,000-100,000 birds) operations are still rare in viet nam (only 33 communes have such a facility). on the other hand, domestic and multinational companies frequently contract farmers to raise between 2,000 and 15,000 birds. recent studies have examined the relative roles of extensive (backyard) systems and intensive systems [15, 17-19, 37]. in much of asia there is often a mix of commercial and backyard farming at any one location [36]. experts have suggested that, from a biosecurity perspective, the co-location of extensive and intensive systems is a potential risk factor [38]. intensive systems allow for virus evolution (e.g., from low pathogenic avian influenza to hpai) and transformation, while extensive systems allow for environmental persistence and circulation [39]. previous studies of chicken populations as a risk factor have distinguished between production systems: native chickens, backyard chickens, flock density, commercial chickens, broiler and layer density, etc. [15, 17-19, 37]. in isolation, however, none of these number- and/or density-based poultry metrics adequately measures the extent of co-location of intensive and extensive systems in any given place. intensive and extensive systems in viet nam have their own fairly well-defined flock sizes. a diversity index of the relative numbers of intensive and extensive systems of poultry raising can better estimate the effect of such co-location; this study attempts to link a livestock diversity index with the presence or absence of hpai h5n1 outbreaks at the commune level. this study investigated, for the 10,820 communes of viet nam, a wide suite of socio-economic, agricultural, climatic, and ecological variables relevant to poultry management and the transmission and persistence of the hpai virus. many of these variables were identified based on earlier studies of hpai (as reviewed in gilbert and pfeiffer [40]).
three novel variables were included based on hypotheses generated by this project. all variables were measured or aggregated to the commune level. the novel variables were:
• degree of urbanization: we used the urbanicity classification developed by saksena et al. [9] to define the urban character of each commune. the classification framework is based on four characteristics: 1) percentage of households whose main income is from agriculture, aquaculture and forestry, 2) percentage of households with modern forms of toilets, 3) percentage of land under agriculture, aquaculture and forestry, and 4) the normalized difference vegetation index (ndvi). the three-way classification enabled testing for non-linear and non-monotonic responses.
• land-use diversity: we measured land-use diversity using the gini-simpson diversity index [41], given by 1 − λ, where λ is the probability that two entities taken at random from the dataset of interest represent the same type. with only one class (complete homogeneity), the gini-simpson index equals zero. such diversity indices have been used to measure land-use diversity [42]. we used the following five land-use classes, for which data were collected in the 2006 agricultural census: annual crops, perennial crops, forests, aquaculture, and built-up land (including miscellaneous uses). the area under the last class was calculated as the difference between the total area and the sum of the first four classes.
the following variables are listed according to their role in disease introduction, transmission and persistence, though some of these factors may have multiple roles.
• human population related transmission: human population density [11, 14-16, 18, 19, 44, 45].
• poultry trade and market: towns and cities were assumed to be active trading places [10, 18, 37, 44, 46], so the distance to the nearest town/city was used as an indicator of poultry trade.
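the gini-simpson computation described above is straightforward to reproduce. the sketch below assumes raw class areas are passed in and normalizes them to proportions; the function name and example values are illustrative, not the study's data.

```python
def gini_simpson(areas):
    """Gini-Simpson diversity index: 1 - sum(p_i^2), where p_i is the
    proportional area of land-use class i. Raw areas are normalized here,
    so absolute commune areas can be passed directly."""
    total = sum(areas)
    if total == 0:
        return 0.0
    probs = [a / total for a in areas]
    return 1.0 - sum(p * p for p in probs)

# complete homogeneity (one class) gives zero diversity
assert gini_simpson([100, 0, 0, 0, 0]) == 0.0

# five equal classes give the maximum for five classes: 1 - 5 * 0.2^2 = 0.8
print(round(gini_simpson([20, 20, 20, 20, 20]), 3))  # 0.8
```

the same index applies to the flock-size diversity variables mentioned later, with flock-size categories replacing land-use classes.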
trade is facilitated by access to transportation infrastructure [37, 47, 48], so the distances to the nearest a) national highway and b) provincial highway were used as indicators of transportation infrastructure.
• disease introduction and amplification: chicken densities were calculated based on commune area [15, 19, 37, 49].
• intermediate hosts: duck and goose densities were calculated using total commune area [11, 19, 49]. as previous studies have shown a link between scavenging in rice fields by ducks and outbreaks, we also calculated duck density using only the area under rice.
• agro-ecological and environmental risk factors: previous studies have shown that the extent of rice cultivation is a risk factor, mainly due to its association with free-ranging ducks acting as scavengers [10]. we used the percentage of land under rice cultivation as a measure of extent. rice cropping intensity is also a known risk factor [11, 17, 37]; we used the mean number of rice crops per year as a measure of intensity. the extent of aquaculture is a known risk factor [10], possibly because water bodies offer routes for transmission and persistence of the virus; the percentage of land under aquaculture was used as a metric. proximity to water bodies increases the risk of outbreaks [47, 50-52], possibly by increasing the chance of contact between wild water birds and domestic poultry. we measured the distance between the commune and the nearest a) lake and b) river. climatic variables (annual mean temperature and annual precipitation) have been associated with significant changes in risk [48, 53]. elevation, which is associated with types of land cover and agriculture, has been shown to be a significant risk factor in vietnam [10]. the compound topographical index (cti, also known as the topographic wetness index) is a measure of the tendency for water to pool.
studies in thailand and elsewhere [54] have shown that the extent of surface water is a strong risk factor, possibly due to the role of water in long-range transmission and persistence of the virus. in the absence of reliable and inexpensive data on the extent of surface water, we used cti as a proxy. cti has been used in ecological niche models (enm) of hpai h5n1 [55, 56]. however, given the nature of enm studies, the effect of cti as a risk factor has been unknown so far. cti has been used as a risk factor in the study of other infectious and non-infectious diseases [57]. some studies have shown that at local scales, the slope of the terrain (a component of cti) was significantly correlated with reservoir species dominance [58]. cti is a function of both the slope and the upstream contributing area per unit width orthogonal to the flow direction. cti is computed as follows: cti = ln(a_s / tan(β)), where a_s is the area value, calculated as (flow accumulation + 1) × (pixel area in m²), and β is the slope expressed in radians [59]. though previous studies have indicated that the normalized difference vegetation index (ndvi) is a risk factor [10, 20, 55, 60, 61], we did not include it explicitly in our models, as the urbanicity classification we used already incorporates ndvi [9]. we obtained commune-level data on hpai h5n1 outbreaks from the publicly available database of the department of animal health [10]. viet nam experienced its first major epidemic waves between december 2003 and february 2006 [10]. we chose to study the first wave (wave 1), which ended in february 2004, and the second wave (wave 2), which occurred between december 2004 and april 2005. in wave 1, 21% of the communes experienced outbreaks; in wave 2, 6% did. we used data from the 1999 population census of viet nam to estimate the human population per commune. we relied on data from two agriculture censuses of viet nam.
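the cti formula above can be sketched for a single raster cell as follows. the slope floor guarding against division by zero on perfectly flat cells is a common practical workaround, not something specified in the text, and the example inputs (a 90 m pixel, matching the srtm resolution mentioned later) are illustrative.

```python
import math

def cti(flow_accumulation, pixel_area_m2, slope_radians):
    """Compound topographical index for one cell: CTI = ln(A_s / tan(beta)),
    with A_s = (flow accumulation + 1) * pixel area, per the formula in the
    text. A tiny floor on tan(beta) avoids division by zero on flat cells
    (an assumption for numerical safety, not from the paper)."""
    a_s = (flow_accumulation + 1) * pixel_area_m2
    tan_beta = max(math.tan(slope_radians), 1e-6)
    return math.log(a_s / tan_beta)

# a nearly flat, high-accumulation cell (delta-like) yields a very large CTI
print(cti(flow_accumulation=500, pixel_area_m2=90 * 90, slope_radians=0.001))
```

this makes visible why delta communes saturate the index, as discussed in the results: flat terrain shrinks tan(β) while flow accumulation grows, so both the numerator and the ratio explode.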
this survey is conducted every five years, covering all rural households and those peri-urban households that own farms; thus about three-fourths of the country's households are included. the survey covers the number of households in major production activities; population and labor classified by sex, age, qualification, employment and major income source; agriculture, forestry and aquaculture land used by households, classified by source, type, and cultivation area by crop type; and farming equipment by purpose. commune-level surveys include information on rural infrastructure, namely electricity, transportation, medical stations, schools, fresh water sources, communication, markets, etc. detailed economic data are collected for large farms. we used the 2006 agriculture census for most variables because the first three epidemic waves occurred between the agricultural censuses of 2001 and 2006 but were closer in time to the 2006 census [10]. for data on poultry numbers, however, we used the 2001 agriculture census, because between 1991 and 2003 the poultry population grew at an average rate of 7% annually, but in 2004, after the first wave of the h5n1 epidemic, the poultry population fell 15%. only by mid-2008 did the poultry population return close to pre-epidemic levels. thus, we considered the poultry population data from the 2001 census to be more representative. we aggregated census household data to the commune level. a three-way classification of the rural-to-urban transition was based on a related study [9]. raster data on annual mean temperature and precipitation were obtained from the worldclim database and converted to commune-level data. the bioclimatic variables were compiled from the monthly temperature and precipitation values and interpolated to surfaces at 90 m spatial resolution [62]. this public database provides data on the average climatic conditions of the period 1950-2000.
elevation was generated from srtm 90-meter digital elevation models (dem) acquired from the consortium for spatial information (cgiar-csi). compound topographical index (cti) data were generated using the geomorphometry and gradient metrics toolbox for arcgis 10.1. prior to risk factor analysis we cleaned the data by identifying illogical values for all variables and then either assigning a missing value to them or adjusting the values. illogical values occurred mainly (in less than 1% of cases) for land-related variables, such as the percentage of commune land under a particular type of land use. next we tested each variable for normality using the bestfit software (palisade corporation). most of the variables were found to follow a log-normal distribution, and a log-transform was applied to them. we then examined the bi-variate correlations between all the risk factors (or their log-transforms, as the case may be). correlations were analyzed separately for each place. risk factors were eliminated from consideration when |r| ≥ 0.5 (r is the pearson correlation coefficient). when two risk factors were highly correlated, we chose to include the one that had not been adequately studied explicitly in previously published risk models. notably, we excluded a) elevation (correlated with human population density, chicken density, duck density, percentage of land under paddy, annual temperature and compound topographical index), b) human population density (correlated with elevation and cti), c) chicken density (only at the national level, correlated with cti), d) duck and goose density (correlated with elevation, chicken density, percentage of land under paddy, land-use diversity index and cti), e) annual temperature (correlated with elevation and cti) and f) cropping intensity (correlated with percentage of land under paddy).
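the |r| ≥ 0.5 screening step can be sketched as a greedy filter over a predictor table. the paper kept whichever member of a correlated pair was less studied; here that preference is simply encoded by column order (earlier columns are kept), and the variable names and synthetic data are illustrative only.

```python
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.5):
    """Greedily drop predictors so that no retained pair has
    |Pearson r| >= threshold. Columns are scanned in order, so ordering
    encodes which member of a correlated pair to prefer."""
    corr = df.corr().abs()
    keep = []
    for col in df.columns:
        if all(corr.loc[col, k] < threshold for k in keep):
            keep.append(col)
    return df[keep]

rng = np.random.default_rng(0)
x = rng.normal(size=200)
demo = pd.DataFrame({
    "chicken_density": x,
    "elevation": -x + rng.normal(scale=0.1, size=200),  # strongly correlated
    "precipitation": rng.normal(size=200),              # independent
})
print(list(drop_correlated(demo).columns))  # elevation is dropped
```

log-transforming skewed predictors first, as the study did, would change the pairwise correlations and hence which variables survive the filter.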
considering the importance of spatial autocorrelation in such epidemics, we used two modeling approaches: 1) a multi-level generalized linear mixed model (glmm) and 2) boosted regression trees (brt) [63, 64] with an autoregressive term [65]. glmm is a 'place'-oriented approach that is well suited to analyzing the effect of administrative groupings, while brt is a 'space'-oriented approach that accounts for the effects of physical proximity. we began by deriving an autoregressive term by averaging the presence/absence among a set of neighbors defined by the limit of autocorrelation, weighted by the inverse of the euclidean distance [65]. the limit of autocorrelation of the response variable was obtained from the range of the spatial correlogram ρ(h) [66]. to determine which predictor variables to include in the two models, we conducted logistic regression modeling separately for each of them, one by one, but included the autoregressive term each time. we finally included only those variables whose coefficient had a significance value p ≤ 0.2 (in at least one wave-place combination), and we noted the sign of the coefficient. this choice of p value for screening risk factors is common in similar studies [15, 18, 45, 67]. we used a two-level glmm (communes nested under districts) to account for random effects for an area influenced by its neighbors, and thus to study the effect of spatial autocorrelation. we used robust standard errors for tests of fixed effects. boosted regression trees, also known as stochastic gradient boosting, was performed to predict the probability of hpai h5n1 occurrence and to determine the relative influence of each risk factor on hpai h5n1 occurrence. this method was developed recently and has been applied widely for distribution prediction in various fields of ecology [63, 64]. it is widely used for species distribution modeling where only the sites of occurrence of the species are known [68].
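the autoregressive term described above (an inverse-distance-weighted mean of neighbors' presence/absence within the correlogram range) can be sketched as below. the coordinates, presence flags, and range are made-up illustrative inputs, not the study's commune data.

```python
import numpy as np

def autoregressive_term(coords, presence, max_range):
    """For each site, the inverse-distance-weighted mean of outbreak
    presence (0/1) among the other sites within max_range (the range of
    the spatial correlogram). Sites with no neighbors get 0."""
    coords = np.asarray(coords, dtype=float)
    presence = np.asarray(presence, dtype=float)
    auto = np.zeros(len(coords))
    for i in range(len(coords)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        mask = (d > 0) & (d <= max_range)
        if mask.any():
            w = 1.0 / d[mask]                       # inverse-distance weights
            auto[i] = np.sum(w * presence[mask]) / np.sum(w)
    return auto

coords = [(0, 0), (1, 0), (0, 1), (10, 10)]
presence = [1, 0, 1, 0]
print(autoregressive_term(coords, presence, max_range=2.0))
```

the term is then added as an ordinary covariate in the screening regressions and in both final models, which is how both approaches absorb neighborhood effects.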
the method has been applied in numerous studies predicting the distribution of hpai h5n1 [16, 51, 69-71]. brt utilizes regression trees and boosting algorithms to fit several models and combines them to improve prediction through an iterative loop over the model [63, 64]. the advantage of brt is that it applies stochastic processes that include probabilistic components to improve predictive performance. we used regression trees to select relevant predictor variables and boosting to improve accuracy in a single tree. the sequential process allows trees to be fitted iteratively through a forward stage-wise procedure in the boosting model. two important parameters specified in the brt model are the learning rate (lr) and tree complexity (tc), which together determine the number of trees required for optimal prediction [63, 64]. in our model we used 10 sets of training and test points for cross-validation, a tree complexity of 5, a learning rate of 0.01, and a bag fraction of 0.5. other advantages of brt include its insensitivity to co-linearity and to non-linear responses. however, for the sake of consistency with the glmm method, we chose to eliminate predictors that were highly correlated with other predictors and to apply log-transforms where needed. in the glmm models we used p ≤ 0.05 to identify significant risk factors. the predictive performance of the models was assessed by the area under the curve (auc) of the receiver operating characteristic (roc) curve. auc is a measure of the overall fit of the model that varies from 0.5 (chance event) to 1.0 (perfect fit) [72]. a comparison of auc with other accuracy metrics concluded that it is the most robust measure of model performance because it remains constant over a wide range of prevalence rates [73]. we used the corrected akaike information criterion (aicc) to compare each glmm model with and without its respective suite of fixed predictors.
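a brt run with the settings quoted above can be approximated with scikit-learn's stochastic gradient boosting: tree complexity 5 maps roughly to max_depth=5, and bag fraction 0.5 to subsample=0.5. the paper used r (gbm/dismo, where tree complexity is the interaction depth), so this python analogue on synthetic data is only an approximation of that workflow, not a reproduction.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for the commune-level predictor table and outbreak labels
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# settings mirroring the paper's BRT run: lr=0.01, tc~max_depth=5,
# bag fraction~subsample=0.5; n_estimators is an assumption (the R
# workflow selects the tree count by cross-validation instead)
brt = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.01, max_depth=5,
    subsample=0.5, random_state=0,
)

# 10-fold cross-validated AUC, matching the 10 train/test sets in the text
scores = cross_val_score(brt, X, y, cv=10, scoring="roc_auc")
print(round(scores.mean(), 3))
```

after fitting, `brt.feature_importances_` gives the relative-influence ranking that the results section relies on.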
we used spss version 21 (ibm corp., new york, 2012) for the glmm and r version 3.1.0 (the r foundation for statistical computing, 2014) for the brt. for calculating the spatial correlogram we used the spdep package of r. the fourteen predictor variables we modeled (see tables) were all found to be significantly associated with hpai h5n1 outbreaks (p ≤ 0.2) in at least one wave-place combination based on univariate analysis (but including the autoregressive term) (table 1). land-use diversity, chicken density, poultry flock-size diversity and distance to the national highway were found to have significant associations across five of the six wave-place combinations. the predictive power of the glmm models, as measured by the auc, is very good, with auc values ranging from 0.802 to 0.952 (tables 2-7). the predictive power of the national models was higher than that of the delta models. the predictive power of the brt models is good, with aucs ranging from 0.737 to 0.914. the brt models also had better predictive power at the national level than at the delta level. these values are higher than those reported for wave 1 (auc = 0.69) and wave 2 (auc = 0.77) by gilbert et al. [11]. both gilbert et al. [11] and this study found that at the national level the predictive performance for wave 2 was higher than that for wave 1. wave 2 mainly affected the mekong river delta. previous studies indicated that duck density was an important predictor [11]; our results, however, indicated that the diversity of duck flock size was a more important predictor than duck density. both the glmm and brt models found annual precipitation to be a significant factor. the glmm model indicated a negative association, similar to what was found by studies in china [51] and in the red river delta [53]. a global study of human cases also found occurrence to be higher under drier conditions [74]. generally, the role of precipitation was found to be far more significant in the deltas than for the country as a whole.
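for readers unfamiliar with the auc values quoted above, the metric has a simple rank interpretation: the probability that a randomly chosen outbreak commune is scored higher than a randomly chosen non-outbreak commune. a minimal sketch, on illustrative data rather than the study's predictions:

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity: the fraction of
    positive/negative pairs where the positive is scored higher
    (ties count as half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

on this scale, 0.5 is chance and 1.0 a perfect ranking, so the reported 0.802-0.952 range means the glmm ranks outbreak communes above non-outbreak communes at least four times out of five.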
the unadjusted relative risk (rr) of peri-urban areas in comparison with non-peri-urban areas was 1.41 and 1.60 for waves 1 and 2, respectively. in terms of urbanicity, we found chicken density, percentage of land under rice, percentage of land under aquaculture, flock-size diversity for ducks and geese, and the compound topographical index (cti) to be highest in peri-urban areas (fig 1a-1e). we also found that land-use diversity was higher in rural areas, but peri-urban areas had diversity levels only marginally lower (fig 1f). the urbanicity variable alone, however, was not found to be significantly associated with hpai h5n1 in any place according to the glmm model, except for the urban level in the red river delta for wave 2 and in the mekong river delta for wave 1. the brt model ranked urbanicity as one of the least influential variables. land-use diversity was found to be significantly associated with hpai h5n1 in both waves for viet nam according to the glmm model, but at the delta level the association was significant only for wave 2 in the mekong river delta. the brt model indicated that land-use diversity highly influenced hpai h5n1 at the national level in wave 2. for the remaining wave-place combinations, land-use diversity had a middle to below-middle rank of influence. both the glmm and brt models indicated that the diversity of chicken flock size had a strong association with hpai h5n1 for both waves at the national level. this was generally found to be true at the delta level, with some exceptions. the diversity of duck and goose flock size was also significantly associated with hpai h5n1 in all places, but the associations were much stronger in wave 2 than in wave 1. the glmm model indicated that the cti had a very strong association with hpai h5n1 at the national level in both waves, although this was not true in the two deltas. the cti is a steady-state wetness index commonly used to quantify topographic control on hydrological processes.
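the unadjusted rr quoted above is the ratio of outbreak incidence in peri-urban communes to incidence in all other communes. the sketch below uses made-up counts chosen only so the ratio matches the reported wave 1 value; the study's actual commune counts are not given in this excerpt.

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Unadjusted relative risk: incidence among exposed (peri-urban)
    divided by incidence among unexposed (non-peri-urban) units."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# hypothetical counts: 282 outbreak communes of 1,000 peri-urban vs
# 200 of 1,000 non-peri-urban reproduce the reported wave 1 RR of 1.41
print(round(relative_risk(282, 1000, 200, 1000), 2))  # 1.41
```

note this rr is "unadjusted": it ignores the confounders (chicken density, rice area, etc.) that the multivariate models later show explain most of the peri-urban excess.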
flow accumulation values in flat areas, like deltas, are very large; hence the cti was not a relevant variable in the glmm model in these areas. the brt model, however, indicated that cti had middle to low influence in all waves and places. we found very high spatial clustering effects, as indicated by the fact that in all waves and places the brt model found the spatial autocorrelation term to have the highest rank of influence. as expected, the relative influence of the autocorrelation term at the national level was higher (60-78%) than at the delta levels (14-35%). in the glmm models we found the akaike information criterion (aic) using the entire set of 14 variables to be much lower than the aic of a glmm model without fixed effects. this indicated that though clustering effects were significant, our theory-driven predictor variables improved model performance. a limitation of using surveillance methods for the dependent variable (poultry outbreaks) is that the data may have reporting/detection biases [11]. under-reporting/detection in rural areas as compared to peri-urban areas is possible. we believe that the urbanicity and shortest-distance-to-nearest-town risk factors serve as rough proxies for reporting/detection efficiency. previous studies have tended to use human population density as a proxy for this purpose. in our study we found a strong association between human population density and urbanicity, but we acknowledge that a categorical variable such as urbanicity may provide less sensitivity than a continuous variable such as human population density in this specific context. this study explored the validity of a general model for disease emergence that combined the iom 'convergence model' [6] and the social-ecological systems model [7, 8] for investigating the specific case of hpai in vietnam.
we sought to test the hypotheses that measures of urbanization, land-use diversification, and poultry intensification are correlated with outbreaks in poultry. our results generally support the hypothesis that social-ecological system transformations are associated with h5n1 outbreaks in poultry. the results presented here highlight three main findings: 1) when relevant risk factors are taken into account, urbanization is generally not a significant independent risk factor, but in peri-urban landscapes emergence factors converge, including higher chicken densities, duck and goose flock-size diversities, and fractions of land under rice or aquaculture; 2) high land-use diversity landscapes, a variable not previously considered in spatial studies of hpai h5n1, are at significantly greater risk for hpai h5n1 outbreaks; as are 3) landscapes where intensive and extensive forms of poultry production are co-located. only one other study has explicitly examined urbanicity in the context of hpai h5n1. loth et al. [17] found that peri-urban areas in indonesia were significantly associated with hpai h5n1 cases, even in multivariate models. our study, however, attempted both to associate hpai h5n1 with the degree of urbanicity and to determine the features of peri-urban areas that place them at risk. when those features (i.e., chicken densities, duck and goose flock-size diversities, and the fraction of land under rice or aquaculture) are included in multivariate models, the role of the urbanization variable per se diminishes. we found that in the main river deltas in viet nam (red river and mekong), urbanization had no significant association with hpai h5n1. this may be because the deltas are more homogeneous, in terms of urbanization, than the country as a whole. this is the first study to examine land-use diversity as a risk factor for hpai h5n1.
using the gini-simpson diversity index of the five land-use classes on which data were collected in the 2006 viet nam agricultural census, and the presence or absence of hpai outbreaks at the commune level, our results indicate a strong association between land-use diversity and hpai h5n1 at the national level and in the mekong river delta. this metric captures both the variety of habitats and the complexity of geospatial patterning likely associated with transmission intensity. our results are similar to what has been observed in studies of other eids using fragmentation metrics (e.g., [75-77]). this is one of the few studies, however, to link landscape fragmentation to an eid in poultry and not just to the vector and/or hosts of the eid. previous studies have focused on poultry production factors such as type of species, size of flocks, and extent of commercialization (e.g., [15, 17-19]). this study expands on those findings by providing evidence that when intensive and extensive systems of chicken and/or duck and goose production co-exist in the same commune, the commune experiences a higher risk of disease outbreak. future studies need to examine the biological causal mechanisms in this context. we suggest that national census data (particularly agricultural censuses) compiled at local levels of administration provide valuable information that is not available from remotely sensed data (such as poultry densities) or that requires a large amount of labor to map at national or larger scales (land-use diversity). mapping land-use classes at the national scale for local administrative units (i.e., the 10,820 communes in viet nam) is not an insignificant task. future studies, however, could examine the correlation between a census-based metric and metrics derived from remote sensing used to measure the proportional abundance of each land-cover type within a landscape [78].
vietnam is relatively advanced in making digital national population and agricultural census data available in a format that can be linked to administrative boundaries. while other nations are beginning to develop similar capacities, in the short term the application of this method to other countries may be limited. ultimately, both census and remotely sensed data can be used independently to map the urban transition and the diversity of land use; these tools, however, may provide their greatest insights when used together. another important contribution of this study was the discovery of the importance of cti. so far cti had been used only in ecological niche modeling studies of hpai h5n1; the specific role and direction of influence of cti has until now been unknown. our study, the first to use cti as a risk factor, found it had a large positive influence on hpai h5n1 risk at the national level. previous studies have highlighted the role of surface water extent in the persistence and transmission of the hpai h5n1 virus. these studies measured surface water extent as the area covered by water, the magnitude of seasonal flooding, the distance to the nearest body of water, or other variables that are often difficult to map using remotely sensed data, especially for large-area studies. cti, on the other hand, has the potential to serve as an excellent surrogate that can easily be computed in a gis database. the national and regional (delta) models differed quite considerably, both in terms of performance and of significant risk factors. in the deltas we commonly found only chicken density, duck flock-size diversity and annual precipitation to be significant. this suggests that the dynamics of risk at the commune level are strongly dependent on the spatial range of analysis, consistent with another study in the mekong delta [61].
though that study's model initially included three dozen commonly known risk factors, the significant risk factors were limited to poultry flock density, proportion of households with electricity, re-scaled ndvi median for may-october, buffalo density and sweet potato yield. another study in the red river delta [79] found that, in addition to the typical poultry density metrics, only the presence of poultry traders was significant. we speculate that for smaller regions, especially for known hot-spots, the relevant risk factors are those that reflect short-range, short-term driving forces such as poultry trading and the presence of live bird markets and wet markets. improving model performance for smaller regions would require highly refined and nuanced metrics for poultry trading, road infrastructure, water bodies, etc.: data that are typically not available through census surveys. the differences between the national and regional models suggest that our results can inform planners making decisions at different hierarchical levels of jurisdiction: national, regional and local. our study has the potential to inform the design of future research on the epidemiology of other eids in viet nam and elsewhere. for example, we speculate that in southeast asia, japanese encephalitis, the transmission of which is associated with rice cultivation and flood irrigation [80], may also show a strong association with peri-urbanization. in some areas of asia these ecological conditions occur near, or occasionally within, urban centers. likewise, hantaan virus, the cause of korean hemorrhagic fever, is associated with the field mouse apodemus agrarius and rice harvesting in fields where the rodents are present [80]. our work has demonstrated that the percentage of land under rice in peri-urban areas and rural areas is similar.
hence diseases associated with rice production are likely to peak in peri-urban areas given other risk factors such as land-use diversity, cti, and distance to infrastructure. our poultry flock-size diversity findings may also be relevant to understanding the dynamics of other poultry related infections such as newcastle disease. finally, these results suggest the validity of a general model of zoonotic disease emergence that integrates iom's convergence model with the subsequently proposed social-ecological systems and eid framework. thus, convergence represents the coalescence in time and space of processes associated with land-cover and land-use changes. project results question whether the urban/rural land-use dichotomy is useful when large areas and parts of the population are caught between the two. planners need better tools for mapping the rural-urban transition, and for understanding how the specific nature of peri-urban environments creates elevated health risk that require adaptation of existing planning, land use, and development practices. committee on emerging microbial threats to health in the 21st century. emerging infections: microbial threats to health in the united states emerging infectious diseases in 2012: 20 years after the institute of medicine report navigating social-ecological systems: building resilience for complexity and change committee on emerging microbial threats to health in the 21st century. microbial threats to health: the threat of pandemic influenza avian influenza virus (h5n1): a threat to human health committee on emerging microbial threats to health in the 21st century. 
microbial threats to health: emergence, detection, and response emerging and reemerging infectious diseases: biocomplexity as an interdisciplinary paradigm disease ecology and the global emergence of zoonotic pathogens classifying and mapping the urban transition in vietnam an analysis of the spatial and temporal patterns of highly pathogenic avian influenza occurrence in vietnam using national surveillance data mapping h5n1 highly pathogenic avian influenza risk in southeast asia area variations in health: a spatial multilevel modeling approach. health place world development report 2009: reshaping economic geography risk factors of poultry outbreaks and human cases of h5n1 avian influenza virus infection in west java province, indonesia ecologic risk factor investigation of clusters of avian influenza a (h5n1) virus infection in thailand spatial distribution and risk factors of highly pathogenic avian influenza (hpai) h5n1 in china identifying risk factors of highly pathogenic avian influenza (h5n1 subtype) in indonesia risk factors and clusters of highly pathogenic avian influenza h5n1 outbreaks in bangladesh freegrazing ducks and highly pathogenic avian influenza modelling the ecology and distribution of highly pathogenic avian influenza (h5n1) in the indian subcontinent the urban health transition hypothesis: empirical evidence of an avian influenza kuznets curve in vietnam? 
urbanization and the spread of diseases of affluence in china defining the "urban" in urbanization and health: a factor analysis approach understanding community context and adult health changes in china: development of an urbanicity scale quantifying the urban environment: a scale measure of urbanicity outperforms the urban-rural dichotomy the emergence of desakota in asia: expanding a hypothesis risk factors for human disease emergence global trends in emerging infectious diseases pathogenic landscapes: interactions between land, people, disease vectors, and their animal hosts unhealthy landscapes: policy recommendations on land use change and infectious disease emergence the role of ecotones in emerging infectious diseases ecological consequences of habitat fragmentation: implications for landscape architecture and planning does biodiversity protect humans against infectious disease? wartenber cm. von thunen's isolated state health and peri-urban natural resource production livestock production: recent trends, future prospects anthropogenic factors and the risk of highly pathogenic avian influenza h5n1: prospects from a spatial-based model prospects for emerging infections in east and southeast asia 10 years after severe acute respiratory syndrome zoonosis emergence linked to agricultural intensification and environmental change risk factor modelling of the spatio-temporal patterns of highly pathogenic avian influenza (hpaiv) h5n1: a review diversity and evenness: a unifying notation and its consequences land mosaics: the ecology of landscapes and regions characterization of poultry production systems in vietnam spatio-temporal epidemiology of highly pathogenic avian influenza (subtype h5n1) in poultry in eastern india agro-environmental determinants of avian influenza circulation: a multisite study in thailand, vietnam and madagascar risk factors for highly pathogenic avian influenza (hpai) h5n1 infection in backyard chicken farms risk analysis for the highly 
pathogenic avian influenza in mainland china using meta-modeling environmental factors contributing to the spread of h5n1 avian influenza in mainland china flying over an infected landscape: distribution of highly pathogenic avian influenza h5n1 risk in south asia and satellite tracking of wild waterfowl environmental and anthropogenic risk factors for highly pathogenic avian influenza subtype h5n1 outbreaks in romania mapping spread and risk of avian influenza a (h7n9) in china risk for infection with highly pathogenic avian influenza virus (h5n1) in backyard chickens spatio-temporal occurrence modeling of highly pathogenic avian influenza subtype h5n1: a case study in the red river delta rivers and flooded areas identified by medium-resolution remote sensing improve risk prediction of the highly pathogenic avian influenza h5n1 in thailand ecology and geography of avian influenza (hpai h5n1) transmission in the middle east and northeastern africa predictable ecology and geography of avian influenza (h5n1) transmission in nigeria and west africa chagas disease risk in texas the effect of habitat fragmentation and species diversity loss on hantavirus prevalence in panama soil-landscape modeling and spatial prediction of soil attributes spatio-temporal dynamics of global h5n1 outbreaks match bird migration patterns risk factors and characteristics of h5n1 highly pathogenic avian influenza (hpai) post-vaccination outbreaks very high resolution interpolated climate surfaces for global land areas a working guide to boosted regression trees novel methods improve prediction of species' distributions from occurrence data an autologistic model for the spatial distribution of wildlife multivariable geostatistics in s: the gstat package ecological determinants of highly pathogenic avian influenza (h5n1) outbreaks in bangladesh species distribution models: ecological explanation and prediction across space and time improving risk models for avian influenza: the role of 
key: cord-004091-gex0zvoa authors: abdulkareem, shaheen a.; augustijn, ellen-wien; filatova, tatiana; musial, katarzyna; mustafa, yaseen t. title: risk perception and behavioral change during epidemics: comparing models of individual and collective learning date: 2020-01-06 journal: plos one doi: 10.1371/journal.pone.0226483 sha: doc_id: 4091 cord_uid: gex0zvoa modern societies are exposed to a myriad of risks ranging from disease to natural hazards and technological disruptions.
exploring how the awareness of risk spreads and how it triggers a diffusion of coping strategies is prominent in the research agenda of various domains. it requires a deep understanding of how individuals perceive risks and communicate about the effectiveness of protective measures, highlighting learning and social interaction as the core mechanisms driving such processes. methodological approaches that range from purely physics-based diffusion models to data-driven environmental methods rely on agent-based modeling to accommodate context-dependent learning and social interactions in a diffusion process. mixing agent-based modeling with data-driven machine learning has gained popularity. however, little attention has been paid to the role of intelligent learning in risk appraisal and protective decisions, whether used in an individual or a collective process. the differences between collective learning and individual learning have not been sufficiently explored in diffusion modeling in general and in agent-based models of socio-environmental systems in particular. to address this research gap, we explored the implications of intelligent learning on the gradient from individual to collective learning, using an agent-based model enhanced by machine learning. our simulation experiments showed that individual intelligent judgement about risks and the selection of coping strategies by groups with majority votes were outperformed by leader-based groups and even individuals deciding alone. social interactions appeared essential for both individual learning and group learning. the choice of how to represent social learning in an agent-based model could be driven by existing cultural and social norms prevalent in a modeled society. when facing risks, people go through a complex process of collecting information, deciding what to do, and communicating with others about the effectiveness of their actions.
social influence may interfere with personal experiences, making peer groups and group interactions important factors. this is especially important in understanding disease diffusion and the emergence of epidemics, as these phenomena annually take thousands of lives worldwide [1]. hence, good responsive and preventive strategies at both the individual and government levels are vital for saving lives. a choice of strategy depends on behavioral aspects, complex interactions among people [2], and the information available about a disease [3]. perceiving the risk of an infectious disease may trigger behavioral change, as during the 2003 sars epidemic [4]. gathering information and experience through multiple sources is essential for increasing awareness of disease risk and taking protective measures [5]. to help prevent epidemics, we need advanced tools that identify the factors that help spread information about life-threatening diseases and that change individual behavior to curb the diffusion of disease. various scientific approaches have been developed to tackle this challenge. network science is prominent in studying how epidemics propagate and how different awareness mechanisms can help to prevent the outbreak of disease. some researchers propose a framework with different mechanisms for spreading awareness about a disease as an additional contagion process [6]. others model populations as multiplex networks where the disease spreads over one layer and awareness spreads over another [7]. the influence of the perception of risk on the probability of infection has also been studied [8]. several recent studies have shown how information spreads in complex networks [9, 10]. however, a different approach is needed to account for individual heterogeneity (such as income and education levels), the richness of the information on social and spatial distance, or media influence.
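the two-layer idea cited above [7] can be made concrete with a toy simulation: disease spreads over a physical-contact layer while awareness spreads over a communication layer and lowers the infection probability of aware nodes. all parameters (BETA, BETA_AWARE, TELL) and both random layers below are invented for illustration and are not taken from the cited models:

```python
import random

random.seed(42)

N = 200
# two layers over the same nodes: physical contacts carry the disease,
# communication links carry awareness of it
contact = {i: random.sample([j for j in range(N) if j != i], 4) for i in range(N)}
comm = {i: random.sample([j for j in range(N) if j != i], 6) for i in range(N)}

infected = {0, 1, 2}                       # seed cases
aware = {0}                                # seed awareness
BETA, BETA_AWARE, TELL = 0.30, 0.06, 0.50  # illustrative rates

for _ in range(30):
    new_inf, new_aware = set(), set()
    for i in infected:                     # disease layer
        for j in contact[i]:
            p = BETA_AWARE if j in aware else BETA  # awareness protects
            if j not in infected and random.random() < p:
                new_inf.add(j)
    for i in aware | infected:             # awareness layer
        for j in comm[i]:
            if j not in aware and random.random() < TELL:
                new_aware.add(j)
    infected |= new_inf
    aware |= new_aware

print(len(infected), len(aware))
```

in coupled runs of this kind, raising TELL relative to BETA typically shrinks the final infected set, which is the qualitative effect the awareness literature above describes.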
here, a combination of modeling with data-driven machine learning becomes particularly attractive. simulation tools are commonly used to assess the impacts of policies in the health domain [3, 11, 12]. among the models for policy-making, agent-based modeling (abm) is recommended as the most promising modeling approach [13]. abm studies the dynamics of complex systems by simulating an array of heterogeneous individuals that make decisions, interact with each other, and learn from their experiences and the environment. the method is widely used to analyze epidemics [14] [15] [16] [17]. its advantage is in analyzing the factors that influence the spread of infectious diseases and the actions of individual actors [18]. as a bottom-up method, abm integrates micro-macro relationships while accommodating agents' heterogeneity and their adaptive behavior. it ensures that the interaction between the spatial environment and the behavior of agents is represented, and it can integrate a variety of data inputs including aggregated, disaggregated, and qualitative data [19] [20] [21] [22]. two processes are essential in representing agents' health behavior and disease dynamics: the evolution of risk perception and the selection of a coping strategy. hence, the core of a disease abm lies in defining the learning methods that steer these two processes. sensing of information (global, from the environment, and social, i.e., from other agents), exchanging information (i.e., interactions between agents), and processing of information (i.e., decision making) are critical. machine learning (ml) techniques can support these three elements and offer a more realistic way to adjust agents' behavior in abm [23] [24] [25] [26]. as more data become available in the analysis of the spread of disease, supporting abm with data-driven approaches becomes a prominent research direction.
ml has the potential to enhance abm performance, especially when the number of agents is large (e.g., pandemics) and the decision-making process is complex (e.g., depending on both past experience and new information from the environment and peers). ml approaches in abm can provide agents with the ability to learn by adapting their decision-making process in line with new information. people make decisions both as individuals and as members of a group who imitate the decisions taken by the group or its leader [27] . information about social networks is becoming increasingly available, e.g., through social media analysis. it may reveal collective behavior in various domains, including health [28] . for example, people are not entirely rational and imitate others in their views about vaccines [29] . many abms rely solely on the decisions of individuals, paying little attention to group behavior [30] . yet, mirroring emotions, beliefs, and intentions in an abm with the collective decision making of crowds affects social contagion in abms [31] . agents-individuals and groups-may learn in isolation or through interactions with others, such as their neighbors [32] . in isolated learning, agents learn independently, requiring no interaction with other agents. in interactive learning, several agents are engaged in sensing and processing information and communicating and cooperating to learn effectively. interactive learning can be done in multiple ways, i.e., based on different social learning strategies [33] . agents might be represented as members of local groups, learning together and mimicking behavior from other group members (i.e., collective learning) [34] . yet, the impact of different types of interactive learning in groups compared to learning by an individual is an under-explored domain in the development of abms of socio-environmental systems. 
this article examines the influence of individual vs group learning on a decision-making process in abms enhanced with ml. to illustrate the implications of individual and collective intelligence in abms, we used a spatially explicit disease model of cholera diffusion [35] as a case study. bayesian networks (bns) steer agents' behavior when judging risk perception (rp) and coping appraisal (ca). we quantitatively tested the influence of agents' ability to learn (individually or in a group) on the dynamics of disease. the main goal is, therefore, methodological: to introduce ml into a spatial abm with a focus on comparing individual learning to collective learning. the added value of the analysis of alternative implementations of learning in abms goes beyond the domain of disease modeling. it illustrates the effects of individual and collective learning on the field of abms of socio-environmental systems as a whole. therefore, our main objectives are to (1) simulate the learning processes of agents on a gradient of learning from individual to collective, and (2) understand how these learning processes reveal the dynamics of social interactions and their emergent features during an epidemic. to address these objectives, the article aims to answer the following research questions: (rq1) what is the impact of social interactions on the perceptions and decisions of intelligent individuals facing a risk? (rq2) how do different implementations of group learning (deciding by majority voting vs by leaders) impact the diffusion process? (rq3) what are the implications of implementing collective learning for risk assessment combined with individual coping strategies? by answering these methodological questions for our case study, we reveal whether individuals perform better than groups at perceiving risks and at coping during epidemics.
to explore the implications of intelligent learning on the gradient from individual to collective, we advance the existing cholera abm (cabm) originally developed to study cholera diffusion [35]. in cabm, mls steer agents' behavior [23, 35, 36], helping them to adjust risk perception and coping during an epidemic outbreak. for this study, we ran eight abms to test various combinations of individual and group learning, using different information sources (with or without interactions among agents) as factors in the bns. we investigate the extent to which the epidemic spreads, depending on these different learning approaches regarding risk perception and coping decisions. s1 appendix provides a technical description of the model and the mls. below we briefly outline the processes in cabm essential to understand the performed simulation experiments. nowadays, 69 countries worldwide are labeled as cholera-endemic, with 2.8 million cases each year leading to 91,000 deaths [37]. people in urban slums and refugee camps are at high risk of cholera because of limited or no access to clean water and adequate sanitation. cabm is an empirically and theoretically grounded model developed to study the 2005 cholera outbreak in kumasi, ghana [35]. the open-source code for the model is available online. cabm is grounded in the protection motivation theory (pmt) in psychology [23, 38]. the empirically-driven bns model a two-stage decision process of people facing a disease risk: learning to update risk perceptions (threat appraisal, bn1 in fig 1) and making decisions about how to adapt their behavior during the epidemic (coping appraisal, bn2 in fig 1). according to pmt, threat appraisal depends on individual perceptions of the severity of the disease (evaluating the state of the environment and observing what happens to others) and one's own susceptibility.
the coping appraisal is driven by the perceived response efficacy (the belief that the recommended behavior will protect) and one's own self-efficacy (the ability to perform the recommended behavior). cabm simulates individuals who are spatially located in a city. these agents differ by income and education level. individual agents form households and neighborhood groups and are susceptible to cholera at the beginning of the simulation. cabm implements an adjusted seir model [39] as explained in fig 2 below . instead of going directly from susceptible to exposure, we introduced an awareness component in which agents can assess their risk. options included: no risk perception in which the agent will be exposed (arrow 1, fig 2) ; no risk perception yet no exposure (arrow 2, fig 2) ; and risk perception leading agents to the coping phase (arrow 3, fig 2) . exposure to cholera takes place through the use of unsafe river water. agents can influence their exposure by selecting alternative water sources. these alternative water sources can either reduce their exposure to zero (arrow 5, fig 2) or have no effect on their infection risk (arrow 4, fig 2) . their actions are contingent on income and education levels, as well as on the information that they retrieve from their own experience, information received from others, or observations of the environment. it is not possible to judge by sight whether surface water is infected with cholera, but the agents use other types of visual pollution, e.g., floating garbage, as a proxy. when household agents find the visual pollution level too high, they may decide on an alternative. household agents with high incomes do not take a risk and will buy safe water. 
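the adjusted seir cycle described above (arrows 1-5 in fig 2) can be read as a small per-agent state machine. the sketch below follows that reading; the incubation and recovery probabilities are invented placeholders, not the calibrated cabm values:

```python
import random

random.seed(7)

# states of the adjusted seir cycle
S, COPING, E, I, R = "S", "coping", "E", "I", "R"

def daily_step(state, perceives_risk, water_contaminated, coping_effective=True):
    """one day for one household agent, following arrows 1-5 in fig 2."""
    if state == S:
        if perceives_risk:                     # arrow 3: enter the coping phase
            return COPING
        return E if water_contaminated else S  # arrows 1 and 2
    if state == COPING:
        if coping_effective:                   # arrow 5: exposure reduced to zero
            return COPING
        return E if water_contaminated else COPING  # arrow 4
    if state == E:
        return I if random.random() < 0.5 else E    # placeholder incubation rate
    if state == I:
        return R if random.random() < 0.2 else I    # placeholder recovery rate
    return R

state = S
for day in range(10):  # an agent who never perceives risk and uses unsafe water
    state = daily_step(state, perceives_risk=False, water_contaminated=True)
print(state)
```

an agent who does perceive the risk moves to the coping phase instead and, if its coping choice is effective, never gets exposed.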
in cabm, the risk perception was updated using bn1, which depends on the agent memory (me), the visual pollution at the water fetching point (vp), and the evidence of the severity of the epidemic based on communication from the media (m) and potentially with neighbor households (cnh). media broadcast news about cholera starting on day 21 onward (see: ghana news archive). during a simulation, household agents may also interact with their neighbors zero to seven times a day (applied randomly) [40] . when interactive learning was activated, social interactions among household agents helped to share information on cholera cases that occurred in their communities and on the effectiveness of coping decisions. if risk perception was positive (bn1 returns a value above 0.5), household agents activate bn2 to decide which action (d1 -d4, fig 1) to take given their income (i) and education (e) level, the experience of their own household with cholera (oe), and possibly their neighbors' experiences with cholera (ne) [22] . s1 appendix provides further details on how the bns are implemented, together with tables of the parameters. sensitivity analysis of the aggregated model dynamics on the bns inputs and training alternatives can be found in [23, 36] . a feeling of risk among individuals is fueled by the type of information, the amount of information communicated, and the attention to specific information that may trigger fear and stimulate a learning process regarding a new response strategy [41] . gained information helps individuals (i) to estimate the severity of the emerging event, (ii) to assess the probability of being exposed to infection, and (iii) to evaluate the efficiency of their coping responses. we used a complex network approach to illustrate the gradual processes from individual to collective learning in cabm (fig 3) . each stage is presented as a single network over which a given learning process spreads. 
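a deliberately simplified stand-in for the bn1/bn2 pipeline described above is sketched below. the weighted sums and their weights are invented, and the label for a fourth action d4 (buying safe water) is an assumption; the actual bns are trained from survey data. only the 0.5 threshold gating bn2 on a positive bn1 follows the text directly:

```python
def risk_perception(memory, visual_pollution, media_reports, neighbour_cases):
    """stand-in for bn1: combine the four evidence sources (me, vp, m, cnh)
    into a probability-like score; weights are invented for illustration."""
    return (0.30 * memory + 0.25 * visual_pollution
            + 0.20 * media_reports + 0.25 * neighbour_cases)

def coping_appraisal(income, education, own_experience, neighbour_experience):
    """stand-in for bn2: choose one of the coping actions d1-d4."""
    protect = (0.3 * income + 0.2 * education
               + 0.3 * own_experience + 0.2 * neighbour_experience)
    if protect > 0.6:
        return "d4: buy safe water"  # hypothetical label for this sketch
    if protect > 0.4:
        return "d3: boil river water"
    if protect > 0.2:
        return "d2: walk to a cleaner point"
    return "d1: use water as is"

# a household only consults bn2 when bn1 exceeds the 0.5 threshold
rp = risk_perception(memory=0.8, visual_pollution=0.6,
                     media_reports=1.0, neighbour_cases=0.4)
action = coping_appraisal(0.3, 0.5, 0.7, 0.6) if rp > 0.5 else None
print(rp, action)
```

the neighbour-related inputs (cnh, ne) are exactly the nodes that are switched off in the isolated-learning scenarios described next.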
each network in fig 3 had the same set of nodes and connections to show how different processes can lead to different outcomes in the same network structure when different information is used to make decisions. in individual learning (fig 3, process 1a and process 1b), agents depend on their prior knowledge (memory, experience, and/or the perceived risk of the environment, such as visual pollution). such learning is the process of gaining skills or knowledge, which an agent pursues individually to support a task [42]. group learning is the process of acquiring new skills or knowledge that is undertaken collectively in a group of several individual agents and driven by a common goal [32]. group learning can be realized by making all group members use their own ml algorithms to gather information to perform a specific sub-task (decentralized), and then pool their opinions collectively by making one decision for the entire group (fig 3, process 2a and process 2b). here, we adopt a "majority vote" as the resolution mechanism in the decentralized group decision-making. however, group learning can also be realized by introducing a single agent (leader) who uses ml to learn for the whole group to help it accomplish its group task (centralized). in centralized group learning, agents in the group copy the decisions of their leader. in both cases, all agents that belong to a group share the same decision, but the information on which this decision is based varies considerably (fig 3, process 3a and process 3b). both individuals and groups may learn either by taking information from their social networks (i.e., having it as an additional source of information in their ml algorithms) or not. when individual agents are isolated learners (fig 3, process 1a), they do not have a social network but make decisions in isolation using only the information they possess.
when individuals learn interactively (fig 3, process 1b), they gain new skills or knowledge by perceiving information, experience, and the performance of other agents through their social network. like individual agents, groups can also learn in isolation or interactively. in isolated learning, agents learn independently within their groups, without exchanging any information with each other or with their neighbors (fig 3, process 2a and process 3a). in interactive learning, agents communicate with their neighbors to learn effectively within their groups (fig 3, process 2b and process 3b). neighbors could be members of the same group or belong to other income/education groups but live in the same community and share the water collection points. therefore, there might be communication across the groups (fig 3, process 2b and process 3b). groups can be defined in different ways and at different hierarchical levels. this model uses three levels of organization: the individual agent, groups of agents, and communities that comprise several groups. in cabm, household agents living in the same community are grouped based on their income and education level since their coping behavior depends on these factors. agents' behavior in the disease abm is also contingent on their geographic location. hence, all neighbors that share the same water fetching point may contact and exchange information between their groups in cabm. the size and composition of the groups impact the results of the different learning strategies. when applying interactive learning, a group's decision can be influenced by information retrieved from neighbors inside the group and neighbors outside the group but inside the community. for interactive groups, process 2b (fig 3) shows a situation in which individual household agents make decisions that account for interactions in their social networks (as in process 1b).
then each group conducts a majority vote, allowing it to proceed with the option chosen by the majority of its members. process 3b (fig 3) shows a situation in which the leaders of each group make decisions based on their interactions with others (nodes a, b, and c are leaders of groups g1, g2, and g3, respectively, in fig 3). the decisions of group leaders are adopted by the household agents of the group. we designed eight simulation scenarios to answer the research questions about the influence of isolated vs interactive individual learning (rq1); centralized vs decentralized learning during both the risk perception (rp, bn1) and coping appraisal (ca, bn2) processes (rq2); and collective learning about risk perception combined with individual coping appraisal (rq3) on the dynamics of the epidemic and the performance of the model (table 1). we systematically vary cabm settings following the steps in fig 4 to change the gradient of intelligent learning (steps 2 and 3) in different cognitive stages corresponding to our decisions of interest: risk and coping appraisal (step 1). table 1 shows the setup of the eight scenarios that reflects the three stages shown in fig 4. the area of the case study captured in cabm is 19.2 km² and comprises 21 communities. we assumed that high-income households bought water, so they were excluded from intelligent learning. communities can have up to four groups based on their income and education levels. ten to fifteen percent of the household agents in the case study area usually fetch water from the river. two communities in our dataset (#11 and #20) hosted only high-income households, so they were excluded from intelligent learning. hence, we simulated 76 groups spread over 19 communities. each simulation was run for 90 days with a time step equal to one hour. given the inherent randomness of abms, we ran each model 100 times, generating a new synthetic population every 10 runs.
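the three decision arrangements used in the scenarios (individual judgement, decentralized majority vote, centralized leader) can be sketched as resolution functions over the members' individual opinions. this is illustrative python, not the cabm code; the choice of leader index is arbitrary:

```python
from collections import Counter

def individual_decisions(opinions):
    """processes 1a/1b: each agent keeps its own judgement."""
    return list(opinions)

def majority_vote(opinions):
    """processes 2a/2b (decentralized): all members adopt the option
    chosen by the majority of the group."""
    winner, _ = Counter(opinions).most_common(1)[0]
    return [winner] * len(opinions)

def leader_copy(opinions, leader_index=0):
    """processes 3a/3b (centralized): all members copy the leader."""
    return [opinions[leader_index]] * len(opinions)

group = ["boil", "use_as_is", "boil", "boil", "use_as_is"]
print(individual_decisions(group))
print(majority_vote(group))
print(leader_copy(group, leader_index=1))
```

note that the information cost differs sharply: majority voting still requires every member to run its own learner before pooling, while the leader variant runs one learner per group.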
besides the extensive gis data and aggregated data on disease dynamics, we ran a survey via a massive open online course (mooc) geohealth in two rounds (2016 and 2017) to gain data on individual behavior. the participants, primarily students from developing countries, were introduced to the problem of cholera disease, saw pictures of water, and were asked if they would use the water as it is (d1 in fig 1), walk to a cleaner water point (d2), or use the water after boiling it (d3). the survey data were used to construct and train our bns [36]. we also used these data to evaluate the results of expert-driven bns in cabm [43]. table 2 shows that trust in boiled water was much higher than trust in unboiled water. agents also changed their behavior and began boiling water in the model. to evaluate the impact of individual and social intelligence on agents' learning processes regarding risk perception and coping appraisal and the resulting patterns of disease spread, we used four output measures: disease diffusion, risk perception, spatial patterns, and model performance. these aspects are described in more detail in the odd protocol (s1 appendix). we also measured the performance of models m1-m8 in terms of run time and the number of intelligent decision steps, i.e., when agents called their bn1 and/or bn2. given the stochastic nature of abms, we ran each of the eight models 100 times. the averages and standard deviations of the results of these runs for each output measure are listed in table 3. behavioral changes can lead to different durations of the epidemic and reduce the number of infected cases [44]. this was shown by running cabm with the eight models. models m5, m6, m7, and m8 recorded a longer duration of active infection during the epidemic (75-79 days, table 3). these results are closer to the real duration of the epidemic in 2005 (75 days, table 3).
m5, m6, and m8 applied centralized learning, while m7 applied decentralized learning, but only for the risk perception stage. m2, which is individual learning with social interactions, also recorded a shorter duration when compared to the real data of 2005 (68 days in m2). however, isolated learning and decentralized learning for both risk perception and coping appraisals recorded shorter epidemic durations, with an average difference of -25% compared to the empirical data (table 3). all eight scenarios generated more infected cases than the empirical data. this was because infection with cholera bacteria leads to a clinical spectrum that ranges from asymptomatic to symptomatic cholera cases. asymptomatic cases are not reported, although they represent roughly half of all cases [45]. in our simulations, we did not differentiate between symptomatic and asymptomatic cases; all infected cases were considered to be symptomatic. therefore, following [45], in table 3 we reported 57% of the total infected cases that occurred when running the eight models. m8, which uses centralized learning for risk perception and individual interactive learning for coping appraisal, reported the fewest infected cases (2,107 against 1,621 in reality). this was followed by m2 (individual social learning) with 2,279 cases and m1 (individual isolated learning) with 2,457 occurrences. these three values reflect the fact that when household agents learned to cope and make decisions individually, they were more efficient than when they were in groups. when these decisions were combined with social interactions, they led to better protection (m2 and m8). in general, group behavior had a negative effect, although centralized groups had a less negative impact compared to decentralized ones. finally, in m7, where household agents learned risk perception in decentralized groups and learned to cope individually, 2,911 infected cases were recorded (table 3).
hence, cabm household agents' engagement in decentralized groups for appraising disease risk hindered the perception of risk, lowering agents' motivation to change their behavior to more protective alternatives. m8 reported the spatial distribution of infected cases (spi) over the communities closest to the empirical data (0.75, compared to 1 for the empirical data), followed by m5 with 0.7 (table 3). the spatial patterns of the two collective learning models (m8 and m5) were thus the most similar to the spatial patterns in the empirical data. the correlation between the peak of the epidemic and the peak of risk perception reflects how responsively household agents perceived the risk of the epidemic. scenarios m2, m5, m6, and m8 were more responsive. that is, the peak of risk perception in m2 came three days after its epidemic peak, and the peaks in m5, m6, and m8 came seven days after their epidemic peaks (table 3). m1, m3, m4, and m7 showed peaks for risk perception near the end of the simulation time. individuals in m1 were isolated, along with individuals in m3; therefore, they kept following their usual behavior of fetching water and using it as it is. in m4 and m7, household agents depended on majority votes in their groups to make their decisions on risk and to change behavior. further explanation is given visually in the next sections. table 4 shows the number of steps and the time required to run one simulation of each model. the number of agents that were supposed to go for risk perception daily was 15% of the total number of household agents (which totaled 8,500). this percentage was derived from national statistical data from ghana statistical services [35]. over the 90 days of the epidemic, 114,750 agents appraised their risk perception (i.e., used their bn1). table 4 also shows the number of steps during which agents perceived the risk of disease (i.e., risk perception equals 1).
notably, in m3-m8, if the group as a whole assessed the risk perception as zero, then none of its members did the coping appraisal, i.e., the number of steps when bn2 was activated is zero. in such cases, only the total number of steps with activated bn1 assessing risk perceptions was included in table 4. models with centralized learning required the shortest computation times (table 4). for example, m5, where only the isolated leaders with centralized learning consult their bns, had the best performance with the shortest runtime. moreover, m5 and m6 recorded the fewest steps across all models. although the average number of agents with risk perception per simulated day was high (410 agents), there were only 6,840 steps in risk perception and the same number of steps when coping appraisal mls were activated. that is, only leaders activated their bn1 and bn2. this is only 22% compared to what it would be if agents decided individually. without voting, only one agent per group assessed the situation and made decisions. this made m5 and m6 time-efficient. on the opposite end, m4 recorded the highest computational time because of the intensive calculations required in the individual agents' network and the decentralized group network. among all models, m4 recorded the longest process time. agents individually perceived risk (bn1) before going back to their groups to negotiate a final decision on risk perception and then repeating the same individual-group sequence for the coping appraisal. in models m5, m6, and m8, only one agent per group (a total of 76 leaders) assessed risk perception daily, leading to 6,840 steps over the 90-day epidemic. in m5 and m6 only the 76 leaders also went for coping appraisal, while in m8 the group members individually assessed the coping appraisal (26,370 steps in m8 vs 6,840 steps in m5 and m6).
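the bn1 step counts quoted above follow from simple arithmetic: 15% of the 8,500 household agents appraise risk each day for 90 days in the individual models, against one leader per each of the 76 groups in the centralized models:

```python
households = 8_500
daily_pct = 15   # percent of households appraising risk daily [35]
days = 90
groups = 76      # one leader per group in the centralized models

individual_bn1_calls = households * daily_pct // 100 * days
leader_bn1_calls = groups * days

print(individual_bn1_calls)  # matches the 114,750 quoted in the text
print(leader_bn1_calls)      # matches the 6,840 quoted in the text
```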
calibration of the original model was conducted in two steps: first the hydrological submodel was calibrated, followed by a calibration of the complete model [35]. after the calibration, a stability check was performed [35]. for the current work, the objective of this epidemic model is not to reproduce the real data. it focuses on the impact of social interactions (present or not, at the level of individuals or groups) on both the risk perception and the coping appraisal of the individual agent. to calibrate a scenario further, one would need risk perception data for that area for the duration of the epidemic. however, such data are very scarce, not only for kumasi but worldwide. hence, risk perception was randomized at initialization. therefore, the eight models cannot be calibrated individually because they need to be comparable at initialization. s2 appendix shows the statistical analysis that was performed on the output data of the eight models to show and analyze the distribution of the obtained results. when household agents evaluated the risks of getting cholera and made coping decisions individually (m1), they relied only on their own experience. that is, each had individual bn1 and bn2 and did not communicate with neighbors. scenario m2 extends this stylized isolated benchmark case by assuming that while agents continued to make decisions individually, they did share information with neighbors about the perception of risk and protective behavior. that is, both bn1 and bn2 included neighbors' experiences among the information input nodes. fig 5 shows the epidemic curves and the dynamics of risk perception for all scenarios. in the absence of social interactions, more agents became infected with cholera. the peak of the epidemic curve in m1 (in-i) is higher than in m2 (in-n), leading to 11% more cases of disease (fig 5 and table 3).
overlaying the risk perception and epidemic curves suggests that when agents made decisions in isolation (m1: in-i), the dynamics of risk perception were hardly realistic (fig 5a) . namely, when the epidemic was at its peak, household agents in m1 responded very slowly, with bn1 delivering a wrong evaluation of risk perception (fig 5a) . they became aware of the risks very late, so that when the epidemic vanished, the number of agents with risk perception = 1 kept increasing. in the absence of communication and experience sharing among peers (in-i), information about the disease spread slowly and there was a significant time lag between the occurrence of the disease and people's awareness. the small stepwise increase around day 21 occurred because the media started to broadcast information about the epidemic on that day. in m2, household agents behaved according to the expected pattern: risk perception was amplified by media coverage and social interactions and then vanished as disease cases became rare (fig 5b) . only those who experienced cholera infection in their households remained alert. after day 21, household agents in m2 responded more to the media's news than isolated agents did. media reinforced the agents' social interactions with their neighbors, which led to more agents perceiving risk, especially when the number of infected cases reached its peak (fig 5b) . even in m2, there were limitations to making decisions about risk perception individually: risk perception fell too quickly, implying that people stopped worrying about the epidemic although it continued. since household agents in m1 did not have interactions with other agents, running this model required less time than m2 (a 10% gain in performance, table 3 ). the interaction between household agents required time to process the information exchanged between agents. 
in addition, (m1: in-i) and (m2: in-n) were approximately the same in terms of the realistic spatial distribution of infected cases over the communities, with values of 0.65 and 0.66, respectively (table 3) . fig 6 presents the spatial distribution of decision types over the study area in both m1 (in-i) and m2 (in-n). household agents with isolated learning were not aware of the cholera-infected cases in their neighbors' households. household agents in m1 made unsafe decisions and more often trusted using the water fetched from the river as it was (d1 in fig 6a) . household agents in m2 were more rational and mostly boiled the water that they fetched from the river (d3 in fig 6b) . in decentralized learning, groups of household agents vote on risk perception and coping appraisal. the final decision of the group is the output of the majority vote, and all group members follow the final decision of the group. these groups represent a democratic system, which depends very much on the composition of the group. decentralized groups with a majority vote can lead to a negative perception of risk. besides, a coping appraisal that depends on a majority vote can lead to inappropriate decisions regarding protection from cholera. when individuals are engaged in social groups, their behaviors are no longer independent [46] . this increased the randomness of the decentralized learning models (m3 and m4): these two models had higher standard deviations in all measures ( table 3) . the qualitative patterns of the three scenarios (m3, m4, and m7) were the same regardless of the social interactions that added new information to the ml (fig 5) . for the development of the disease, the voting mechanisms seemed to overwrite individual judgments. the m3 scenario assumes that household agents were isolated when performing risk perception and coping appraisals. 
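the majority-vote mechanism described here can be sketched in a few lines; the vote lists are hypothetical:

```python
# sketch of the decentralized group decision: each member votes, and the
# majority's choice binds the whole group (decisions d1-d4 as in the text).
from collections import Counter

def group_decision(votes):
    """return the option with the most votes."""
    return Counter(votes).most_common(1)[0][0]

# three of five members prefer the unsafe option d1 (raw river water),
# so even the members who would boil (d3) or buy water (d4) must follow it
print(group_decision(["d1", "d1", "d3", "d1", "d4"]))  # d1
```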
in contrast, m4 and m7 allowed household agents to communicate with neighbors during the process of risk perception and before making a coping decision. as a result, m4 and m7 generated greater risk perception than m3 (fig 5c, 5d and 5e ). this suggests that social interactions still amplify both the awareness of risks and the diffusion of preventive actions. given approximately the same peak heights, the epidemic curves in the three majority-voting scenarios reported more infected cases than the other models. among the majority-vote models, m7 reported the fewest infected cases, since household agents relied on themselves rather than on their decentralized groups for the coping appraisal. overall, it seems that all three models (m3, m4, and m7) got the process of disease risk evaluation wrong. in those cases, risk perception grew slowly in the days when the epidemic was peaking (fig 5c, 5d and 5e) and did not react to the peak in any way, which is unrealistic. moreover, risk perception in the three models continued to grow when the epidemics were almost over. risk perception peaked when there was no longer a risk, i.e., in the last days of the simulation, as shown in table 3 . hence, group voting on risk perception operated with a major time lag: household agents ignored early signals of disease that occurred in just a few households, increased their awareness of risk only when most of them were already infected, and continued to be falsely alerted when the epidemic was over. in m3, the small stepwise increase in risk perception represents the response to the media and is similar to m1 (in-i) in its development (fig 5c) . the household agents in their decentralized groups did not have contact with neighbors; therefore, no cases were reported to them from their neighborhoods. as such, they were disconnected from what was happening around them. 
in m4 and m7, which included social interactions, the development of risk perception seems more responsive, especially after the activation of the media on day 21. nevertheless, the response time was still slow (fig 5d and 5e ). in these models, the group decisions depended very much on the composition of the group members' opinions, which varied from one another and drew on different information sources for the final decisions about risk perception (in both m4 and m7) and coping appraisal (in m4). thus, majority voting led to unsafe decisions. groups in these models were heterogeneous in that household agents had different levels of exposure from the group members with whom they voted. decentralized groups with isolated input information (m3) led household agents to vote to use the water fetched from the river (d1) most of the time (fig 7, map a) . because of their lack of communication with neighbors, household agents missed the opportunity to obtain information about infections in their neighborhoods, which explains the higher numbers of infected cases in the majority-vote models. social interactions in both m4 and m7 helped agents make better decisions, although following the majority still biased their choices. for instance, in the high-income communities of m4 (upper communities in maps b and c, fig 7) , household agents mostly used the river water as it was, even though they were rich enough to boil it before using it (d3) or to buy bottled water (d4). the opposite also occurred when a majority vote forced low-income households to buy bottled water, an expensive decision for them. the group voting on the coping appraisal in m4 might have made individual members uncomfortable, since they followed the decisions of their groups even though those decisions might not protect them. in reality, household agents sought a balance between preventive behavior and their capability to implement it. 
moreover, there is always the possibility of routinely changing one's mind based on daily updates of information regarding the epidemic and updates from neighbors. as in m4, the household agents in m7 relied on their decentralized groups for risk perception. this often led to risk ignorance (fig 5e) . however, since the agents in m7 decided on coping appraisals individually, more agents adopted d1 (fig 7c) . when they perceived risk during the last days of the epidemic, household agents at the middle-income level switched to boiling water or buying bottled water (d3, d4 in fig 7c) , while those at the low-income level walked to another water fetching point (d2). in centralized groups, one household agent is randomly selected to be the group leader. the leader is responsible for the risk perception and coping appraisal of the group, and group members copy the risk perception and disease-preventive decisions of their leader. it has been argued that group leaders may improve their group's performance if they model the responses to the situation the group faces [47] . in this article, we considered two types of leaders: a dictator making top-down decisions about risk perception and coping strategy (m5 and m6), and an opinion leader evaluating risk perception top-down but giving group members the freedom to pursue their own disease coping behavior (m8). the qualitative trends of all three models coincided with what is expected: peaks caused by amplification of risk perception followed by a gradual decrease when the epidemics plateau (fig 5f, 5g and 5h) . the centralized group learning on average represented the processes well, as the leader alerted the group members about the disease. however, since no real data are available on risk perception dynamics or the actual coping behaviors that people pursued during the epidemic, we cannot determine which of the models m5, m6, and m8 is the best. 
the following subsections compare the models with a dictator-leader (m5 and m6) to the one with an opinion leader (m8). a dictator-leader decides on behalf of his or her group regarding disease risk and coping strategies, and both decisions are adopted top-down. a dictator-leader learns either in isolation (m5) or in interaction with his or her neighbors (m6). isolated dictators in m5 overestimated disease risks (fig 5f) . for example, if such a leader had his or her own bad experience with cholera, he or she would keep warning the group. with social interactions (m6), there is less uncertainty in the process of updating the risk perception than in m5; for example, compare the risk perception assessments around the epidemic peak (fig 5g) . fig 8 illustrates the impact of social interactions on the dictator's decisions regarding the coping appraisal. isolated leaders guided their groups to various types of decisions (fig 8a) , which were sometimes less safe decisions (e.g., d1). with social interactions, leaders relied on their neighbors and decided more often to walk to a point along the river where the water was cleaner (d2). very few dictators directed their groups to boil the fetched water (d3) or buy bottled water (d4) (fig 8b) . this shows how centralized decision making undermines heterogeneity in individual circumstances, such as disease exposure or coping capacity. in m8, the leaders of the centralized groups were responsible for evaluating disease risks for their groups, but they interacted with neighbors during the risk perception process. for the coping appraisal, the group members made their own decisions, using the information from their social networks. as a result of this combination of rapid centralized alertness about risk and individual coping strategies, m8 generated the fewest infections. the shape of the epidemic curve (except for its height) is very close to the empirical data of 2005 (fig 5h) . 
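the two leadership styles can be contrasted in a short sketch; the income-based coping rule stands in for bn2 and is purely hypothetical:

```python
# sketch: dictator-leader (m5/m6) vs opinion leader (m8). with a dictator,
# members copy both the leader's risk perception and coping choice; with an
# opinion leader, members copy the risk perception but choose coping
# individually. the coping rule keyed to income level is a hypothetical
# stand-in for the coping appraisal network (bn2).

def coping(income):
    """hypothetical individual coping appraisal keyed to affordability."""
    return {"low": "d2", "middle": "d3", "high": "d4"}[income]

def group_outcome(leader_risk, leader_income, member_incomes, dictator):
    if dictator:  # top-down: everyone adopts the leader's coping choice
        choice = coping(leader_income)
        return [(leader_risk, choice) for _ in member_incomes]
    # opinion leader: shared risk perception, heterogeneous coping
    return [(leader_risk, coping(inc)) for inc in member_incomes]

members = ["low", "middle", "high"]
print(group_outcome(1, "high", members, dictator=True))   # uniform d4
print(group_outcome(1, "high", members, dictator=False))  # d2, d3, d4
```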
as in m6, the uncertainty in the process of risk perception in m8 is lower than in m5 (fig 5h) . the risk perception curve that developed around the epidemic peak followed the dynamics of the epidemic (fig 5g) . when group members relied on social interaction to learn about the effectiveness of various coping strategies but eventually chose one themselves (m8), there was a diversity of coping strategies. fig 8c shows the spatial distribution of the different types of decisions during the simulation. more household agents went for d3 and d4, which were considered to be the most protective decisions. consequently, communities pursued at least three types of decisions, reflecting the disease coping diversity that is so important for resilience. the goal of this paper is to perform a systematic comparison of individual vs group learning. the methodological advancements showed that different implementations of individual and collective decision making in agents' behavior led to different model outcomes. in particular, the stepwise approach of testing how learning (on a gradient from individual learning, without any interactions, to collective learning, with social networks) affects an abm's dynamics is generic and can be used for other models. to illustrate the subtle differences in implementing learning in abms, we used the example of a spatial empirical abm of cholera diffusion with intelligent agents that employ ml to assess disease risk and decide on protective strategies, which define the dynamics of the epidemic. interactive learning, which assumes that agents share information about risks and potential protective actions, outperformed isolated learning both for individuals and in groups. this underlines the fact that social learning in the decision-making process is very important in abms. while we used disease modeling as a case study, the results may be contingent on the endogenous dynamics of this particular cholera abm. 
notably, simulation results may differ for abms with other underlying dynamics. this calls for further scrutiny in testing and reporting cases of intelligent social and individual learning in other models. the results indicate that decentralized groups with majority votes are less successful than groups with leaders, whether dictators or opinion leaders. when evaluating current disease risks, majority voting appears to be the worst mechanism for group decisions, often arriving at a wrong decision because of time lags relative to the dynamics of objective disease risks. perceiving risk is a very personal decision-making process [48] . in contrast, when leaders develop a risk perception and propose it to the group, such groups perform better in terms of risk appraisal. moreover, opinion leaders are very effective in helping their group members stay alert about the disease while giving them the freedom to make coping decisions that accommodate heterogeneity in their socio-economic status and geographical locations. in contrast, dictator-leaders and majority votes that impose a decision that all group members must follow are less effective in reducing the incidence of disease. in our simulation experiments, the structure of the groups is simple and is formed based on the spatial and socio-demographic characteristics of the agents. since grouping seems to have an impact on the spatio-temporal diffusion of the disease, a careful evaluation of the social structures in the case study area should be conducted for this type of model in order to generate trustworthy results. future research should focus on constructing groups based on different variables (family ties, religion, tribes). also, in our abm the leaders had no particular knowledge but were randomly selected and assigned to groups. in reality, this may not be the case: leaders may have access to better information or may already have earned the group's trust and respect. 
in addition, decentralized groups could be improved by giving greater weight to more trusted partners so that wiser decisions are made. the model's performance can be a strong argument when the number of agents is massive, e.g., when simulating a pandemic or epidemics within a very large population is needed to detect a worldwide diffusion mechanism. in that case, social group learning as described in model m5 is a very good alternative to individual interactive behavior. moreover, m5 shortens the computation time by 73% while maintaining a good-quality model output. the number of contacts each household agent has during collective learning may affect the diffusion of cholera; running a fat-tailed distribution of the number of contacts would be an interesting topic for future study. different considerations steer the ultimate decision on which type of social behavior to use. besides the technical model performance metrics discussed here, the choice of a particular type of social behavior can also be based on the society that is being modeled. different political systems, the presence of tribes, and different ethnic groups or religious leaders require careful consideration of the social interactions in a model. one should make sure that the modeled social learning represents the cultural and social norms of the society being modeled. in this article, it was not possible to define which implementation (m1-m8) represented the situation in kumasi most closely. to validate the risk perception behavior, one would need risk perception data for that area for the duration of the epidemic. however, such data are very scarce, not only for kumasi but worldwide. as we illustrated in this study, many different implementations of social behavior using ml are technically possible, but data are needed to validate the alternative implementations. 
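the suggested fat-tailed contact structure could be prototyped by sampling each agent's daily contacts from a pareto distribution; the shape parameter below is an arbitrary illustrative choice, not a value from the model:

```python
# sketch of a fat-tailed contact distribution for household agents: most
# agents have few daily contacts, a few have very many. the pareto shape
# parameter (alpha=2.0) and the fixed seed are arbitrary illustrative choices.
import random

def sample_contacts(n_agents, alpha=2.0, seed=42):
    """draw a heavy-tailed number of daily contacts for each agent."""
    rng = random.Random(seed)
    return [max(1, int(rng.paretovariate(alpha))) for _ in range(n_agents)]

contacts = sample_contacts(1000)
# a handful of highly connected agents dominate the upper tail
print(max(contacts), sorted(contacts)[len(contacts) // 2])
```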
yet, research on risk perception during epidemics is often conducted too late (when the peak is over) or at a distance (not in the area where the disease spreads). hence, such research provides little empirical evidence of people's behavior and risk perception. more research on risk perception during epidemics, including other variables such as cultural aspects and group behavior, could be very helpful in generating a model that represents a specific society realistically. on a technical note, agent-based modeling software does not always include ml toolkits and libraries, which complicates the implementation of different types of social intelligence. hence, better integration of abm and ml in one software package or in linkable libraries could eliminate this problem in the future. finally, an important direction for future research is to implement other ml techniques besides bns, such as decision trees and genetic algorithms. in addition, modeling groups with different ml algorithms may lead to different results, since the groups would be heterogeneous in terms of the members' learning algorithms. several developments in health research drew our attention to the implementation of learning in disease models. one is the impact of fake news on the behavior of people. the other is the fact that human behavior toward vaccination can change radically based on (fake) news about it. therefore, including these factors and testing their impact on the behavior of agents may lead to more conclusions for policymakers to consider in their efforts to control epidemics.
managing epidemics: key facts about major deadly diseases. 
world health organization.
learning from each other: where health promotion meets infectious diseases.
modeling infection spread and behavioral change using spatial games.
severe acute respiratory syndrome epidemic and change of people's health behavior in china.
the role of risk perception in reducing cholera vulnerability.
towards a characterization of behavior-disease models.
dynamical interplay between awareness and epidemic spreading in multiplex networks.
epidemic spreading and risk perception in multiplex networks: a self-organized percolation method.
spreading processes in multilayer networks.
interacting spreading processes in multilayer networks.
epidemic and intervention modelling: a scientific rationale for policy decisions? lessons from the 2009 influenza pandemic.
"wrong, but useful": negotiating uncertainty in infectious disease modelling.
models for policy-making in sustainable development: the state of the art and perspectives for research.
using data-driven agent-based models for forecasting emerging infectious diseases.
out of the net: an agent-based model to study human movements influence on local-scale malaria transmission.
modelling the transmission and control strategies of varicella among school children in shenzhen.
a taxonomy for agent-based models in human infectious disease epidemiology.
an open-data-driven agent-based model to simulate infectious disease outbreaks.
modeling human decisions in coupled human and natural systems: review of agent-based models.
spatial agent-based models for socio-ecological systems: challenges and prospects.
agent-based models.
global sensitivity/uncertainty analysis for agent-based models.
intelligent judgements over health risks in a spatial agent-based model.
a multi-agent based approach for simulating the impact of human behaviours on air pollution.
simulating exposure-related behaviors using agent-based models embedded with needs-based artificial intelligence.
agent-based modeling in supply chain management: a genetic algorithm and fuzzy logic approach.
evidence for a relation between executive function and pretense representation in preschool children.
scalable learning of collective behavior based on sparse social dimensions.
the impact of imitation on vaccination behavior in social contact networks.
agent based simulation of group emotions evolution and strategy intervention in extreme events.
modelling collective decision making in groups and crowds: integrating social contagion and interacting emotions, beliefs and intentions.
learning in multi-agent systems.
simulate this! an introduction to agent-based models and their power to improve your research practice.
do groups matter? an agent-based modeling approach to pedestrian egress.
agent-based modelling of cholera.
bayesian networks for spatial learning: a workflow on using limited survey data for intelligent learning in spatial agent-based models. geoinformatica.
the global burden of cholera.
a comprehensive review of the applications of protection motivation theory in health related behaviors.
seasonality and period-doubling bifurcations in an epidemic model.
social contact structures and time use patterns in the manicaland province of zimbabwe.
cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation. social psychophysiology: a sourcebook.
artificial intelligence: a modern approach. third edition.
integrating spatial intelligence for risk perception in an agent based disease model.
resilience management during large-scale epidemic outbreaks.
susceptibility to vibrio cholerae infection in a cohort of household contacts of patients with cholera in
behavioral modeling and simulation.
risk perception and human behaviors in epidemics.
risk perception: it's personal. environmental health perspectives. national institute of environmental health sciences.
key: cord-016982-qt25tp6t authors: fong, i. w. 
title: litigations for unexpected adverse events date: 2010-11-30 journal: medico-legal issues in infectious diseases doi: 10.1007/978-1-4419-8053-3_8 sha: doc_id: 16982 cord_uid: qt25tp6t a 53-year-old iranian female who had immigrated to canada about 3.5 years earlier was referred to an internist for a positive mantoux skin test (11 mm in diameter). the subject was previously well, with no symptoms indicative of active tuberculosis. a routine tuberculosis skin test had been performed because the patient had applied to be a volunteer at a local hospital. she had no significant past illness or known allergies, and she had never been diagnosed with, nor had known contact with anyone with, active tuberculosis. the subject never ingested alcohol and was not known to have hepatitis or to be a carrier of any hepatitis virus. baseline investigations performed by the internist included a routine complete blood count, routine biochemical tests (liver enzymes, creatinine, and glucose), serum ferritin, and thyroid-stimulating hormone, all of which were normal. a chest radiograph was reported to be normal. creatinine, glucose, and electrolytes remained normal, but the bilirubin had risen to 219 µmol/l, the sgot was 978 u/l, the serum alanine aminotransferase (alt) was 641 u/l (normal 10-45 u/l), the alp was 300 u/l, and the prothrombin time was 2.2 s. a repeat ultrasonography of the abdomen revealed large ascites and a liver of 13 cm in length with normal contour. over the next 2 weeks, she became drowsy and encephalopathic, and was transferred to a tertiary care hospital where a liver transplantation was successfully performed (live donor: the patient's daughter). pathology of the liver showed a markedly shrunken liver with signs of fulminant hepatitis, with negative stains for hepatitis b antigens. a lawsuit was subsequently launched by the patient (plaintiff) against the physician who prescribed the isoniazid. 
the statement of claim alleged the following: (1) the isoniazid was directly responsible for the plaintiff's fulminant hepatitis, which resulted in the need for a liver transplant; (2) informed consent was never obtained to prescribe the drug, as the plaintiff was never counseled on the adverse effects nor given a choice of treatment; (3) use of the isoniazid was never indicated, as the patient had no symptoms or signs of active disease; and (4) the physician should have realized that the positive mantoux test was due to a previous bcg vaccination as a child (the defendant was informed of this fact) and that there was therefore no need to treat the plaintiff for latent tuberculosis. based on the above facts, the statement of claim held that the internist was negligent in prescribing isoniazid and that he should have monitored her liver enzymes after initiation of treatment. the lawyer for the plaintiff further stipulated that had his client never been treated unnecessarily for latent tuberculosis, she would not have suffered from fulminant hepatitis or required a liver transplant. hence, the treating physician provided substandard care, and compensation was sought for the pain and suffering of the plaintiff, as well as of the daughter, who underwent partial hepatectomy for liver donation. the case under discussion does not fall into the high-risk category for treatment of latent tuberculosis, but it may be considered intermediate risk on cursory assessment. although employees and staff of healthcare facilities, especially those involved in direct patient contact, should be offered treatment of latent tuberculosis, there is no such stipulation for volunteers in hospitals. most healthcare facilities screen volunteers for active tuberculosis by mantoux skin test, with a chest radiograph for those with a positive reaction. 
another category under which the subject could be considered for treatment of latent tuberculosis includes persons from highly endemic countries within 5 years of immigration with a positive mantoux test (10 mm), irrespective of previous bcg vaccination. this group of people represents one of the largest segments of newly diagnosed patients with active tuberculosis in north america and europe. 2, 3 in 2006, 57% of all tuberculosis cases in the united states were among foreign-born persons, 4 and in several european countries >50% of tuberculosis cases occur among foreign-born people. 5 there are 22 countries with a high burden of tuberculosis (tb) that account for 80% of the tb cases globally. 6 these countries are located predominantly in asia (the south east asia and western pacific regions), africa, brazil (south america), the russian federation (eastern europe), and afghanistan (middle east). the estimated number of new tb cases (all forms) per 100,000 people per year in iran is 22, which falls in the low-risk category (0-24), as is the case in north america and western europe. 6 the incidence and prevalence of tb in the middle east vary from country to country, and iran actually falls into the relatively lower-risk group. the indication for treatment of latent tb in this case is therefore borderline or very debatable, but most physicians (including internists) may not be aware of this fact. the treatment of choice for latent tb is now standardized to a 9-month course of isoniazid (inh) 300 mg once daily for adults, with or without pyridoxine (vitamin b6) to prevent peripheral neuritis. this is believed to be about 90% effective in preventing future reactivation of tb, but it does not prevent re-infection (with a new strain), which is a risk mainly in highly endemic countries. the main worrisome adverse effect of inh is clinical hepatitis, which can be fatal or lead to fulminant hepatitis that requires liver transplantation. 
there are two types of hepatic toxicity seen with inh: a common transient elevation of the transaminases, seen in 10-30% of patients, that occurs within 4-6 months and is benign and asymptomatic; and clinical (symptomatic) hepatitis, which is much less common, is age-related, and occurs in only about 1% of treated patients. clinical hepatitis with inh is rare under 20 years of age, increases to about 2-2.3% above 50 years, and in persons >65 years the risk increases to about 4.5%. 1 about 50% of inh hepatitis occurs in the first few months of treatment and the remainder occurs later, up to 12 months (if the patient is still on inh). 7 the prognosis of overt inh hepatitis is usually very good if the drug is discontinued promptly at the first sign of clinical hepatitis. the overall mortality is about 10%, or 4.2 per 100,000 patients treated with inh. 7 middle-aged black women seem to have the worst prognosis from this complication. in the majority of patients, there is clinical and biochemical resolution of signs and laboratory abnormalities within 1-2 months of stopping the drug. occasionally, patients can present with or develop a sub-acute, more protracted course that mimics chronic viral hepatitis and leads to cirrhosis. 7 the pathogenesis of inh hepatotoxicity was initially considered to be an idiosyncratic reaction, but there is increasing evidence that it is a direct toxic effect of metabolite(s). there appears to be a higher risk and greater severity with higher doses, and a higher incidence in slow acetylators. 8, 9 animal experiments show that inh metabolism leads to acetyl hydrazine, which after oxidation forms toxic intermediates. these are thought to produce damaging effects by acetylating or alkylating macromolecules within liver cells, but the exact mechanism of liver cell injury is unknown. 7 in slow acetylators, acetyl hydrazine accumulates and predisposes to hepatotoxicity. another metabolic pathway involves hydrolysis of inh to hydrazine and isonicotinic acid. 
hydrazine is known to be directly hepatotoxic, and hydrolysis of inh is increased by alcohol and rifampin. 9 the mechanism of age-related hepatotoxicity is unclear, but it could possibly be related to the slowing of acetylation with advancing age. most guidelines and recommendations on latent tb strongly discourage treatment with inh in patients with active liver disease. close clinical and biochemical monitoring for liver toxicity is mainly recommended for subjects at high risk for clinical hepatitis, such as older people (>65 years), those with a history of liver disease, chronic carriers of hepatitis b and c, alcohol abusers, concomitant users of other hepatotoxic drugs, and subjects who suffer from malnutrition or aids. current textbooks of medicine do not recommend routine biochemical monitoring for healthy adults being treated with inh. 10 in these circumstances, baseline liver tests are performed, and patients should be counseled on the symptoms of clinical side effects and monitored clinically. some experts and the manufacturer recommend biochemical monitoring for persons >35 years old and pregnant women (and those within 3 months post-partum), monthly for 3 months and at 1-3 month intervals thereafter. 1, 11 inh should be discontinued promptly at the first sign of clinical hepatitis. symptoms of hepatitis may include fatigue, weakness or fever for >3 days, malaise, unexplained anorexia, right upper quadrant pain or discomfort, and jaundice. if the alt is 3-5 times the upper limit of normal, the drug should be discontinued, even if the patient is asymptomatic. restarting inh at a small dose in asymptomatic patients has been recommended by some experts. 
it is of interest to note that the american thoracic society, the british thoracic society, and the task force of the european respiratory society only recommend regular biochemical monitoring of liver function on multidrug treatment for tb in patients with chronic liver disease or increased serum transaminases prior to treatment. 12 in the case of symptoms of hepatotoxicity, liver function should be examined. this may be based on the fact that there is no good evidence that routine monitoring of liver function decreases the chance of fulminant hepatitis or fatality, and prompt discontinuation of medications at the first onset of symptoms usually results in full recovery in those with clinical hepatitis. the defendants' lawyer raised a critical question: is it absolutely certain that the fulminant hepatitis suffered by the patient was due to isoniazid? with any serious adverse event, making an assessment requires several steps and investigations to reach a valid conclusion. this involves a process of deduction and exclusion of other etiologies (such as hepatitis viruses) and other agents, and use of bayes' theorem to assess overall probability (definite, probable, or possible), as well as posterior and prior probability (based on known literature reports). other considerations include the temporal relationship with use of the medication, compatibility of clinical features and laboratory data, histopathology data and previous reports, and reproduction of the event by re-challenge with the putative agent. although re-challenge is the most definitive method of proving cause and effect, it is the least used because of the potential risk of harm to the patient and the ethical and moral issues. the temporal relationship, clinical features, laboratory data, and histology of the liver are all compatible with inh-induced hepatitis. moreover, the investigation excluded well-known causes of viral hepatitis.
the patient was also receiving diclofenac, which was started 5 weeks before the clinical diagnosis of hepatitis and 2-3 weeks before the onset of symptoms. thus, there is a temporal relationship between diclofenac treatment and the onset of clinical hepatitis. nsaids in general are known, but rare, causes of drug-induced hepatitis. 7 the incidence of diclofenac-induced clinical hepatitis is about 1-5 per 100,000 users, and the incubation period varies from 3 to 12 weeks (consistent with the present case). 7 data from the diclofenac monograph (novartis pharmaceuticals) indicate that there is a higher incidence of moderate to severe (3-8 times the upper limit of normal) and marked (>8 times normal) elevation of transaminases when compared with other nsaids. in addition, rare severe hepatic reactions, including liver necrosis, jaundice, and fulminant hepatitis (fatal or requiring liver transplantation), have been reported with diclofenac. to date, there is no evidence of enhanced risk or severity of clinical hepatitis in patients receiving both inh and diclofenac or other nsaids. elderly women are more susceptible to nsaid-induced hepatitis. histopathology of the liver usually reveals zone 3 or spotty acute hepatocellular necrosis, but there can be granulomas, cholestasis, hepatic eosinophilia, and even chronic active hepatitis with overuse of nsaids. 7 the prognosis is usually very good after withdrawal of nsaids. treatment for latent tb in the case under discussion was not indicated, but the circumstances could be interpreted as representing a borderline indication to use inh. however, the patient should have been offered the choice of no treatment versus therapy for latent tb. the risk versus benefit should have been discussed and the potential side effects explained to the patient.
the patient should have been counseled to discontinue the medication at the first symptoms suggestive of clinical hepatitis. monitoring for liver disturbance by biochemical tests is not routinely recommended for patients at low risk for clinical hepatitis, and the physician should not be held responsible for failure to order these tests. clinical monitoring, however, is standard, and the physician can be held responsible either for failure to recognize the manifestations of hepatitis or for failure to promptly withdraw all drugs once these signs appear. it cannot be concluded that inh was irrefutably culpable for the fulminant hepatitis, but based on the relative risk and incidence, it was more likely the cause than diclofenac. in any case, both drugs should have been discontinued immediately at the first signs of clinical hepatitis. for 2 years, a 35-year-old male had suffered from recurrent bouts of nasal congestion, nasal discharge, and post-nasal drip, with only partial, temporary relief from decongestants, antihistamines, and topical corticosteroids. his fp referred him to an internist and clinical allergist for further management. his past history was negative for any significant medical illness, but the patient had had previous surgery for nasal septal deviation and had stopped smoking 2 years before. examination by the allergist revealed inflamed, edematous nasal mucosa with some purulent discharge, and a radiograph of the sinuses demonstrated mucosal thickening of both maxillary antra. based on these findings, the consultant made a diagnosis of chronic rhino-sinusitis with an allergic and infectious component. the consultant prescribed intranasal corticosteroids and a 2-week course of trimethoprim-sulfamethoxazole (tmp-smx). the patient reported that he had been treated by his fp 2 months before with triple sulfonamide antibiotics (trisulphamine) for 7 days without any side effects. he had no known drug allergies before this visit.
towards the end of the 2-week course of tmp-smx, the patient developed malaise, low-grade fever, and a body rash that started on the face and trunk. this rash rapidly progressed over the next 48 h to involve his limbs, mouth, and eyes, with blistering of the skin. he was admitted to the emergency department of a hospital with a diagnosis of sulfonamide-induced toxic epidermal necrolysis (ten). further care was provided in the burn unit. as a consequence of this adverse reaction, the patient developed bilateral corneal ulcerations requiring repeated corneal transplants. despite this, he remained blind in the left eye and had severe visual impairment on the right side. medico-legal action was launched by the patient's lawyer, claiming medical malpractice against the allergist for failing to warn the patient of the potential adverse effects of tmp-smx. moreover, the plaintiff claimed that antibiotics were never needed in the first place, and that if he had known of these potential side effects, he would not have agreed to be treated with the tmp-smx. the defense retorted that the adverse reaction suffered by the patient was extremely rare, and that the patient had previously been treated with sulfonamides without any reaction. they claimed this reaction could not have been predicted and that it was not standard medical practice for physicians to list all the rare side effects of licensed drugs on the market. the first relevant issue in this case is the following question: should any antibiotic have been prescribed? if antibiotics were indicated, was the choice of tmp-smx appropriate? current consensus is that antibiotics are overused and prescribed unnecessarily for sinus disease. sinusitis is commonly due to respiratory viruses and allergic reactions (as in hay fever), and antibiotics are of no value in these situations. the presence of purulent nasal discharge can be seen in the above conditions, but is not diagnostic or indicative of bacterial sinusitis.
13 radiographs of sinuses showing thickened mucosa or fluid in the chambers are non-specific and not diagnostic of bacterial sinusitis, as these changes can also be seen in viral infection and allergic sinusitis. the etiology of chronic sinusitis is complex, and there is a lack of consensus on the pathogenesis. multiple factors may predispose to chronic sinusitis, and allergy appears to play a prominent role, with or without polyps. 13 other factors include structural abnormalities (outflow obstruction, retention cysts, etc.) and irritants such as smoking. chronic sinusitis is usually defined as having symptoms of sinus inflammation lasting longer than 12 weeks, with documented inflammation (by imaging techniques) at least 4 weeks after appropriate therapy with no intervening acute infection. 14 computerized tomography (ct) is the preferred imaging technique to identify any obstruction and polyps. although antibiotics are commonly used in chronic sinusitis, their benefits have not been established by randomized trials, and the role of bacterial superinfection has not been well defined. 13 the best microbiological data from patients with chronic sinusitis have found that aerobic (52.2%) and anaerobic (47.8%) pathogens are common in these cases. 15 the most common aerobes were streptococcus species and haemophilus influenzae (nontypable strains), and the most common anaerobes were prevotella species, anaerobic streptococci, and fusobacterium species. management of chronic sinusitis is challenging and involves combined medical and surgical therapy. for surgical cases where there is good clinical and imaging evidence of chronic bacterial sinusitis, empiric antibiotics should be effective against streptococci, h. influenzae, and anaerobes. amoxicillin-clavulanate would be a suitable choice, and for β-lactam-allergic patients, a new fluoroquinolone with anaerobic activity (moxifloxacin) would be an acceptable alternative.
13 failure to respond usually indicates the need for surgery, which can be performed by endoscopy, and in these cases antibiotic treatment should be guided by sinus culture (obtained by puncture or endoscopy). although antimicrobials are commonly used for extended periods (3-4 weeks) for acute superinfection or exacerbation, no studies have addressed the issue of duration of therapy. although the case under discussion may not meet the diagnostic criteria for chronic bacterial sinusitis, making this diagnosis and instituting antibiotic therapy (although a judgment error) should not be considered gross negligence or substandard care meriting malpractice litigation. the choice of antibiotic, however, would not have been suitable even if the diagnosis of chronic sinusitis were correct. for acute bacterial sinusitis, amoxicillin/ampicillin is considered the drug of choice, and tmp-smx is recommended as an alternative agent for subjects allergic to penicillin. what counseling should patients receive when prescribing an antibiotic, and specifically tmp-smx? most physicians do not spend time informing their patients about the adverse effects of prescribed medications. on the other hand, most pharmacists do provide written information on new prescriptions. physicians cannot depend on this, though, nor rely on this service as a defense in a court of law. in most situations, physicians may counsel patients on drugs with a known high risk of toxicity or side effects. for frequently prescribed medications (such as most oral antibiotics), counseling often is neglected, or only the common adverse effects are mentioned. the incidence of uncomplicated skin reactions (allergic skin rash) to tmp-smx (mainly due to the sulfonamide component) in the general population is about 1-4% of recipients. 16 this consists mainly of toxic erythema and a maculopapular eruption, and infrequently urticaria, erythema nodosum, and fixed drug eruption.
16 severe skin reactions in tmp-smx recipients are rare and include stevens-johnson syndrome (sjs), toxic epidermal necrolysis (ten), exfoliative dermatitis, and necrotizing cutaneous vasculitis. previous estimates of severe skin reactions were 1 in 100,000 recipients. 16 patients with hiv infection have a much higher incidence of cutaneous reactions to tmp-smx (especially those with aids). epidermal necrolysis (en) is a rare and life-threatening reaction, mainly drug induced, which encompasses sjs and ten. these two conditions represent severity variants of an identical process and differ only in the percentage of body surface involved. 17 the incidences of sjs and ten are estimated at 1.6 per million person-years and 0.4-1.2 cases per million person-years, respectively. 17 although en can occur at any age, it increases in prevalence after the fourth decade and is more frequent in women. there is some evidence that the risk of en increases with hiv infection, collagen vascular disorders, and cancers. the clinical features of en are characterized by skin and mucous membrane involvement. initially, the skin reaction begins with macules (mainly localized to the trunk, face, and proximal limbs), and then progresses to involve the rest of the body, becoming confluent with flaccid blisters leading to epidermal detachment. 17 patients may become systemically ill with fever, dehydration, hypovolemia, secondary bacterial infection, esophageal and pulmonary involvement, and complications and death from sepsis. the pathogenesis of en is not completely understood, but studies indicate a cell-mediated cytotoxic reaction against keratinocytes leading to massive apoptosis. early in the process, there is a predominance of cd8 killer t lymphocytes in the epidermis and dermis of bullous lesions, and monocytes develop later. cytotoxic cd8 t cells expressing α/β t-cell receptors are able to kill cells through production of perforin and granzyme b.
drugs are the most important cause of en, and >100 different drugs have been implicated. cd8 oligoclonal expansion corresponds to a drug-specific, major histocompatibility complex (mhc)-restricted cytotoxicity against keratinocytes. 17 pro-inflammatory cytokines il-6, tnf-α, and fas ligand are also present in skin lesions. genetic susceptibility appears to be important, and there is a strong association in han chinese between the hla-b*1502 leucocyte antigen and sjs induced by carbamazepine, and between the hla-b*5801 antigen and sjs induced by allopurinol. 17 high-risk drugs (about 12) from six different classes account for 50% of en reactions. these include allopurinol, sulfonamides, anticonvulsants (carbamazepine, phenobarbital, lamotrigine), nevirapine (a non-nucleoside analog), oxicam nsaids, and thiacetazone. 18 the incubation period for en ranges from 4 to 30 days, but most cases occur within 8 weeks of starting the medication. rare cases can appear within hours of use, or on the same day in those with a prior reaction. early, non-specific symptoms (fever, headache, rhinitis, myalgias) may precede mucocutaneous lesions by 1-3 days. some patients may also present with pain on swallowing or stinging of the eyes. about one third of patients begin with non-specific symptoms, another third with primary mucous membrane involvement, and the rest present with an exanthema. 17 progression from a localized area to full body involvement can vary from hours to days. the classification of en depends on the area of detachable epidermis, identified by a positive nikolsky sign (dislodgement of epidermis by lateral pressure) and flaccid blisters. the diagnosis of sjs is made when there is less than 10% body surface area (bsa) involvement; sjs/ten overlap with 10-30% bsa; and ten with >30% bsa involvement.
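the bsa-based classification just described can be expressed as a trivial classifier. the function name and return labels are illustrative assumptions; the thresholds are those quoted in the text (<10% sjs, 10-30% sjs/ten overlap, >30% ten).

```python
def classify_en(bsa_detached_pct):
    """Classify epidermal necrolysis by % of body surface area (BSA)
    with detachable epidermis, using the thresholds quoted in the text."""
    if bsa_detached_pct < 10:
        return "SJS"
    if bsa_detached_pct <= 30:
        return "SJS/TEN overlap"
    return "TEN"
```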
17 in severe cases of en, the mucous membranes (buccal, ocular, genital) are involved in about 90%, and 85% have conjunctival affliction, consisting mainly of hyperemia, erosions, chemosis, photophobia, and excessive lacrimation. severe forms of eye involvement can result in shedding of eyelashes, corneal ulceration (as in case 2), anterior uveitis, and purulent conjunctivitis. 17 extra-cutaneous complications, mainly seen in severe ten, may include pulmonary disease (25%) with hypoxia, hemoptysis, bronchial mucosal casts, interstitial changes, and acute respiratory distress syndrome (ards), which carries a poor prognosis. gastrointestinal tract involvement is less common, but can include esophageal necrosis, small bowel disease with malabsorption, and colonic disease (diffuse diarrhea and bleeding). renal involvement is mainly proteinuria and hematuria, but proximal renal tubular damage can sometimes cause renal failure. late ophthalmic complications occur in about 20-75% and consist of abnormal lacrimation with dry eyes, trichiasis (ingrowing eyelashes), entropion (inversion of the eyelid), and visual impairment or blindness from scarring of the cornea. the prognosis of en varies with the severity of illness and prompt withdrawal of the offending agent. the overall mortality of en is 20-25%, but it is lower for sjs, at 5-12%, and higher for ten, at >30%. development of a prognostic scoring system (scorten) for ten 19 has recently been found useful, but the performance of the score in prediction is best on day 3 of hospitalization. 20 the prognostic factors, each given one point, include the following: age >40 years, heart rate >120/min, cancer or hematologic malignancy, bsa involved >10%, serum bicarbonate <20 mmol/l, and serum glucose >14 mmol/l.
the mortality rate in ten increases with the accumulation of points as follows: 0-1 point carries a mortality rate of 3.2%, 2 points a mortality rate of 12.1%, 3 points a mortality rate of 35.8%, 4 points a mortality rate of 58.3%, and ≥5 points a nearly uniform mortality of 90%. 19 management of en or ten consists of prompt removal of the offending agent and symptomatic therapy. patients with a scorten of 0-1 can be managed on the regular medical wards, whereas those with ≥2 points should be transferred to a burn center or intensive care unit (icu). 17 it is most important to maintain hemodynamic support with adequate fluids and electrolyte balance. central venous lines should be avoided because the risk of superinfection is high, so peripheral intravenous access should be used. moreover, the rash and blistering are greatest proximally. nutritional support should be maintained orally or by nasogastric tube, use of prophylactic heparin is warranted, and an air-fluidized mattress is preferable. unlike in severe burns, extensive and aggressive debridement of necrotic epidermis is not recommended. 17 there is no indication for prophylactic antibiotics, but patients should be monitored diligently for infection and treated promptly when it is present. there is no standard protocol for skin dressing, and antiseptics are used depending on the individual center's experience. eye care should consist of a daily examination, artificial tears, and antiseptic and vitamin a drops every 2 h. a regular mouth rinse with antiseptic solution several times a day is recommended. there is no proven specific therapy for any form of en. steroids were initially considered for sjs, but their value is unproven and controversial, and they are not routinely recommended.
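the scorten factors and mortality figures quoted above can be sketched as follows. this is a hypothetical helper for illustration only: it uses exactly the six one-point factors listed in the text (the published score also includes serum urea, which the text does not list), and the mortality lookup reproduces the percentages quoted in the text.

```python
def scorten_points(age, heart_rate, malignancy, bsa_pct, bicarbonate, glucose):
    """Sum one point per prognostic factor listed in the text.
    bicarbonate and glucose are in mmol/L."""
    return sum([
        age > 40,
        heart_rate > 120,
        bool(malignancy),   # cancer or hematologic malignancy
        bsa_pct > 10,
        bicarbonate < 20,
        glucose > 14,
    ])

# approximate mortality (%) by point total, as quoted in the text
_MORTALITY = {0: 3.2, 1: 3.2, 2: 12.1, 3: 35.8, 4: 58.3}

def predicted_mortality(points):
    """Return the quoted mortality percentage for a point total."""
    return _MORTALITY.get(points, 90.0)   # 5 or more points: ~90%
```

for example, a 50-year-old with a heart rate of 130/min, no malignancy, 15% bsa involvement, bicarbonate 18 mmol/l, and glucose 10 mmol/l scores 4 points, corresponding to a quoted mortality of 58.3%.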
intravenous immunoglobulin (ivig) is also very controversial; although initial retrospective studies suggested benefit, recent prospective, non-randomized studies have not confirmed any definite value, and some studies showed increased renal failure and mortality with ivig. 21 in one of the largest studies from a single center, ivig was assessed in a prospective noncomparative study of 34 patients with en, and 20 subjects with ten. there was no evidence of improvement in mortality, progression of detachment, or re-epidermalization. most deaths occurred in elderly patients with initially impaired renal function. thus, ivig is not recommended for en unless it is being assessed in a randomized clinical trial. the death rate with ivig was 32%, higher than the death rate (20%) in historical controls with ten not treated with ivig at the same center. 22 thus, ivig may be harmful in patients with en. one of the issues raised by the plaintiff was that he was not counseled on the potential severe side effects of the tmp-smx, and that if he had been aware of the risk, he would not have agreed to take it. is it the responsibility of physicians to explain all potential, albeit rare, adverse effects of any treatment? the courts may take into consideration the standard practice of the physician's peers, or what is considered accepted practice. most physicians (if they do counsel patients on medications) would mention the most common side effects, but would not usually mention rare adverse effects. for instance, it would be justifiable to mention that a drug rash can be seen with tmp-smx if the patient happens to be allergic to the drug (which should be discontinued as soon as this occurs). as physicians, we would not usually mention that there is a rare risk of shedding of the skin, blindness, or death.
similarly, when prescribing penicillin in patients not known to be allergic to the drug, we generally do not counsel that there is a 1:50,000 to 1:300,000 risk of dying from anaphylaxis (which is treatable). yet, if we were to order or prescribe chloramphenicol, it is expected that we should counsel the patient that there is a 1:50,000 to 1:300,000 risk of aplastic anemia, which is not treatable except by bone marrow transplantation. hence, it may be asked: what is the best method of informing patients about medication toxicity? it is not acceptable to leave it to pharmacists to provide literature on these drugs as the sole form of counseling. it is the prescriber's responsibility to obtain informed consent before ordering the medications. it may be the best policy for prescribers to list the most common side effects, then occasional severe adverse reactions, and mention the possibility of other rare, unforeseen adverse reactions (without specifying these latter reactions unless requested by the patient). the details of the counseling may vary with several factors, such as the relative safety profile (therapeutic to toxic ratio), enhanced risk factors for side effects (which may depend on underlying comorbidities or genetic predisposition), and the expected duration of treatment, as the longer an individual is exposed to a drug, the greater the potential for some side effects. the cmpa have provided some guidelines for risk management considerations in prescribing opioids 23 that are useful for all medication orders and may curtail medico-legal cases from drug adverse events. these medico-legal considerations are:
1. is there an appropriate indication for this drug?
2. is the starting dose and need for continuation appropriate?
3. have you considered the need for monitoring that would be reasonable for your patient?
4. have you considered the potential effect of any concomitant medication that might influence the dosing, monitoring, and side effects?
5. have you considered other factors such as comorbidity that might influence the dosing and monitoring?
6. are you prepared to diagnose and manage any adverse event?
7. have you counseled the patient on potential side effects, how to recognize early signs, and necessary actions?
8. when discharging patients, have you provided reasonable information about the risks of adverse reactions, precautions to be observed, and the person to notify?
patients who suffer from adverse effects may be willing to forgive a physician's failure to provide informed consent when the therapy is indicated. however, in situations where the treatment was not indicated, or of questionable value, any adverse event would likely be unacceptable to the plaintiff or the courts. a 38-year-old male with steroid-dependent crohn's colitis (diagnosed 6 years before) called his fp for advice regarding chickenpox, as his young son had recently been diagnosed with it at a daycare center. the patient was experiencing retrosternal and epigastric pain on swallowing. the fp prescribed omeprazole 20 mg once daily and ibuprofen over the phone, without seeing the patient. later in the night of the same day, the man presented to the emergency department of a local hospital. the er physician noted that the patient was chronically on methylprednisolone 8 mg once daily for crohn's disease, and that he had developed local pustules consistent with early varicella within the past 4 days. however, the main concern of the patient was severe retrosternal, mid-chest pain on swallowing, radiating through to his back, for 24 h. the recorded vital signs showed a temperature of 38.3°c, blood pressure of 155/110 mmhg, heart rate of 81/min, and respiratory rate of 20/min. the examination revealed scattered vesicles/pustules on the patient's face, soft palate, and pharynx. treatment on discharge consisted of liquid bupivacaine swish and swallow (a topical anesthetic), oxycodone-acetaminophen, and metoclopramide.
an electrocardiogram was normal, and the discharge diagnosis listed possible esophageal involvement with varicella. within 72 h, the subject returned to the same er with worsening symptoms and was seen by the same physician. the symptoms consisted of swelling of his face, fever, sweats, a productive cough with blood-streaked sputum, and persistent chest pain. examination revealed a very ill-looking male with a temperature of 39.5°c, heart rate of 169/min, blood pressure of 131/87 mmhg, and respiratory rate of 30/min. his face was swollen and edematous with closure of the right eye, extensive vesicles and pustules on the face and soft palate, edema and inflammation of the gingivae, and numerous skin lesions over the trunk and proximal limbs. oxygen saturation on room air was 92%, and the chest radiograph was reported as normal. investigations revealed anemia, thrombocytopenia, liver disturbance, and evidence of disseminated intravascular coagulopathy. intravenous acyclovir was started, and the patient was transferred to the icu of a tertiary care center, where he died within 38 h of the second presentation. autopsy revealed disseminated varicella with involvement of the brain, lung, heart, liver, esophagus, and stomach. the wife and family of the deceased man launched medical malpractice litigation against the fp, the er attending physician, and the local hospital.
charges against the fp were as follows: (1) care below the standard reasonably expected of a general practitioner, (2) he should have advised or warned the patient and provided early treatment, especially since he knew that his son had chickenpox, (3) he knew, or ought to have known, that the deceased was immunosuppressed from chronic steroids and therefore at increased risk, (4) he failed to provide medical assistance and prescribe the correct drug (acyclovir) on presentation, (5) he failed to make the patient aware of the potential complications of his long-term steroid use, and (6) he failed to refer the deceased to an appropriate specialist. the accusations against the er attending physician were similar: (1) his negligence was the direct cause of the deceased's death, (2) his medical care fell below the standard reasonably expected of an er physician, (3) he ought to have known that the patient was immunosuppressed from steroids, and therefore at high risk for complications from chickenpox, (4) he failed to provide proper medical assistance and treatment, (5) he failed to admit the patient on initial presentation and institute intravenous acyclovir, and (6) he failed to consult an appropriate specialist (internist or infectious disease specialist). damages were sought by the plaintiffs for pain and suffering, deprivation of a husband and father, and loss of the economic benefit afforded to the family from the potential employment earnings of the deceased over the next 27 years (assuming retirement at age 65). counsel for the defendants requested expert opinion on two key issues: (1) was the steroid dose the deceased received sufficient to cause immunosuppression? (2) if appropriate therapy with acyclovir had been started at the initial presentation with chickenpox, would the outcome have been any different?
chickenpox (varicella) has dramatically declined in all age groups, but most markedly in children, since the introduction of the varicella vaccine in 1995 in north america and developed countries. since the introduction of the vaccine, the decline in varicella-related hospitalization in the us was greatest among 0-4-year-old children, but rates also declined in older youths (5-19 years) and adults. 24 in temperate regions, 90% of cases of varicella occur in children <10 years of age, 5% occur in individuals >15 years old, and adults (>20 years) account for only 2%. the risk of hospitalization and death is greater in young infants and adults than in children, and most varicella-related deaths occur in previously healthy people. 25 although varicella is much less common in adults than in children, 47% of the deaths from complications occur in adults. 26 in tropical and subtropical countries, the mean age of patients with varicella is higher than in temperate regions, and up to 40% of immigrants from these areas are susceptible to varicella. healthy children rarely suffer from complications of varicella, the most common one being secondary bacterial infection (streptococcus and staphylococcus) of the skin and soft tissue. immunocompromised children are predisposed to more severe and progressive disease (up to one third), with multiple organ involvement; the lungs, liver, and central nervous system are most frequently affected. 27 mortality in these children ranges from 15% to 18%, and those with lympho-proliferative malignancies on chemotherapy have the greatest risk. bone marrow transplant recipients also have a high risk of varicella zoster virus (vzv) infection, with a probability of vzv infection of 30% by 1 year after transplant. 28 in a series of 231 cases of vzv infection, 36 presented with chickenpox and 195 with herpes zoster.
the overall vzv infection mortality was 9.7% (23 of 231), all with disseminated infection in the first 9 months. however, the mortality in those with herpes zoster was only 6.6%, versus 27.7% in those with varicella. 28 high-dose corticosteroids are also associated with significant complications of varicella and herpes zoster. 29 immunosuppression is most commonly seen with a high daily dose of 1 mg/kg of prednisone, or with moderate doses for prolonged periods. rates of infectious complications were not increased in patients given a daily dose of less than 10 mg, or a cumulative dose of less than 700 mg, of prednisone in a meta-analysis of 73 controlled trials. 30 many experts consider a prolonged daily dose of 15 mg prednisone or equivalent to be immunosuppressive. the us food and drug administration (fda) states that low doses of prednisone (or similar agents) for prolonged periods may also increase the risk of infection. 31 corticosteroids can suppress several stages of the immune response that lead to inflammation, but the main immunosuppressive effect is on cellular immunity. thus, steroids can increase the risk and severity of infection with a variety of agents (viruses, bacteria, fungi, and parasites). most notable are agents that require intact cellular immunity for control and eradication, such as herpes viruses, mycobacteria, listeria, nocardia, pneumocystis, candida, cryptococci, toxoplasma, and strongyloides, all of which are increased in patients on prolonged corticosteroids. the effect of corticosteroids on the inflammatory and immune responses is pleomorphic. an earlier study in guinea pigs demonstrated that similar levels of lymphocytopenia were induced by acute and chronic corticosteroid administration, but only chronic treatment was associated with depression of certain cell-mediated lymphocyte functions.
32 chronic cortisone treatment resulted in a marked decrease in both antigen-induced migration inhibitory factor (mif) and proliferation, although mitogen responses remained normal. over the last few decades, corticosteroids have been found to inhibit the function of various cell types: (1) macrophages/monocytes: inhibit cyclooxygenase-2 and phospholipase a2 (interrupting the prostaglandin and leukotriene pathways), and suppress cytokine production and release of interleukin (il)-1, il-6, and tumor necrosis factor (tnf)-α; (2) endothelial cells: impair endothelial leucocyte adhesion molecule-1 (elam-1) and intracellular adhesion molecule-1 (icam-1), which are critical for leucocyte localization; (3) basophils: block ige-dependent release of histamine and leukotriene c4; (4) fibroblasts: inhibit the arachidonic acid pathway (as with monocytes) and suppress growth factor-induced dna synthesis and fibroblast proliferation; (5) lymphocytes: inhibit production or expression of the cytokines il-1, il-2, il-3, il-6, tnf-α, gm-csf, and interferon-γ. 33 the association of steroid therapy with increased risk, severity, and complications of vzv infections has been well established for decades. 34 patients receiving high-dose corticosteroids are at risk for disseminated disease and fatality, whereas patients on low-dose schedules are not at increased risk. 34, 35 esophagitis and gastrointestinal involvement by vzv are distinctly rare and have been described both in immunocompromised hosts and in apparently healthy subjects as complications of chickenpox or herpes zoster. autopsy studies of disseminated varicella in children with acute lymphoblastic leukemia or lymphoma on chemotherapy have demonstrated involvement of the esophagus, small bowel, colon, liver, spleen, and pancreas. 36 fulminant and fatal cases of varicella hepatitis have been described predominantly in immunosuppressed children and adults, but also in healthy people.
37 rare cases of adult varicella on chronic steroids (for asthma) with small bowel involvement presenting with abdominal pain and gastrointestinal bleeding have been reported. 38 however, it appears that the patient may have been on a moderately high dose of methylprednisolone (40 mg daily). in an immunocompetent young adult on inhaled steroids for asthma, varicella has been reported to cause diffuse abdominal pain and tenderness with hepatic, esophageal, and pulmonary involvement, with recovery after acyclovir therapy. 39 bullous and necrotic ulcerative lesions of the esophagus and stomach were described in the pathology literature of fatal varicella as early as 1940. 40 stomach and small bowel changes detected by radiological imaging have also been reported in a case of chickenpox. 41 occasionally, healthy adults with varicella may have mild symptoms of esophagitis that respond to antihistamine-h2 blockers, suggesting temporary esophageal reflux. 42 shingles esophagitis with a benign course has also been seen on endoscopy in patients without widespread dissemination of herpes zoster. 43 the deceased patient (case 3) was receiving 8 mg daily of methylprednisolone prior to his presentation with chickenpox. this dose is equivalent to 10 mg of prednisone and normally would not be considered immunosuppressive. however, the course of the disease and widespread dissemination with fatality resembled that of an immunocompromised host. how can we explain this reaction? the possibilities include: (1) an inaccurate history of the steroid dose provided by the patient; (2) rarely, dissemination and fatality can occur in healthy adults; (3) an unrecognized immunocompromised state such as hiv infection, or rare genetic mutations or polymorphisms in genes involved in cellular immunity; and (4) a higher free active concentration of the drug than would be expected.
methylprednisolone (medrol) is 70% bound to protein, mainly albumin, and a decrease in serum albumin by 30-50% could increase the active unbound drug by almost the same proportion. on admission to hospital, the patient's serum albumin was 15 g/l (lower limit of normal 35 g/l), or 42% of the lower limit of normal. although serum albumin can decrease in acute illness from varicella, the half-life of circulating albumin is 15 days; thus, even after 7 days of chickenpox, it should not have decreased more than 25% below normal, even if his liver had stopped producing any protein (which is not likely). hence, the patient probably had a chronically low serum albumin from his chronic colitis. his free concentration of corticosteroid would therefore have been more than 50% greater than expected, equivalent to at least 15 mg of prednisone per day. can this information absolve the defendants from responsibility for the patient's adverse outcome? it could be argued by the defendants that it is not common knowledge or usual practice to consider the effect of protein binding of drugs on their toxicity. furthermore, it would not be expected that the fp and er physicians be cognizant of these facts. the defendants maintain that their management did not fall below the expected standard of care, and that most reasonable physicians would not have considered the patient immunocompromised on such a low dose of prednisolone. the outcome was unpredictable, and only in hindsight was it evident that the deceased was likely immunocompromised and susceptible to a higher risk of an adverse outcome. experts' opinions for the plaintiffs' side argued that the involved physicians should have been aware that adults (even normal hosts) are at greater risk of severe disease and complications from chickenpox than children are. therefore, the fp and er physician were remiss in not prescribing an antiviral drug (acyclovir).
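the free-drug argument above can be sketched numerically. this is a simplified illustration only: it assumes protein binding scales linearly with the albumin concentration, which is an approximation, and the constants are taken from the case narrative, not from pharmacokinetic tables.

```python
# Simplified sketch of the free-drug argument (assumes protein binding
# scales linearly with serum albumin, which is an approximation).

PROTEIN_BOUND_FRACTION = 0.70      # methylprednisolone is ~70% protein bound
PREDNISONE_EQUIVALENT_MG = 10.0    # 8 mg methylprednisolone ~ 10 mg prednisone

def free_fraction(albumin_g_per_l, normal_albumin_g_per_l=35.0):
    """Estimate the unbound (active) drug fraction when albumin is low."""
    bound = PROTEIN_BOUND_FRACTION * (albumin_g_per_l / normal_albumin_g_per_l)
    return 1.0 - bound

normal_free = free_fraction(35.0)    # ~0.30 at normal albumin
patient_free = free_fraction(15.0)   # the patient's albumin was 15 g/L

relative_increase = patient_free / normal_free
effective_dose = PREDNISONE_EQUIVALENT_MG * relative_increase
print(f"free fraction: {normal_free:.2f} -> {patient_free:.2f}")
print(f"effective prednisone-equivalent dose: {effective_dose:.1f} mg/day")
```

under this crude linear assumption the effective prednisone-equivalent dose comes out well above the 15 mg/day that the text argues as a lower bound, which is the direction of the case's reasoning.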
the er physician should have admitted the deceased at the first presentation and started intravenous acyclovir, as he suspected visceral dissemination (esophagitis) with varicella, irrespective of the immune state of the patient. a previous randomized controlled trial (rct) of oral acyclovir therapy for uncomplicated varicella in healthy adults reported mild clinical benefit (decrease of symptoms, fever, and time to cutaneous healing), but only in those initiating treatment within 24 h of the rash. 44 late treatment (25-72 h) had no benefit. the low frequency of serious complications (pneumonia, encephalitis, or death) precluded any evaluation of acyclovir on these outcomes. in immunocompromised patients with vzv infection, later initiation of therapy (72 h after onset of rash) may be of value. 45, 46 although there is no rct to prove the benefit of intravenous acyclovir in normal adults with varicella complicated by visceral involvement, observational and cohort studies suggest benefit. 47 thus, intravenous acyclovir continues to be the standard therapy for healthy adults and immunocompromised hosts with clinically significant visceral disease (pneumonia, encephalitis) or dissemination. chronic corticosteroid therapy can have numerous side effects and complications. it is important for physicians to counsel their patients on these potential adverse events and provide a risk-benefit assessment. many organs and systems in the body can be adversely affected by chronic steroid therapy (endocrine, bone, eyes, muscle, brain, immune system, skin, etc.). it is important to counsel on the potential increased risk of infectious diseases, and certain precautions should be taken before embarking on chronic therapy. these include a mantoux skin test, with treatment for latent tuberculosis in those with positive reactions who are about to receive prednisone 15 mg/day or more for 30 days or longer. 48 a baseline chest radiograph for active or inactive disease should be performed beforehand.
it is also recommended that steroid-dependent children undergo vzv antibody testing and, if this is negative, be offered varicella vaccination. 33 it seems prudent to apply these guidelines to adults on chronic steroid therapy as well. for patients with previous chickenpox or adequate antibodies, the varicella zoster vaccine may be considered to reduce the risk and severity of shingles. this vaccine, a live attenuated vaccine, has been found effective and is recommended for persons 60 years of age and older to reduce the burden of illness and the incidence of postherpetic neuralgia. 49 presently, this vaccine is not indicated in immunocompromised adults, so it should be administered before starting prolonged steroids. the product monograph of zostavax tm (merck) states that the varicella zoster vaccine is contraindicated in patients receiving high-dose corticosteroids, but not in individuals on inhaled or low-dose steroids. the varicella vaccine has been found safe in children with moderate immune deficiency, 50 but it is contraindicated in those with substantial suppression of cellular immunity (as with high-dose steroids). 51 what should have been the appropriate steps of action in this case? once the fp was notified that the patient's child had chickenpox, he should have counseled the father and determined his past history of, or antibody level against, vzv. for patients considered non-immune and severely immunosuppressed (moderate to high-dose corticosteroid, 20 mg/day or more), vzv immune globulin should be offered and treatment with acyclovir should be instituted at the first sign of varicella. 33 since the deceased was considered to be receiving a low dose of steroid, it would have been more appropriate to offer treatment with acyclovir at the first sign of a typical rash, or to provide a prescription to be filled within 24 h of onset of varicella.
references

- ahfs drug information
- changes in the transmission of tuberculosis in new york city from 1990 to 1999
- advanced survey of tuberculosis transmission in a complex socioepidemiologic scenario with a high proportion of cases in immigrants
- trends in tuberculosis incidence - united states
- the global plan to stop tb 2006-2015. world health organization
- global tuberculosis control: epidemiology, strategy, financing. world health organization report
- drug-induced liver disease and environmental toxins
- hepatic toxicity of antitubercular agents. role of different drugs, 199 cases
- risk factors for isoniazid (inh)-induced liver dysfunction
- harrison's principles of internal medicine
- isoniazid
- antituberculosis drug-induced hepatotoxicity: concise up-to-date review
- emergency issues in head and neck infections. in: emerging issues and controversies in infectious disease
- infectious rhinosinusitis in adults: classification, etiology and management
- bacteriologic findings associated with chronic bacterial maxillary sinusitis in adults
- adverse reactions to trimethoprim-sulfamethoxazole
- epidermal necrolysis (stevens-johnson syndrome and toxic epidermal necrolysis)
- medication use and the risk of stevens-johnson syndrome or toxic epidermal necrolysis
- scorten: a severity-of-illness score for toxic epidermal necrolysis
- performance of the scorten during the first five days of hospitalization to predict the prognosis of epidermal necrolysis
- toxic epidermal necrolysis: does immunoglobulin make a difference?
- intravenous immunoglobulin treatment for stevens-johnson syndrome and toxic epidermal necrolysis: prospective non-comparative study showing no benefit in mortality or progression
- adverse events - physician-prescribed opioids: risk identification for all physicians
- fitzpatrick's dermatology in general medicine
- the epidemiology of varicella and its complications
- varicella complications and cost
- varicella in children with cancer: seventy-seven cases
- infection with varicella zoster virus after marrow transplantation
- varicella and herpes zoster: changing concepts of the natural history, control and importance of a not-so-benign virus (first of two parts)
- glucocorticoids and infection
- hormones and synthetic substitutes: adrenals
- immunosuppressive effects of glucocorticosteroids: differential effects of acute vs chronic administration on cell-mediated immunity
- adrenocorticotropic hormone, adrenocortical steroids and their synthetic analogs: inhibitors of synthesis and actions of adrenocortical hormones
- varicella and herpes zoster: changes in concepts of natural history, control and importance of a not-so-benign virus (second of two parts)
- the human herpesviruses: an interdisciplinary perspective
- disseminated varicella at autopsy in children with cancer
- varicella hepatitis in the immunocompromised adult: a case report and review of the literature
- fatal varicella in an adult: case report and review of gastrointestinal complications of chickenpox
- digestive manifestations in an immunocompetent adult with varicella
- visceral lesions associated with varicella
- cimetidine in "chickenpox esophagitis"
- shingles esophagitis: endoscopic diagnosis in two patients
- treatment of adult varicella with acyclovir: a randomized placebo-controlled trial
- acyclovir halts progression of herpes zoster in immunocompromised patients
- treatment of varicella-zoster virus infections in severely immunocompromised patients
- early treatment with acyclovir for varicella pneumonia in otherwise healthy adults: retrospective controlled study and review
- in the clinic: tuberculosis
- a vaccine to prevent herpes zoster and postherpetic neuralgia in older patients (for the shingles prevention study group)
- varicella vaccine in children with acute lymphoblastic leukemia and non-hodgkin lymphoma
- general recommendations on immunization. recommendations of the advisory committee on immunization practices (acip)

key: cord-200147-ans8d3oa
authors: arimond, alexander; borth, damian; hoepner, andreas; klawunn, michael; weisheit, stefan
title: neural networks and value at risk
date: 2020-05-04
journal: nan
doi: nan
sha:
doc_id: 200147
cord_uid: ans8d3oa

abstract: utilizing a generative regime switching framework, we perform monte-carlo simulations of asset returns for value at risk threshold estimation. using equity markets and long term bonds as test assets in the global, us, euro area and uk setting over an up to 1,250 weeks sample horizon ending in august 2018, we investigate neural networks along three design steps relating (i) to the initialization of the neural network, (ii) its incentive function according to which it has been trained and (iii) the amount of data we feed. first, we compare neural networks with random seeding with networks that are initialized via estimations from the best-established model (i.e. the hidden markov). we find the latter to outperform in terms of the frequency of var breaches (i.e. the realized return falling short of the estimated var threshold).
second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions so that the neural networks optimize for accuracy while also aiming to stay in empirically realistic regime distributions (i.e. bull vs. bear market frequencies). in particular, this design feature enables the balanced incentive recurrent neural network (rnn) to outperform the single incentive rnn as well as any other neural network or established approach by statistically and economically significant levels. third, we halve our training data set of 2,000 days. we find our networks, when fed with substantially less data (i.e. 1,000 days), to perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets ... while leading papers on machine learning in asset pricing focus predominantly on returns and stochastic discount factors (chen, pelger & zhu 2020; gu, kelly & xiu 2020), we are motivated by the global covid-19 virus crisis and the subsequent stock market crash to investigate if and how machine learning methods can enhance value at risk (var) threshold estimates. in line with gu, kelly & xiu (2020: 7), we would like to open by disclaiming our awareness that "[m]achine learning methods on their own do not identify deep fundamental associations" without human scientists designing hypothesized mechanisms into an estimation problem. 1 nevertheless, measurement errors can be reduced based on machine learning methods. hence, machine learning methods employed as means to an end instead of as ends in themselves can significantly support researchers in challenging estimation tasks. 2 in their already legendary paper, gu, kelly & xiu (gkx in the following, 2020) apply machine learning to a key problem in the academic finance literature: 'measuring asset risk premia'.
they observe that machine learning improves the description of expected returns relative to traditional econometric forecasting methods based on (i) better out-of-sample r-squared and (ii) forecasts earning larger sharpe ratios. more specifically, they compare four 'traditional' methods (ols, glm, pcr/pca, pls) with regression trees (e.g. random forests) and a simple 'feed forward neural network' based on 30k stocks over 720 months, using 94 firm characteristics, 74 sectors and 900+ baseline signals. crediting inter alia (i) flexibility of functional form and (ii) enhanced ability to prioritize vast sets of baseline signals, they find the feed forward neural networks (ffnn) to perform best. contrary to results reported from computer vision, gkx further observe that "'shallow' learning outperforms 'deep' learning" (p.47), as their neural network with 3 hidden layers excels beyond neural networks with more hidden layers. they interpret this result as a consequence of a relatively much lower signal-to-noise ratio and much smaller data sets in finance. interestingly, the outperformance of nns over the other five methods widens at portfolio compared to stock level, another indication that an understanding of the signal-to-noise ratio in financial markets is crucial when training neural networks. that said, while classic ols is statistically significantly weaker than all other models, nn3 beats all others but not always at statistically significant levels. gkx finally confirm their results via monte carlo simulations. they show that if one generated two hypothetical security price datasets, one linear and un-interacted and one nonlinear and interactive, ols and glm would dominate in the former, while nns dominate in the latter. they conclude by attributing the "predictive advantage [of neural networks] to accommodation of nonlinear interactions that are missed by other methods." (p.47) following gkx, an extensive literature on machine learning in finance is rapidly emerging.
chen, pelger and zhu (cpz in the following, 2020) introduce more advanced (i.e. recurrent) neural networks and estimate a (i) non-linear asset pricing model (ii) regularized under no-arbitrage conditions operationalized via a stochastic discount factor (iii) while considering economic conditions. in particular, they attribute the time-varying dependency of the stochastic discount factor of about ten thousand us stocks to macroeconomic state processes via a recurrent long short-term memory (lstm) network. in cpz's (2020: 5) view, "it is essential to identify the dynamic pattern in macroeconomic time series before feeding them into a machine learning model". avramov et al. (2020) replicate the approaches of gkx (2020), cpz (2020), and two conditional factor pricing models: kelly, pruitt, and su's (2019) linear instrumented principal component analysis (ipca) and gu, kelly, and xiu's (2019) nonlinear conditional autoencoder in the context of real-world economic restrictions. while they find strong fama-french six factor (ff6) adjusted returns in the original setting without real-world economic constraints, these returns reduce by more than half if microcaps or firms without credit ratings are excluded. in fact, when avramov et al. (2020: 3) are "[e]xcluding distressed firms, all deep learning methods no longer generate significant (value-weighted) ff6-adjusted return at the 5% level." they confirm this finding by showing that the gkx (2020) and cpz (2020) machine learning signals perform substantially weaker in economic conditions that limit arbitrage (i.e. low market liquidity, high market volatility, high investor sentiment). curiously though, avramov et al. (2020: 5) find that the only linear model they analyse, kelly et al.'s (2019) ipca, "stands out … as it is less sensitive to market episodes of high limits to arbitrage."
their finding, as well as the results of cpz (2020), implies that economic conditions have to be explicitly accounted for when analysing the abilities and performance of neural networks. furthermore, avramov et al. (2020) as well as gkx (2020) and cpz (2020) make anecdotal observations that machine learning methods appear to reduce drawdowns. 1 while their manuscripts focused on return predictability, we devote our work to risk predictability in the context of market-wide economic conditions. the covid-19 crisis as well as the density of economic crises in the previous three decades imply that catastrophic 'black swan' type risks occur more frequently than predicted by symmetric economic distributions. consequently, underestimating tail risks can have catastrophic consequences for investors. hence, the analysis of risks, with the ambition to avoid underestimation, deserves in our view equivalent attention to the analysis of returns, with its ambition to identify investment opportunities resulting from mispricing. more specifically, since a symmetric approach such as the "mean-variance framework implicitly assumes normality of asset returns, it is likely to underestimate the tail risk for assets with negatively skewed payoffs" (agarwal & naik, 2004: 85). empirically, equity market indices usually exhibit, not only since covid-19, negative skewness in their return payoffs (albuquerque, 2012, kozhan et al. 2013). consequently, it is crucial for a post covid-19 world with its substantial tail risk exposures (e.g. second pandemic wave, climate change, cyber security) that investors are provided with tools which avoid the underestimation of risks as best possible.
naturally, neural networks with their near unlimited flexibility in modelling non-linearities appear suitable candidates for such conservative tail risk modelling that focuses on avoiding underestimation. we regard giglio & xiu (2019) and kozak, nagel & santosh (2020) as also noteworthy, as are efforts by fallahgouly and franstiantoz (2020) and horel and giesecke (2019) to develop significance tests for neural networks. our paper investigates whether basic and/or more advanced neural networks have the capability of underestimating tail risk less often at common statistical significance levels. we operationalize tail risk as value at risk, which is the most used tail risk measure in both commercial practice and the academic literature (billio et al. 2012, billio and pellizon, 2000, jorion, 2005, nieto & ruiz, 2015). specifically, we estimate var thresholds using classic methods (i.e. mean/variance, hidden markov model) 1 as well as machine learning methods (i.e. feed forward, convolutional, recurrent), which we advance via initialization of input parameters and regularization of the incentive function. recognizing the importance of economic conditions (avramov et al. 2020, chen et al. 2020), we embed our analysis in a regime-based asset allocation setting. specifically, we perform monte-carlo simulations of asset returns for value at risk threshold estimation in a generative regime switching framework. using equity markets and long term bonds as test assets in the global, us, euro area and uk setting over an up to 1,250 weeks sample horizon ending in august 2018, we investigate neural networks along three design steps relating (i) to the initialization of the neural network's input parameters, (ii) its incentive function according to which it has been trained and which can lead to extreme outputs if it is not regularized, as well as (iii) the amount of data we feed.
first, we compare neural networks with random seeding with networks that are initialized via estimations from the best-established model (i.e. the hidden markov). we find the latter to outperform in terms of the frequency of var breaches (i.e. the realized return falling short of the estimated var threshold). second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions so that the neural networks optimize for accuracy while also aiming to stay in empirically realistic regime distributions (i.e. bull vs. bear market frequencies). this design feature leads to better regularization of the neural network, as it substantially reduces the extreme outcomes that can result from a single incentive function. in particular, this design feature enables the balanced incentive recurrent neural network (rnn) to outperform the single incentive rnn as well as any other neural network or established approach by statistically and economically significant levels. third, we halve our training data set of 2,000 days. we find our networks, when fed with substantially less data (i.e. 1,000 days), to perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets. our contributions are fivefold. first, we extend the currently return-focused literature of machine learning in finance (avramov et al. 2020, chen et al. 2020, gu et al. 2020) to also focus on the estimation of risk thresholds. assessing the advancements that machine learning can bring to risk estimation potentially offers valuable innovation to asset owners such as pension funds and can better protect the retirement savings of their members. 2 second, we advance the design of our three types of neural networks by initializing their input parameters with the best established model.
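the evaluation criterion used throughout, the frequency of var breaches, can be sketched as a simple backtest loop. this is an illustrative sketch only; the function and variable names are ours, and the thresholds would come from each competing model in the paper's setting.

```python
# Count VaR breaches: a breach occurs when the realized return falls
# short of the (negative) estimated VaR threshold for that period.
# Illustrative sketch; thresholds would come from each competing model.

def breach_frequency(realized_returns, var_thresholds):
    """var_thresholds holds positive VaR numbers, e.g. 0.05 for a 5% loss."""
    assert len(realized_returns) == len(var_thresholds)
    breaches = sum(
        1 for r, var in zip(realized_returns, var_thresholds) if r < -var
    )
    return breaches / len(realized_returns)

# toy example: three weeks of returns against a constant 5% VaR threshold
freq = breach_frequency([0.01, -0.08, 0.02], [0.05, 0.05, 0.05])
print(freq)  # one breach (the -8% week) out of three observations
```

a well-calibrated 95% var model should then show a breach frequency close to 5%; materially higher frequencies indicate the underestimation of tail risk that the paper penalizes.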
while initializations are a common research topic in core machine learning fields such as image classification or machine translation (glorot & bengio, 2010), we are not aware of any systematic application of initialized neural networks in the field of finance. hence, demonstrating the statistical superiority of an initialized neural network over its non-initialized self appears a relevant contribution to the community. third, while cpz (2020) regularize their neural networks via no-arbitrage conditions, we regularize via balancing the incentive function of our neural networks on multiple objectives (i.e. estimation accuracy and empirically realistic regime distributions). this prevents any single objective from leading to extreme outputs and hence balances the computational power of the trained neural network in desirable directions. in fact, our results show that amendments to the incentive function may be the strongest tool available to us in engineering neural networks. fourth, we also hope to make a marginal contribution to the literature on value at risk estimation. whereas our paper is focused on advancing machine learning techniques and is therefore, following billio and pellizon (2000), anchored in a regime-based asset allocation setting 1 to account for time-varying economic states (cpz, 2020), we still believe that the nonlinearity and flexible form especially of recurrent neural networks may be of interest to the var (forecasting) literature (billio et al. 2012, nieto & ruiz, 2015, patton et al. 2019). fifth, our final contribution lies in the documentation of weaknesses of neural networks as applied to finance. while avramov et al. (2020) subject neural networks to real-world economic constraints and find these to substantially reduce their performance, we expose our neural networks to data scarcity and document just how much data these new approaches need to advance the estimation of risk thresholds.
naturally, such a long data history may not always be available in practice when estimating asset management var thresholds, and therefore established methods and neural networks are likely to be used in parallel for the foreseeable future. in section two, we describe our testing methodology including all five competing models (i.e. mean/variance, hidden markov model, feed forward neural network, convolutional neural network, recurrent neural network). section three describes data, model training, monte carlo simulations and baseline results. section four then advances our neural networks via initialization and balancing of the incentive functions and discusses the results of both features. section five conducts robustness tests and sensitivity analyses before section six concludes. 1 we acknowledge that most recent statistical advances in value at risk estimation have concentrated on jointly modelling value at risk and expected shortfall and were therefore naturally less focused on time-varying economic states (patton et al. 2019, taylor 2019, 2020).

value at risk estimation with mean/variance approach

when modelling financial time series related to investment decisions, the asset return r_{p,t} of portfolio p at time t, as defined in equation (1) below, is the focal point of interest instead of the asset price P_{p,t}, since investors earn the difference between the price at which they bought and the price at which they sold:

r_{p,t} = (P_{p,t} - P_{p,t-1}) / P_{p,t-1} (1)

value-at-risk (var) metrics are an important tool in many areas of risk management. our particular focus is on var measures as a means to perform risk budgeting in asset allocation. asset owners such as pension funds or insurances as well as asset managers often incorporate var measures into their investment processes (jorion, 2005). value at risk is defined in equation (2) as the lower bound of a portfolio's return, which the portfolio or asset is not expected to fall short of with a certain probability (a) within the next period of allocation (n):
Pr(r_{t+n} < -VaR_t(a)) = 1 - a (2)

for example, an investment fund indicates that, based on the composition of its portfolio and on current market conditions, there is a 95% or 99% probability it will not lose more than a specified amount of assets over the next 5 trading days. the var measurement can be interpreted as a threshold (billio and pellizon 2000). if the actual portfolio or asset return falls below this threshold, we refer to this as a var breach. the classic mean variance approach of measuring var values is based on the assumption that asset returns follow a (multivariate) normal distribution. var thresholds can then be measured by estimating the mean and covariance (μ, Σ) of the asset returns by calculating the sample mean and sample covariance of the respective historical window. the 1% or 5% percentile of the resulting normal distribution will be an appropriate estimator of the 99% or 95% var threshold. we refer to this way of estimating var thresholds as the "classical" approach and use it as the baseline of our evaluation. this classic approach, however, does not sufficiently reflect the skewness of real-world equity markets and the divergence of return distributions across different economic regimes. in other words, the classic approach does not take into account longer term market dynamics, which express themselves as phases of growth or of downside, also commonly known as bull markets and bear markets. for this purpose, regime switching models had grown in popularity well before machine learning entered finance (billio and pellizon 2000). in this study, we model financial markets inter alia using neural networks while accounting for shifts in economic regimes (avramov et al. 2020, chen et al., 2020). due to the generative nature of these networks, they are able to perform monte-carlo simulation of future returns, which could be beneficial for var estimation.
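the classical approach described above can be sketched minimally for a single asset using the normal-percentile logic (an illustration with invented toy returns; the paper's multivariate portfolio version would use the sample covariance as well):

```python
# Classical mean/variance VaR for a single return series: fit a normal
# distribution to the historical window and take its lower percentile.
from statistics import NormalDist, mean, stdev

def classical_var(returns, confidence=0.95):
    """Return the VaR threshold as a positive number (expected loss bound)."""
    dist = NormalDist(mu=mean(returns), sigma=stdev(returns))
    # the (1 - confidence) percentile of returns, sign-flipped into a loss
    return -dist.inv_cdf(1.0 - confidence)

history = [0.004, -0.012, 0.007, -0.003, 0.010, -0.008, 0.002, -0.001]
var_95 = classical_var(history, confidence=0.95)
print(f"95% VaR threshold: {var_95:.4f}")
# a VaR breach would then be any realized return below -var_95
```

by construction the 99% threshold is wider than the 95% one, and any skewness in the historical window is ignored, which is exactly the weakness the regime-based models address.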
in asset managers' risk budgeting it is advantageous to know about the current market phase (regime) and to estimate the probability that the regime changes (schmeding et al., 2019). the most common way of modelling market regimes is by distinguishing between bull markets and bear markets. unfortunately, market regimes are not directly observable, but are rather to be derived indirectly from market data. regime switching models based on hidden markov models are an established tool for regime-based modelling. hidden markov models (hmm), which are based on markov chains, are models that allow for analysing and representing characteristics of time series such as negative skewness (ang and bekaert, 2002; timmerman, 2000). we employ the hmm for the special case of two economic states, called 'regimes' in the hmm context. specifically, we model asset returns y_t ∈ R^n (we are looking at n ≥ 1 assets) at time t to follow an n-dimensional gaussian process with hidden states s_t ∈ {1, 2}, as shown in equation (3): y_t | s_t = i ~ N(μ_i, Σ_i). the returns are modelled to have state-dependent expected returns μ_i ∈ R^n as well as covariances Σ_i ∈ R^{n×n}. the dynamic of s_t follows a homogenous markov chain with transition probability matrix A, with p = Pr(s_t = 1 | s_{t-1} = 1) and q = Pr(s_t = 2 | s_{t-1} = 2). this definition describes if and how states change over time. it is also important to note the 'markov property': the probability of being in any state at the next point in time only depends on the present state, not on the sequence of states that preceded it. furthermore, the probability of being in state 1 at a certain point in time is given as π_t = Pr(s_t = 1), and (1 - π_t) = Pr(s_t = 2). this is also called the smoothed state probability. by estimating the smoothed probability π_t of the last element of the historical window as the present regime probability, we can use the model to start from there and perform monte-carlo simulations of future asset returns for the next days. 1 this is outlined for the two-regimes case in figure 1 below.
2 figure 1: algorithm for the hidden markov monte-carlo simulation (for two regimes). step 1: estimate θ = (π_0, A, μ, Σ) from the historical window; the regime path and returns are then sampled forward from these parameters. when graves [13] successfully made use of a long short-term memory (lstm) based recurrent neural network to generate realistic sequences of handwriting, he followed the idea of using a mixture density network (mdn) to parametrize a gaussian mixture predictive distribution (bishop, 1995). compared to standard neural networks (multi-layer perceptrons) as used by gkx (2020), this network does not only predict the conditional average of the target variable as a point estimate (in gkx's case, expected risk premia), but rather estimates the conditional distribution of the target variable. given the autoregressive nature of graves' approach, the output distributions are not assumed to be static over time, but are dynamically conditioned on previous outputs, thus capturing the temporal context of the data. we consider both characteristics as beneficial for modelling financial market returns, which experience a low signal-to-noise ratio, as highlighted by gkx's results, due to inherently high levels of intertemporal uncertainty. the core of the proposed neural network regime switching framework is a (swappable) neural network architecture, which takes as input the historical sequence of daily asset returns. at the output level, the framework computes regime probabilities and provides learnable gaussian mixture distribution parameters, which can be used to sample new asset returns for monte-carlo simulation. a multivariate gaussian mixture model (gmm) is a weighted sum of k different components, each following a distinct multivariate normal distribution, as shown in equation (5): p(y) = Σ_{i=1}^{k} φ_i N(y; μ_i, Σ_i), with Σ_{i=1}^{k} φ_i = 1. a gmm by its nature does not assume a single normal distribution, but naturally models a random variable as being the interleave of different (multivariate) normal distributions.
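the hidden markov monte-carlo simulation of figure 1 can be sketched roughly as follows. this is a toy single-asset version with invented parameter values; the paper works with multivariate returns and parameters estimated from the historical window.

```python
# Toy sketch of the Figure 1 idea: simulate future returns from a fitted
# two-regime Gaussian HMM. All parameter values below are invented for
# illustration, not taken from the paper.
import random

# regime 1 = bull, regime 2 = bear (invented parameters)
MU = {1: 0.0005, 2: -0.0010}   # daily mean return per regime
SIGMA = {1: 0.008, 2: 0.020}   # daily volatility per regime
STAY = {1: 0.98, 2: 0.95}      # p = P(s_t=1|s_{t-1}=1), q = P(s_t=2|s_{t-1}=2)

def simulate_path(pi_t, n_days, rng=random):
    """Start from the smoothed probability pi_t of being in regime 1."""
    state = 1 if rng.random() < pi_t else 2
    returns = []
    for _ in range(n_days):
        # sample this day's return from the current regime's Gaussian
        returns.append(rng.gauss(MU[state], SIGMA[state]))
        # transition: stay with probability STAY[state], else switch
        if rng.random() >= STAY[state]:
            state = 2 if state == 1 else 1
    return returns

# one Monte-Carlo path of 5 trading days, starting 90% sure of a bull regime
path = simulate_path(pi_t=0.9, n_days=5)
print(path)
```

repeating this over many paths yields a simulated return distribution whose lower percentile serves as the regime-aware var threshold estimate.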
in our model, we interpret k as the number of regimes, and φ_i explains how much each regime contributes to the current output. in other words, φ_i can be seen as the probability that we are in regime i. in this sense the gmm output provides a suitable level of interpretability for the use case of regime based modelling. with regard to the neural network regime switching model, we extend the notion of a gaussian mixture by conditioning φ_i via a yet undefined neural network f on the historic asset returns within a window of a certain size. we call this window the receptive field and denote its size by r:

φ_i(t) = f(y_{t−r+1}, …, y_t)_i    (6)

this extension makes the gaussian mixture weights dependent on the (recent) history of the time varying asset returns. note that we only condition φ on the historical returns. the other parameters of the gaussian mixture (μ, Σ) are modelled as unconditioned, yet optimizable parameters of the model. this basically means we assume the parameters of the gaussians to be constant over time (per regime). this is in contrast to the standard mdn, where (μ, Σ) are also conditioned on the inputs and therefore can change over time. keeping these remaining parameters unconditional is crucial to allow for a fair comparison between the neural networks and the hmm, which also exhibits time invariant parameters (μ, Σ) in its regime shift probabilities. following graves (2013), we define the probability given by the network and the corresponding sequence loss as shown in equations (7) and (8), respectively:

Pr(y) = Π_{t=r}^{T−1} p(y_{t+1} | φ(t), μ, Σ)    (7)

L(y) = − Σ_{t=r}^{T−1} log p(y_{t+1} | φ(t), μ, Σ)    (8)

since financial markets operate in weekly cycles, with many investors shying away from exposure to substantial leverage during the illiquid weekend period, we are not surprised to observe that model training is more stable when choosing the predictive distribution to be responsible not only for the next day, but for the next 5 days (hann and steuer, 1995). we call this forward looking window the lookahead.
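the conditioning of the mixture weights on a return window can be illustrated with a minimal numpy sketch; the single linear layer standing in for the network f, and all parameter names, are our own simplification, not the paper's architecture:

```python
import numpy as np

def regime_weights(window, W, b):
    """mixture weights phi conditioned on the last r returns (equation (6) style).
    window: flattened vector of recent returns; W, b: parameters of a
    placeholder network f (a single linear layer here, for illustration)."""
    logits = W @ window + b
    z = np.exp(logits - logits.max())   # numerically stable softmax
    return z / z.sum()

def mixture_logpdf(y, phi, mu, sigma):
    """log-density of a univariate gaussian mixture with weights phi."""
    comp = phi * np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log(comp.sum())
```

minimizing the negative of `mixture_logpdf` summed over the training window corresponds to the sequence loss in equation (8), with (μ, σ) shared across time per regime.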
this is also practically aligned with the overall investment process, in which we want to appropriately model the upcoming allocation period, which usually spans multiple days. it also fits the intuition that regimes do not switch daily but are stable for at least a week. the extended sequence probability and sequence loss are denoted accordingly in equations (9) and (10). an important feature of the neural network regime model is how it simulates future returns. we follow graves' (2013) approach and conduct sequential sampling from the network. when we want to simulate a path of returns for the next n business days, we do this according to the algorithm displayed in figure 2. in accordance with gkx (2020), we first focus our analysis on traditional "feed-forward" neural networks before engaging with more sophisticated neural network architectures for time series analysis within the neural network regime model. the traditional model of neural networks, also called the multi-layer perceptron, consists of an "input layer" which contains the raw input predictors, one or more "hidden layers" that combine input signals in a nonlinear way, and an "output layer", which aggregates the output of the hidden layers into a final predictive signal. the nonlinearity of the hidden layers arises from the application of nonlinear "activation functions" to the combined signals. we visualise the traditional feed forward neural network and its input layers in figure 4. we set up our network structure in alignment with gkx's (2020) best performing neural network 'nn3'. our network thus has 3 hidden layers with a decreasing number of hidden units (32, 16, 8). since we want to capture the temporal aspect of our time series data, we condition the network output on a receptive field of at least 10 days.
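a minimal numpy sketch of such an nn3-style feed-forward pass with a softmax regime head; the initialization scheme and function names are our own illustration, and the paper's trained weights are not reproduced:

```python
import numpy as np

def feed_forward_regimes(x, weights):
    """nn3-style feed-forward pass: hidden layers with tanh activations and a
    softmax head that yields regime probabilities. weights: list of (W, b)."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(W @ h + b)
    W, b = weights[-1]
    logits = W @ h + b
    z = np.exp(logits - logits.max())   # numerically stable softmax
    return z / z.sum()

def init_weights(sizes, seed=0):
    """random initialization for layer sizes, e.g. [input_dim, 32, 16, 8, k]."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0, 1 / np.sqrt(m), (n, m)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]
```

for a receptive field of 10 days and a single asset, the input would be the flattened 10-day return window and the head would output the k = 2 regime weights φ used in the mixture.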
even though the receptive field of the network is not very large in this case, the dense structure of the network results in a very high number of parameters (1,698 in total, including the gmm parameters). in between layers, we make use of the tanh activation function. convolutional neural networks (cnns) can also be applied within the proposed neural network regime switching model. recently, cnns have gained popularity for time series analysis; for example, van den oord et al. (2015) successfully applied convolutional neural networks to time series data for generating audio waveforms, yielding state-of-the-art text-to-speech and music generation. their adaptation of convolutional neural networks, called wavenet, has been shown to capture long ranging dependencies in sequences very well. in its essence, a wavenet consists of multiple layers of stacked convolutions along the time axis. crucial features of these convolutions are that they have to be causal and dilated. causal means that the output of a convolution only depends on past elements of the input sequence. dilated convolutions are ones that exhibit "holes" in their respective kernel, which effectively means that the filter size increases while being dilated with zeros in between. a wavenet is typically constructed with an increasing dilation factor (doubling in size) in each (hidden) layer. by doing so, the model is capable of capturing an exponentially growing number of elements from the input sequence, depending on the number of hidden convolutional layers in the network. the number of captured sequence elements is called the receptive field of the network (and in this sense is equal to the receptive field defined for the neural network regime model). the convolutional neural network (cnn), due to its structure of stacked dilated convolutions, has a much greater receptive field than the simple feed forward network and needs far fewer weights to be trained.
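the receptive field arithmetic for stacked dilated causal convolutions can be sketched as follows (the helper name is ours); each layer with kernel size k and dilation d extends the field by (k − 1) · d input steps:

```python
def receptive_field(kernel_size, dilations):
    """receptive field of stacked dilated causal convolutions:
    each layer adds (kernel_size - 1) * dilation input steps."""
    return 1 + (kernel_size - 1) * sum(dilations)
```

with kernel size 3 and dilations doubling from 1 up to 64, this yields a 255-day receptive field, matching the exponential growth described above.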
figure 5 illustrates the network's basic structure as a combination of stacked causal convolutions with a dilation factor of d = 2; in the figure, the number of hidden layers is restricted to 3 to illustrate the idea, whereas our network structure has 7 hidden layers, and the channels each hidden layer exhibits are not visualized. the model presented in this investigation is inspired by wavenet; we restrict the model to the basic layout, using a causal structure and increasing dilation between layers. the output layer comprises the regime predictive distributions by applying a softmax function to the hidden layers' outputs. our network consists of 6 hidden layers, each layer having 3 channels. the convolutions each have a kernel size of 3. in total, the network exhibits 242 weights (including gmm parameters), and the receptive field has a size of 255 days. as graves (2013) was very successful in applying lstms for generating sequences, we also adapt this approach for the neural network regime switching model. originally introduced by hochreiter and schmidhuber (1997), a main characteristic of lstms, which are a subclass of recurrent neural networks, is their purpose-built memory cells, which allow them to capture long range dependencies in the data. from a model perspective, lstms differ from other neural network architectures in that they are applied recurrently (see figure 6): the output from a previous sequence step of the network function serves, in combination with the next sequence element, as input for the next application of the network function. in this sense, the lstm can be interpreted as being similar to an hmm, in that there is a hidden state which conditions the output distribution. however, the lstm hidden state not only depends on its previous states; it also captures long term sequence dependencies through its recurrent nature.
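to make the recurrence concrete, here is a minimal from-scratch lstm step using the standard hochreiter and schmidhuber gating; the weight shapes, gate ordering, and names are our own illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """one lstm step. x: input vector (D,), h: hidden state (H,), c: cell
    state (H,). W: (4H, D) input weights, U: (4H, H) recurrent weights,
    b: (4H,) bias. gate order assumed here: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # memory cell update
    h_new = sigmoid(o) * np.tanh(c_new)                # hidden state output
    return h_new, c_new
```

iterating this step over the return sequence produces the hidden state that conditions the regime probabilities, analogous to the hmm's hidden state but with unbounded memory of the past inputs.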
maybe most notably, the receptive field size of an lstm is not bounded architecture-wise as in the case of the simple feed forward network and the cnn. instead, the lstm's receptive field depends solely on the lstm's ability to memorize past input. in our architecture we have one lstm layer with a hidden state size of 5. in total, the model exhibits 236 parameters (including the gmm parameters). the potential of lstms was noted by cpz (2020: 6), who note that "lstms are designed to find patterns in time series data and … are among the most successful commercial ais".

3 assessment procedure

we obtain daily price data for stock and bond indices for three major global markets (i.e. eu, uk, us) to study the presented regime based neural network approaches on a variety of stock markets and bond markets. for each stock market, we focus on one major stock index. for bond markets, we further distinguish between long term bond indices (7-10 years) and short term bond indices (1-3 years). the markets in scope are the major regions introduced above. the data dates back to at least january 1990 and ends in august 2018, thus covering almost 30 years of market development. hence, the data also accounts for crises like the dot-com bubble in the early 2000s as well as the financial crisis of 2008. this is especially important for testing the regime based approaches. the price indices are given as total return indices (i.e. dividends treated as being reinvested) to properly reflect market development. the data is taken from refinitiv's datastream. descriptive statistics are displayed in table 1, whereby panel a displays a daily frequency and panel b a weekly frequency. mean returns for equities exceed the returns for bonds, whereby the longer bonds return more than the shorter ones. equities naturally have a much higher standard deviation and a far worse minimum return.
in fact, equity returns in all four regions lose substantially more money than bond returns even at the 25th percentile, which highlights that the holy grail of asset allocation is the ability to predict equity market drawdowns. furthermore, equity markets tend to be quite negatively skewed as expected, while short bonds experience a positive skewness, which reflects previous findings (albuquerque, 2012; kozhan et al., 2013) and the inherent differential in the riskiness of both assets' payoffs. [insert table 1 about here] the back testing is done on a weekly basis via a moving window approach. at each point in time, the respective model is fitted by providing the last 2,000 days (roughly 8 years) as training data. we choose this long window because neural networks are known to need big datasets as inputs, and it is reasonable to assume that a period of over eight years simultaneously includes times of (at least relative) crisis and times of market growth. covering both bull and bear markets in the training sample is crucial to allow the model to "learn" these types of regimes. for all our models we set the number of regimes to k = 2. as we back test an allocation strategy with a weekly re-allocation, we set the lookahead for the neural network regime models to 5 days. we further configured the back testing dates to always align with the end of a business week (i.e. fridays). the classic approach does not need any configuration; model fitting is the same as computing the sample mean and sample covariance of the asset returns within the respective window. the hmm also does not need any further configuration; the baum-welch algorithm is guaranteed to converge the parameters to a local optimum with respect to the likelihood function (baum, 1970). for the neural network regime models, additional data processing is required to learn network weights that lead to meaningful regime probabilities and distribution parameters.
an important pre-processing step is input normalization, as it is considered good practice for neural network training (bishop, 1995). for this purpose, we normalize the input data by y′ = (y − mean(y)) / var(y). in other words, we demean the input data and scale them by their variance, but without removing the interactions between the assets. we train the network using the adamax optimization algorithm (kingma & ba, 2014) while applying weight decay to reduce overfitting (krogh & hertz, 1992). the learning rate and number of epochs configured for training vary depending on the model. in general, estimating the parameters of a neural network model is a non-convex optimization problem. thus, the optimization algorithm might become stuck in an infeasible local optimum. in order to mitigate this problem, it is common practice to repeat the training multiple times, starting off with different (usually randomly chosen) parameter initializations, and then averaging over the resulting models or picking the best in terms of loss. in this paper, we follow a best-out-of-5 approach; that means each training is done five times with varying initialization and the best one is selected for simulation. the initialization strategy, which we will show in chapter 4.1, further mitigates this problem by starting off from an economically reasonable parameter set. we observe that the in-sample regime probabilities learned by the neural network regime switching models, as compared to those estimated by the hmm based regime switching model, generally show comparable results in terms of distribution and temporal dynamics. when we set k = 2, the model fits two regimes, nearly invariably with one having a positive corresponding equity mean and low volatility, and the other a low or negative equity mean and high volatility. these regimes can be interpreted as bull and bear market, respectively.
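the two procedural steps above can be sketched directly; note that, as stated in the text, the inputs are scaled by their variance rather than the more common standard deviation, and the function names are ours:

```python
import numpy as np

def normalize_inputs(returns):
    """demean each asset and scale by its variance (as described in the text),
    leaving cross-asset interactions untouched. returns: (T, n) array."""
    mu = returns.mean(axis=0)
    var = returns.var(axis=0)
    return (returns - mu) / var

def best_of_five(train_once, seeds=(0, 1, 2, 3, 4)):
    """repeat training with different random initializations and keep the
    model with the lowest loss. train_once(seed) -> (model, loss)."""
    results = [train_once(s) for s in seeds]
    return min(results, key=lambda r: r[1])[0]
```

`train_once` here is a placeholder for one full fitting run; selecting by loss over five seeds mirrors the best-out-of-5 strategy described above.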
the respective in-sample regime probabilities over time also show strong alignment with growth and drawdown phases. this holds true for the vast majority of seeds and hence indicates that the neural network regime model is a valid practical alternative for regime modelling when compared to a hidden markov model. after training the model for a specific point in time, we start a monte carlo simulation of asset returns for the next 5 days (one week, monday to friday). for the purpose of calculating statistically solid quantiles of the resulting distribution, we simulate 100,000 paths for each model. we do this for at least 1,093 (emu) and at most 1,250 (globally) points in time within the back-test history window. as soon as we have simulated all return paths, we calculate a total (weekly) return for each path. the generated weekly returns follow a non-trivial distribution, which arises from the respective model and its underlying temporal dynamics. based on the simulations we compute quantiles for value at risk estimations. for example, the 0.01 and 0.05 quantiles of the resulting distribution represent the 99% and 95% 5-day var metrics, respectively. we evaluate the quality of our value at risk estimations by counting the number of breaches by the actual asset returns. in case the actual return is below the estimated var threshold, we count this as a breach. assuming an average performing model, it is, for example, reasonable to expect 5% breaches for a 95% var measurement. we compare the breaches of all models with each other. we classify a model as superior to another model if its number of var breaches is less than that of the compared model. a comparison value comp = 1.0 (= 0.0) indicates that the row model is superior (inferior) to the column model. we performed significance tests by applying paired t-tests.
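the var estimation and breach counting described above can be sketched as follows (function names are ours):

```python
import numpy as np

def var_threshold(simulated_weekly_returns, level=0.05):
    """value-at-risk threshold as the empirical quantile of the simulated
    weekly returns (level 0.05 -> 95% var, 0.01 -> 99% var)."""
    return np.quantile(simulated_weekly_returns, level)

def breach_rate(realized, thresholds):
    """fraction of weeks where the realized return fell below the var
    threshold estimated for that week."""
    realized, thresholds = np.asarray(realized), np.asarray(thresholds)
    return (realized < thresholds).mean()
```

for a well calibrated 95% var model, `breach_rate` over the back-test should be close to (and ideally below) 0.05, which is exactly the property tested in the results tables.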
we further evaluate a dominance value, which is defined as shown in equation (11). in our view, the three most crucial design features of neural networks in finance (where the sheer number of hidden layers appears less helpful due to the low signal-to-noise ratio; gkx, 2020) are: the amount of input data, the initializing information, and the incentive function. big input data is important for neural networks, as they need to consume sufficient evidence also of rarer empirical features to ensure that their nonlinear abilities in fitting virtually any functional form are used in a relevant instead of an exotic manner. similarly, the initialization of input parameters should be based as much as possible on empirically established estimates to ensure that the gradient descent inside the neural network takes off from a suitable point of departure, thereby substantially reducing the risk that the neural network confuses itself into irrelevant local minima. on the output side, every neural network is trained according to an incentive (i.e. loss) function. it is this particular loss function which determines the direction of travel for the neural network, which has no other ambition than to minimize its loss as well as possible. hence, if the loss function only represents one of several practically relevant parameters, the neural network may arrive at bizarre outcomes for those parameters not included in its incentive function. in our case, for instance, the baseline incentive is just estimation accuracy, which could lead to forecasts dominated much more by a single regime than ever observed in practice. in other words, after a long bull market, the neural network could "conclude" that bear markets do not exist. metaphorically spoken, a unidimensional loss function in a neural network has little decency (marcus, 2018).
commencing with the initialization and the incentive functions, we will assess our three neural networks in the following vis-à-vis the classic and hmm approaches, where each of the three networks is once displayed with an advanced design feature and once with a naïve design feature. if no specific initialization strategy for neural networks is defined, initialization occurs entirely at random via computer generated random numbers. where established econometric approaches use naïve priors (i.e. the mean), neural networks originally relied on brute force computing power and a bit of luck. hence, it is unsurprising that initializations are nowadays a common research topic in core machine learning fields such as image classification or machine translation (glorot & bengio, 2010). however, we are not aware of any systematic application of initialized neural networks in the field of finance. hence, we compare naïve neural networks, which are not initialized, with neural networks that have been initialized with the best available prior. in our case, the best available prior for (μ, Σ) of the model is the equivalent hmm estimation based on the same window. such initialization is feasible, since the structure of the neural network (due to its similarity with respect to (μ, Σ)) is broadly comparable with the hmm. in other words, we make use of already trained parameters from the hmm training as starting parameters for the neural network training. in this sense, initialized neural networks are not only flexible in their functional form, they are also adaptable to "learn" from the best established model in the field if suitably supervised by the human data scientists. metaphorically spoken, our neural networks can stand on the shoulders of the giant that the hmm is for regime based estimations. table 2 presents the results by comparing breaches between the two classic approaches (mean/variance, hmm) and the non-initialized and hmm-initialized neural networks across all four regions.
panels a and b display the 1% var threshold for equities and long bonds, respectively, while panels c and d show the equivalent comparison for 5% var thresholds. note that for model training we apply a best-out-of-5 strategy as described in section 3.2; that means we repeat the training five times, starting off with random parameter initializations each time. in the case of the presented hmm-initialized model, we apply the same strategy, with the exception that (μ, Σ) of the model are initialized the same for each of the five iterations. all residual parameters are initialized randomly as fits best according to the neural network part of the model. three findings are observable: first, not a single var threshold estimation process in a single region and in either of the two asset classes was able to uphold its promise that an estimated 1% var threshold should be breached no more than 1% of the time. this is very disappointing and quite alarming for institutional investors such as pension funds and insurers, since it implies that all approaches, established and machine learning based, fail to sufficiently capture downside tail risks and hence underestimate 1% var thresholds. the vast majority of approaches estimate var thresholds that are breached in more than 2% of the cases, and the lstm fails entirely if not initialized. in fact, even the best method, the hmm for us equities, estimates var thresholds which are breached in 1.34% of the cases. second, when inspecting the ability of our eight methods to estimate 5% var thresholds, the result remains bad but is less catastrophic. the mean/variance approach, the hmm and the initialized lstm display cases where their var thresholds were breached in less than the expected 5% of cases. the mean/variance and hmm approaches make their thresholds in 3 out of 8 cases and the initialized lstm in 1 out of 8. overall, this is still a disappointing performance, especially for the feed forward neural network and the cnn.
1 even though we initialize (μ, Σ) from hmm parameters, we still have weights to be initialized arising from the temporal neural network part of the model. we do this on a per layer level by sampling uniformly from U(−1/√i, 1/√i), where i is the number of input units for this layer.
2 we focus our discussion of results on equities and long bonds since these have more variation, lower skewness and hence risk. results for the short bonds are available upon request from the contact author.

third, when comparing the initialized with the non-initialized neural networks, the performance differs like day and night. the non-initialized neural networks always perform worse, and the lstm performs entirely dismally without a suitable prior. when comparing across all eight approaches, the hmm appears most competitive, which means that we either have to further advance the design of our neural networks or their marginal value add beyond classic econometric approaches appears nonexistent. to advance the design of our neural networks further, we aim to balance their utility function to avoid the extreme unrealistic results possible in the univariate case. [insert table 2 about here] whereas cpz (2020) regularize their neural networks via no-arbitrage conditions, we regularize via balancing the incentive function of our neural networks on multiple objectives. specifically, we extend the loss function to not only focus on the accuracy of point estimates but also give some weight to eventually achieving empirically realistic regime distributions (i.e. in our data sample across all four regions no regime displays more than 60% frequency on a weekly basis). this balanced extension of the loss function prevents the neural networks from arriving at bizarre outcomes such as the conclusion that bear markets (or even bull markets) barely exist.
technically, such bizarre outcomes result from cases where the regime probabilities φ_i(t) tend to converge globally to either 0 or 1 for all t, which basically means the neural network only recognises one regime. to balance the incentive function of the neural network and facilitate balancing between regime contributions, we introduce an additional regularization term reg into the loss function which penalizes unbalanced regime probabilities. the regularization term is displayed in equation (13) below. if bear and bull market have equivalent regime probabilities, the term converges to 0.5, while it converges towards 1 the larger the imbalance between the two regimes. substituting equation (13) into our loss function of equation (10) leads to equation (14) below, which doubles the point estimation based standard loss function in case of total regime imbalance but adds only 50% of the original loss function in case of full balance. conditioning the extension of the loss function on its origin is important to avoid biases due to diverging scales. setting the additional incentive function to initially have half the marginal weight of the original function also seems appropriate for comparability. the outcomes of balancing the incentive functions of our neural networks are displayed in table 3, where panels a-d are distributed as previously in table 2. the results are very encouraging, especially with regard to the lstm. the regularized lstm is in all 32 cases (i.e. 2 thresholds, 2 asset classes, 4 regions) better than the non-regularized lstm. for the 5% var thresholds, it reaches realized occurrences of less than 4% in half the cases. this implies that the regularized lstm can even be more cautious than required.
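the exact forms of equations (13) and (14) are not reproduced in this excerpt; the sketch below uses one form that is consistent with the stated properties (reg = 0.5 at perfect balance, approaching 1 under full imbalance, and a final loss that doubles the base loss under total imbalance while adding 50% under full balance). this specific functional form is our assumption:

```python
import numpy as np

def balance_regularizer(phi):
    """phi: (T, 2) regime probabilities over time. returns 0.5 when the two
    regimes contribute equally on average, approaching 1.0 when one regime
    dominates. assumed form; the paper's equation (13) is not shown here."""
    avg = np.asarray(phi).mean(axis=0)   # average contribution per regime
    return float((avg ** 2).sum())

def balanced_loss(base_loss, phi):
    """equation-(14)-style combination (assumed form): doubles the base loss
    under total imbalance, adds 50% of it under perfect balance."""
    return base_loss * (1.0 + balance_regularizer(phi))
```

scaling the penalty by the base loss itself mirrors the point made above about avoiding biases due to diverging scales between the two objectives.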
the regularized lstm also sets a new record for the 1% var threshold. [insert table 4 about here] to measure how much value the regularized lstm can add compared to alternative approaches, we compute the annual accumulated costs of breaches as well as the average cost per breach. they are displayed in table 5 for the 5% var threshold. the regularized lstm is on both measures in every case better than the classic approaches (mean/variance and hmm), and the difference is economically meaningful. for equities, the regularized lstm results in annual accumulated costs 97-130 basis points lower than the classic mean/variance approach, which would be over one billion us$ of avoided losses per annum for a > us$100 billion equity portfolio of a pension fund such as calpers or pggm. compared to the hmm approach, the regularized lstm avoids annual accumulated costs of 44-88 basis points, which is still a substantial amount of money for the vast majority of asset owners. with respect to long bonds, where total returns are naturally lower, the regularized lstm's avoided annual costs against the mean/variance and hmm approaches range between 23-30 basis points, which is high for bond markets. [insert table 5 about here] these statistically and economically attractive results have been achieved, however, based on 2,000 days of training data. such "big" amounts of data may not always be available for newer investment strategies. hence, it is natural to ask whether the performance of the regularized neural networks drops when fed with just half the data (i.e. 1,000 days). apart from reducing statistical power, a period of just over 4 years may also comprise less information on downside tail risks. indeed, the results displayed in table 6 show that in all contexts of var thresholds and asset classes, the regularized networks trained on 2,000 days substantially outperform and usually dominate their equivalently designed counterparts trained on half the data.
hence, the attractive risk management features of hmm-initialized, balanced incentive lstms are likely only available for established discretionary investment strategies where sufficient historical data is available, or for entirely rules-based approaches whose history can be replicated ex-post with sufficient confidence. [insert table 6 about here] we further conduct an array of robustness tests and sensitivity analyses to challenge our results and the applicability of neural network based regime switching models. as a first robustness test, we extend the regularization such that the balancing incentive function of equation (13) has the same marginal weight as the original loss function, instead of just half the marginal weight. the performance of both types of regularized lstms is essentially equivalent. second, we study higher var thresholds such as 10% and find the results to be very comparable to the 5% var results. third, we estimate monthly instead of weekly var. accounting for the loss of statistical power in comparison tests due to the lower number of observations, the results are equivalent again. we conduct two sensitivity analyses. first, we set up our neural networks to be regularized by the balancing incentive functions but without hmm initialization. the results show that the regularization enhances performance compared to the naïve non-regularized and non-initialized models, but that both design features are needed to achieve the full performance. in other words, initialization and regularization seem to be additive design features in terms of neural network performance. second, we run the analytical approaches with k > 2 regimes. adding a third or even fourth regime when asset prices only know two directions leads to substantial instability in the neural networks and tends to depreciate the quality of results.
inspired by gkx (2020) and cpz (2020), we find our hmm-initialized, balanced incentive lstm to outperform the single incentive rnn as well as any other neural network or established approach by statistically and economically significant levels. third, we halve our training data set of 2,000 days. we find our networks, when fed with substantially less data (i.e. 1,000 days), to perform significantly worse, which highlights a crucial weakness of neural networks: their dependence on very large data sets. hence, we conclude that well designed neural networks, i.e. a recurrent lstm neural network initialized with the best current evidence and balanced incentives, can potentially advance the protection offered to institutional investors by var thresholds through a reduction in threshold breaches. however, such advancements rely on the availability of a long data history, which may not always be available in practice when estimating asset management var thresholds.

table 1: descriptive statistics of the daily returns of the main equity index (equity), the main sovereign bond with (short) 1-3 years maturity (sb1-3y) and the main sovereign bond (long) with 7-10 years maturity (sb7-10). descriptive statistics include sample length, the first three moments of the return distribution and 11 thresholds along the return distribution.

references:
- risks and portfolio decisions involving hedge funds
- skewness in stock returns: reconciling the evidence on firm versus aggregate returns
- can machines learn capital structure dynamics? working paper
- international asset allocation with regime shifts
- machine learning, human experts, and the valuation of real assets
- machine learning versus economic restrictions: evidence from stock return predictability
- a maximization technique occurring in the statistical analysis of probabilistic functions of markov chains
- bond risk premia with machine learning
- value-at-risk: a multivariate switching regime approach
- econometric measures of connectedness and systemic risk in the finance and insurance sectors
- neural networks for pattern recognition
- deep learning in asset pricing
- subsampled factor models for asset pricing: the rise of vasa
- microstructure in the machine age
- towards explaining deep learning: significance tests for multi-layer perceptrons
- asset pricing with omitted factors
- how to deal with small data sets in machine learning: an analysis on the cat bond market
- understanding the difficulty of training deep feedforward neural networks
- generating sequences with recurrent neural networks
- autoencoder asset pricing models
- much ado about nothing? exchange rate forecasting: neural networks vs. linear models using monthly and weekly data
- long short-term memory
- towards explainable ai: significance tests for neural networks
- improving earnings predictions with machine learning. working paper
- jorion, p. value at risk
- characteristics are covariances: a unified model of risk and return
- adam: a method for stochastic optimization
- shrinking the cross-section
- the skew risk premium in the equity index market
- a simple weight decay can improve generalization
- advances in financial machine learning
- deep learning: a critical appraisal
- frontiers in var forecasting and backtesting
- dynamic semiparametric models for expected shortfall (and value-at-risk)
- maschinelles lernen bei der entwicklung von wertsicherungsstrategien.
zeitschrift für das gesamte kreditwesen
- deep learning for mortgage risk
- forecasting value at risk and expected shortfall using a semiparametric approach based on the asymmetric laplace distribution
- forecast combinations for value at risk and expected shortfall
- moments of markov switching models
- verstyuk, s. 2020. modeling multivariate time series in economics: from auto-regressions to recurrent neural networks. working paper
- fixup initialization: residual learning without normalization. international conference on learning representations (iclr) paper

acknowledgments: we are grateful for comments from theodor cojoianu, james hodson, juho kanniainen, qian li, yanan, andrew vivian, xiaojun zeng and participants at the 2019 financial data science association conference in san francisco and the international conference on fintech and financial data science at university college dublin (ucd). the views expressed in this manuscript are not necessarily shared by sociovestix labs, the technical expert group of dg fisma or warburg invest ag. authors are listed in alphabetical order, whereby hoepner serves as the contact author (andreas.hoepner@ucd.ie). any remaining errors are our own.
joakim; mcewen, scott a.; ryan, james j.; schönfeld, jens; silley, peter; snape, jason r.; van den eede, christel; topp, edward title: human health risk assessment (hhra) for environmental development and transfer of antibiotic resistance date: 2013-07-09 journal: environ health perspect doi: 10.1289/ehp.1206316 sha: doc_id: 1064 cord_uid: 59i3jert background: only recently has the environment been clearly implicated in the risk of antibiotic resistance to clinical outcome, but to date there have been few documented approaches to formally assess these risks. objective: we examined possible approaches and sought to identify research needs to enable human health risk assessments (hhra) that focus on the role of the environment in the failure of antibiotic treatment caused by antibiotic-resistant pathogens. methods: the authors participated in a workshop held 4–8 march 2012 in québec, canada, to define the scope and objectives of an environmental assessment of antibiotic-resistance risks to human health. we focused on key elements of environmental-resistance-development “hot spots,” exposure assessment (unrelated to food), and dose response to characterize risks that may improve antibiotic-resistance management options. discussion: various novel aspects to traditional risk assessments were identified to enable an assessment of environmental antibiotic resistance. these include a) accounting for an added selective pressure on the environmental resistome that, over time, allows for development of antibiotic-resistant bacteria (arb); b) identifying and describing rates of horizontal gene transfer (hgt) in the relevant environmental “hot spot” compartments; and c) modifying traditional dose–response approaches to address doses of arb for various health outcomes and pathways. conclusions: we propose that environmental aspects of antibiotic-resistance development be included in the processes of any hhra addressing arb. 
because of limited available data, a multicriteria decision analysis approach would be a useful way to undertake an hhra of environmental antibiotic resistance that informs risk managers. citation: ashbolt nj, amézquita a, backhaus t, borriello p, brandt kk, collignon p, coors a, finley r, gaze wh, heberer t, lawrence jr, larsson dg, mcewen sa, ryan jj, schönfeld j, silley p, snape jr, van den eede c, topp e. 2013. human health risk assessment (hhra) for environmental development and transfer of antibiotic resistance. environ health perspect 121:993–1001; http://dx.doi.org/10.1289/ehp.1206316 a workshop (antimicrobial resistance in the environment: assessing and managing effects of anthropogenic activities), held in march 2012 in québec, canada, focused on antibiotic resistance in the environment and approaches to assessing and managing effects of anthropogenic activities. the human health concern was identified as environmentally derived antibiotic-resistant bacteria (arb) that may adversely affect human health (e.g., reduced efficacy in clinical antibiotic use, more serious or prolonged infection) either by direct exposure of patients to antibiotic-resistant pathogen(s) or by exposure of patients to resistance determinants and subsequent horizontal gene transfer (hgt) to bacterial pathogen(s) on or within a human host, as conceptualized in figure 1. arb hazards develop in the environment as a result of direct uptake of antibiotic-resistance genes (arg) via various mechanisms (e.g., mobile genetic elements such as plasmids, integrons, gene cassettes, or transposons) and/or proliferate under environmental selection caused by antibiotics and co-selecting agents such as biocides, toxic metals, and nanomaterial stressors (qiu et al. 2012; taylor et al. 2011), or by gene mutations (gillings and stokes 2012).
depending on the presence of recipient bacteria, these processes generate either environmental antibiotic-resistant bacteria (earb) or pathogens with antibiotic resistance (parb) (figure 1). human health risk assessment (hhra) is the process used to estimate the nature and probability of adverse health effects in humans who may be exposed to hazards in contaminated environmental media, now or in the future [u.s. environmental protection agency (epa) 2012]. in this review we focus on how to apply hhra to the risk of infections with pathogenic arb because they are an increasing cause of morbidity and mortality, particularly in developing regions. an antimicrobial-resistant microorganism has the ability to multiply or persist in the presence of an increased level of an antimicrobial agent compared with a susceptible counterpart of the same species. for this review, we limited the resistant group of microorganisms to bacteria and therefore to antibiotic resistance, an area in which the term "antibiotic" is used synonymously with "antibacterial." it is important to understand the contribution of the environment to the development of resistance in both human and animal pathogens, because resistant infections may lead to longer hospitalization, longer treatment time, failure of treatment therapy, the need for treatment with more toxic or costly antibiotics, and an increased likelihood of death. a vast amount of work has been undertaken to understand the contribution and roles played by hospital and community settings in the dissemination and maintenance of arb infections in humans. a particular area of focus in terms of exposure in a community setting has been antibiotic use in livestock production and the presence of earb and parb in food of animal origin.
in 2011, the codex alimentarius commission [established in 1963 by the food and agriculture organization of the united nations (fao) and the world health organization (who) to harmonize international food standards, guidelines, and codes of practice to protect the health of consumers and ensure fair trade practices in the food trade] released guidelines on processes and methodologies for applying risk analysis methods to foodborne antimicrobial resistance related to the use of antimicrobials in veterinary medicine and agriculture (codex alimentarius commission 2011). other sources of antibiotics and other antimicrobials in the environment are human sewage (dolejska et al. 2011), intensive animal husbandry, and waste from the manufacture of pharmaceuticals (larsson et al. 2007). the environmental consequences of the use and release of antibiotics from various sources (kümmerer 2009a, 2009b) and the hgt of antibiotic-resistance genes (arg) between indigenous environmental and pathogenic bacteria and their resistance determinants (börjesson et al. 2009; chagas et al. 2011; chen et al. 2011; cummings et al. 2011; forsberg et al. 2012; gao et al. 2012; qiu et al. 2012) have yet to be quantified, but are of global concern (finley et al. 2013; who 2012a). the genetic elements encoding the ability of microorganisms to withstand the effects of an antimicrobial agent are located either chromosomally or extrachromosomally and may be associated with mobile genetic elements such as plasmids, integrons, gene cassettes, or transposons, thereby enabling horizontal and vertical transmission from resistant to previously susceptible strains. from an hhra point of view, the emergence of arb in source and drinking water (de boeck et al. 2012; isozumi et al. 2012; shi et al. 2013) further highlights the need to place these emerging environmental risks in perspective.
yet, assessing the range of environmental contributions to antibiotic resistance may be complicated not only by a lack of quantitative data but also by the need to coordinate efforts across different agencies that may have jurisdiction over environmental risks versus human and animal health. a key consideration for arb development in the environment is that resistance genes can be present due to natural occurrence (d'costa et al. 2011). further, the use of antimicrobials in crops, animals, and humans provides a continued entry of antibiotics to the environment, along with possible novel genes and arb. a summary of the fate, transport, and persistence of antibiotics and resistance genes after land application of waste from food animals that received antibiotics, or following outflow to surface water from sewage treatment, has emphasized the need to better understand the environmental mechanisms of genetic selection and gene acquisition as well as the dynamics of resistance genes (resistome) and their bacterial hosts (chee-sanford et al. 2009; cytryn 2013). for example, the presence of antibiotic residues in water from pharmaceutical manufacturers in certain parts of the world (fick et al. 2009), ponds receiving intensive animal wastes (barkovskii et al. 2012), aquaculture waters (shah et al. 2012), and sewage outfalls (dolejska et al. 2011) are important sources, among others, leading to the presence of arg in surface waters. in particular, the comparatively high concentrations of antibiotics found in the effluent of pharmaceutical production plants have been associated with an increased presence of arg in surface waters (kristiansson et al. 2011; li et al. 2009, 2010). most recently, 100% sequence identity of arg from a diverse set of clinical pathogens and common soil bacteria (forsberg et al. 2012) has highlighted the potential for environmental hgt between earb and parb.
despite these concerns, few risk assessments have evaluated the combined impacts of antibiotics, arg, and arb in the environment on human and animal health (keen and montforts 2012). recent epidemiological studies have included evaluation of arb in drinking water and the susceptibility of commensal escherichia coli in household members. for example, coleman et al. (2012) reported that water, along with other factors not directly related to the local environment, accounted for the presence of resistant e. coli in humans. in many studies, native bacteria in drinking water systems have been shown to accumulate arg (vaz-moreira et al. 2011). in addition to addressing environmental risks arising from the development of antibiotic resistance, we should also consider the low-probability but high-impact "one-time event" type of risk. this exceedingly rare event, which results in the transfer of a novel (to clinically important bacteria) resistance gene from a harmless environmental bacterium to a pathogen, need happen only once if a human is the recipient of the novel parb. unlike the emergence of sars (severe acute respiratory syndrome) and similar viruses, where, in hindsight, the risk factors are now well understood (swift et al. 2007), the conditions for a "one-time event" could occur in a range of "normal" habitats. once developed, the resistant bacterium/gene has the possibility to spread between humans around the world [as seen with the spread of ndm-1 (new delhi metallo-beta-lactamase-1) resistance (wilson and chen 2012)], promoted by our use of antibiotics.
although it seems very difficult to quantify the probability of such a rare event (including assessing the probability of where it will happen and when), there is considerable value in trying to identify the risk factors (such as pointing out critical environments for hgt to occur, or identifying pharmaceutical exposure levels that could cause selection pressures and hence increase the abundance of a given gene). after such a critical hgt event, we may then move into a more quantitative kind of hhra. the overall goal of the workshop (antimicrobial resistance in the environment: assessing and managing effects of anthropogenic activities) was to identify the significance of arb within the environment and to map out some of the complexities involved in order to identify research gaps and provide statements on the level of scientific understanding of various arb issues. a broad range of international delegates, including academics, government regulators, industry members, and clinicians, discussed various issues. the focus of this review arose from discussions of improving our understanding of human health risks, in addition to epidemiological studies, by developing hhras to explore potential risks and inform risk management. because the end goal of an assessment depends on the context (e.g., research, regulation), we provide a generic approach to undertaking an hhra of environmental arb that can be adapted to the users' interest (conceptualized in figure 1). given the many uncertainties, we also highlight identified research gaps. understanding other ongoing relevant international activities and the types of antibiotics used provides good starting points to aid in framing a risk assessment of arb. the codex alimentarius commission (2011) described eight principles that are specific to risk analysis for foodborne antimicrobial resistance, several of which are generally applicable to an hhra of environmental arb.
examples include the recommendations of the joint fao/who/oie expert meeting on critically important antimicrobials (food and agriculture organization of the united nations/world health organization/world organisation for animal health 2008) and the who advisory group on integrated surveillance of antimicrobial resistance (who 2012b), which provided information for setting the priority antibiotics for a human risk assessment. it should be noted that there are significant national and regional differences in antibiotic use, resistance patterns, and human exposure pathways. in general, risk assessments are framed by identifying risks and management goals, so the assessment informs the need for possible management options and enables evaluation of management success. the consensus of workshop participants was that management could best be applied at points of antibiotic manufacturing and use, agricultural operations including aquaculture, and wastewater treatment plants (pruden et al. 2013). assessing the relative impact of managing any particular part of a system is hampered by the lack of knowledge on the relative importance of each part of the system for the overall risk. that is, as recently stated by the who (2013), "amr is a complex problem driven by many interconnected factors so single, isolated interventions have little impact and coordinated actions are required." hence, a starting point for an assessment of environmental antibiotic-resistance risks intended to aid risk management is a theoretical risk assessment pathway based on a) local surveillance data on the occurrence and types of antibiotics used in human medicine, crop production, animal husbandry, and companion animals; b) information on arg and arb in the various environmental compartments (in particular, soil and aquatic systems including drinking water); and c) related disease information.
this assessment should be amended by discussion with the relevant stakeholders, which requires extensive risk communication and could form part of the multicriteria decision analysis (mcda) approach discussed in detail below. as a result of the workshop, pruden et al. (2013) also advocate coupling environmental management and mitigation plans with targeted surveillance and monitoring efforts in order to judge the relative impact and success of the interventions. to undertake a useful human health risk assessment, some details require quantitative measures. thus, the key issue is how experimental and modeling approaches can be used to derive estimates. furthermore, hazard concentration, time, and environmental compartment-dependent aspects should also be taken into account. first, the current understanding is that for non-mutation-derived antibiotic resistance in environmental bacteria (including pathogens that may actively grow outside of hosts) to develop into earb/parb (figure 1, processes 1 and 2), a selective pressure (i.e., presence of antibiotics or antibiotic-resistance determinants) must be maintained over time in the presence of arg; for existing parb released into the environment, survival in environmental media is the critical factor. however, the exact mechanisms and quantitative relationships between selective pressures and arb development have yet to be elucidated, and they may differ depending on the antibiotic, bacterial species, and resistance mechanisms involved. in cases where selective pressure is removed, the abundance of arb may be reduced, but not to extinction (hughes 2010, 2011; cottell et al. 2012). even a small number of arb at the community level represents a reservoir of arg for horizontal transfer once pressure is reapplied.
because it seems inevitable that arb will eventually develop against any antibiotic (levy and marshall 2004), the key management aim seems to be to delay and confine such development as much as possible. second, a robust quantitative risk assessment will require rates of hgt and/or gene mutations in the relevant compartments (figure 1, processes 3-5) to be described for different combinations of donating earb strains and receiving parb strains. the lack of quantitative estimates for mutation/hgt of arg is a major data gap. third, traditional microbial risk assessment dose-response approaches (figure 1, processes 6 and 8) could be used to address the likelihood of infection [codex alimentarius commission 2011; u.s. epa and u.s. department of agriculture/food safety and inspection service (usda/fsis) 2012], but the novel aspect required here, in addition to hgt and arb selection, would be to address quantitative dose-response relationships for earb (in the presence of a sensitive pathogen in or on a human) (figure 1, processes 3 and 6). importantly, the key difference from traditional hhra undertaken in some jurisdictions is that it is essential to include environmental processes to fully assess human risks associated with antibiotic resistance.
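the hgt rates identified above as a major data gap are the central parameter of simple mass-action conjugation models, in which transconjugants form in proportion to donor-recipient encounters. the sketch below is purely illustrative: the function name, the transfer coefficient `gamma`, and all starting densities are assumed placeholder values, not estimates from the workshop.

```python
def simulate_hgt(donors, recipients, gamma, growth, dt=0.01, t_end=24.0):
    """Forward-Euler integration of a minimal mass-action conjugation model.

    Transconjugants form at rate gamma * D * R (a simplification: donors
    stay donors, and transconjugants are not allowed to donate in turn).
    Units: cells/mL for densities, mL/(cell*h) for gamma, 1/h for growth.
    """
    d, r, trans = donors, recipients, 0.0
    for _ in range(int(t_end / dt)):
        new_trans = gamma * d * r * dt   # transfer events this time step
        d += growth * d * dt             # unlimited exponential growth
        r += growth * r * dt - new_trans
        trans += growth * trans * dt + new_trans
    return d, r, trans

# Illustrative run: 1e6 donors/mL and recipients/mL, an assumed transfer
# coefficient of 1e-12 mL/(cell*h), growth at 0.3/h for 24 h.
d, r, t = simulate_hgt(1e6, 1e6, 1e-12, 0.3)
print(f"transconjugants after 24 h: {t:.1f} per mL")
```

because plausible values of `gamma` span many orders of magnitude across compartments, rate constants measured in the relevant "hot spot" environments would slot directly into a model of this shape.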
therefore, the type of information that should be documented for a human health-oriented risk assessment of environmental arb includes the following [adapted from codex alimentarius commission (2011)]:
• clinical and environmental surveillance programs for antibiotics, arb, and their determinants, with a focus on regional data reporting the types and use of antibiotics in human medicine, crops, and commercial and companion animals, as well as globally where crops and food animals are produced
• epidemiological investigations of outbreaks and sporadic cases associated with arb, including clinical studies on the occurrence, frequency, and severity of arb infections
• identification of the selection pressures (time and dose of selecting/co-selecting agents) required to select for resistance in different environments, and subsequent hgt to human-relevant bacteria, both based on reports describing the frequency of hgt and uptake of arg into environmental bacteria, including environmental pathogens, in previously identified hot spots
• human, laboratory, and/or field animal/crop trials addressing the link between antibiotic use and resistance (particularly regional data)
• investigations of the characteristics of arb and their determinants (ex situ and in situ)
• studies on the link between resistance, virulence, and/or ecological fitness (e.g., survivability or adaptability) of arb
• studies on the environmental fate of antibiotic residues in water and soil and their bioavailability associated with the selection of arb in any given environmental compartment, animal, or human host resulting in parb
• existing risk assessments of arb and related pathogens.
in summary, many sources of data are required to undertake a human health risk assessment for environmental arb, and much of the data may be severely limited (particularly for a quantitative assessment).
thus, the final risk assessment report should emphasize the importance of the evidence trail and weight of evidence for each finding. furthermore, when models are constructed, previously unused data sets should be considered for model verification where possible. human health risk assessment of antibiotics in the environment builds on traditional chemical risk assessments (national research council 1983), starting, for example, with an acceptable daily intake (adi) based on resistance data (vich steering committee 2012). a corresponding metric for environmental antibiotic concentration could be developed based on the concept of the minimum selective concentration (msc) (gullberg et al. 2011), defined as the minimum concentration of an antibiotic agent that selects for resistance. unlike the traditional chemical risk assessment approach, with the msc assay it would be necessary to address the human health effects arising from arg and the resistance determinants that give rise to arb, including resistance associated with mutations (figure 1, processes 1 and 2). in the absence of specific data, an msc assay could inform a risk assessor of the selective concentration of a pharmaceutical or complex mixture of compounds in a matrix of choice, allowing description of thresholds for significant arb development. pathogen risks may be evaluated through microbial risk assessment (mra), a structured, systematic, science-based approach that builds on the chemical risk assessment paradigm; the mra involves a) problem formulation (describing the hazards, risk setting, and pathways), b) exposure assessment of the hazard (arb, arg), c) dose-response assessment that quantifies the relationship between hazard dose and parb infection in humans (figure 1, processes 6 and 7), and d) combination of these procedures to characterize risk for the various pathways of exposure to the pathogens identified to be assessed.
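as a minimal illustration of how an msc could be used in screening, the sketch below compares a measured environmental concentration (mec) against an assumed msc, in the style of a chemical hazard quotient; every number and compartment name is a placeholder, not a measured value or a threshold endorsed by the workshop.

```python
def selection_hazard_quotient(mec_ug_per_l, msc_ug_per_l):
    """HQ = MEC / MSC; HQ > 1 flags a compartment where the antibiotic
    concentration could plausibly select for resistance."""
    return mec_ug_per_l / msc_ug_per_l

# (MEC, assumed MSC) pairs in ug/L; all values are illustrative.
compartments = {
    "pharmaceutical plant effluent": (50.0, 8.0),
    "sewage treatment outflow":      (2.0, 8.0),
    "river downstream":              (0.1, 8.0),
}

for name, (mec, msc) in compartments.items():
    hq = selection_hazard_quotient(mec, msc)
    flag = "potential selection hot spot" if hq > 1 else "below assumed selective threshold"
    print(f"{name}: HQ = {hq:.2f} ({flag})")
```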
an mra is used qualitatively or quantitatively to evaluate the level of exposure and subsequent risk to human health from microbiological hazards. in the context of antibiotic-resistant microorganisms, environmental mra is in its infancy but is needed to address resistant bacteria and/or their determinants. the mra was originally developed for fecal pathogen hazards in food and water [ilsi (international life sciences institute) 1996], with more recent modifications to include biofilm-associated environmental pathogens such as legionella pneumophila (schoen and ashbolt 2011). some human pathogens can grow in the environment (and may become parb; figure 1, processes 1 and 2), and many will infect only compromised individuals (generally termed opportunistic pathogens). over the past 20 years, the mra has largely evolved through input from the international food safety community, and it is now a well-recognized and accepted approach for food safety risk analysis. in 1999, the codex alimentarius adopted the principles and guidelines for the conduct of microbiological risk assessment (cac/gl30) (codex alimentarius commission 2009). the most recent codex alimentarius guidelines for risk analysis of foodborne antimicrobial resistance include eight principles (codex alimentarius commission 2011), and in the united states, mra guidelines for food and water (u.s. epa and usda/fsis 2012) continue to use the four-step framework originally described for chemical risk assessment. several arb risk assessments have been published and reviewed in recent years (geenen et al. 2010; mcewen 2012; snary et al. 2004). however, nearly all of these studies focus on foodborne transmission; human health risk assessments dealing with arb transmission via various environmental routes or direct contact with arg are sparse. for example, geenen et al.
(2010) studied extended-spectrum beta-lactamase (esbl)-producing bacteria and identified the following risk factors: previous admission to healthcare facilities, use of antimicrobial drugs, travel to high-endemic countries, and having esbl-positive family members. the authors concluded that an environmental risk assessment would be helpful in addressing the problem of esbl-producing bacteria but that none had been performed. hazard identification and hazard characterization. unfortunately, we are unaware of data that quantitatively link arg uptake and human health effects (figure 1, processes 3 and 6). what data do exist, and are rapidly improving in quality, concern the presence of arg within various environmental compartments (allen et al. 2009; cummings et al. 2011; ham et al. 2012), specifically clinically relevant resistance genes within soils (forsberg et al. 2012) (figure 1, process 1). precursors that lead to the development of arb hazards include arg and mechanisms to mobilize these genes, antibiotics, and co-selecting agents (qiu et al. 2012; taylor et al. 2011), along with gene mutations (gillings and stokes 2012). depending on the presence of recipient bacteria, these processes generate either earb or parb (figure 1, processes 1 and 2). in regard to the numerous parameters relevant to individual environmental compartments, we are not aware of comprehensive data on a) antibiotic-resistance development driven by antibiotics and other co-selecting agents; b) the flow of arg (resistome) and acquisition elements (e.g., integrons) in native environmental compartment bacteria; or c) the likely range in rates of horizontal and vertical gene transfer within environmental compartments.
nonetheless, factors that are considered important include the range of potential pathways involving the release of antibiotics, arg, and arb into, and their amplification within, environmental compartments such as the rhizosphere, bulk soil, compost, biofilms, wastewater lagoons, rivers, sediments, aquaculture, plants, birds, and wildlife. with respect to antibiotics, in general, the following information is required to aid hazard characterization: a) a list of the local antibiotic classes of concern, b) what is known of their environmental fate, and c) where they may accumulate in particular environmental compartments (e.g., the rhizosphere, general soil, compost, biofilms, wastewater lagoons, rivers, sediments, aquaculture, plants, birds, wildlife, farm animals, or companion animals). selection for arb (figure 1, process 2) will depend on the type and in situ bioavailability of selecting/co-selecting agents, the abundance of bacterial hosts, and the abundance of ar determinants. selection for arb is further modulated by the nutritional status of members of the relevant bacterial community, because high metabolic activity and high cell density promote bacterial community succession and hgt (brandt et al. 2009; sørensen et al. 2005). in contrast, hgt is relatively independent of antibiotics, although antibiotics and arb may be co-transported (chen et al. 2013), and increases in hgt rates are thought to occur in stressed bacteria. for example, integrase expression can be upregulated (increased) by the bacterial sos response (a process for dna repair) in the presence of certain antibiotics (guerin et al. 2009). although quantitative data that describe the development of parb in the environment are lacking, ample evidence exists for the co-uptake, by an antibiotic-sensitive pathogen in the presence of an antibiotic, of arg (such as on a plasmid with metal resistance) and/or carbon utilization genes (knapp et al. 2011; laverde gomez et al.
2011), or as demonstrated in vitro for a disinfectant/nanomaterial (qiu et al. 2012; soumet et al. 2012). the spatial distribution of organisms (opportunity for close proximity) may also affect gene transfer, which results from inherent patchiness, soil structure, presence of substrates, and so forth. in considering gene transfer rates, there may be hot spots of activity; for example, there is evidence for hgt of clinically relevant resistance genes between bacteria in manure-impacted soils (forsberg et al. 2012) and in association with the rhizosphere because of its organic-rich conditions (pontiroli et al. 2009). in addition, selection pressures for subsequent proliferation of earb may be higher in these hot spot environments (brandt et al. 2009; li et al. 2013). therefore, it is important to recognize likely zones of high activity during the problem formulation and hazard characterization stages of a risk assessment, and when using sampling to identify in situ exchange rates. as an example marker of anthropogenic impact with potential to predict the impact on the mobile resistome, class 1 integrons could be used because of their ability to integrate gene cassettes that confer a wide range of antibiotic and biocide resistance (gaze et al. 2011). in semi-pristine soils, prevalence may be two or three orders of magnitude lower than in impacted soils and sediments (0.001% vs. 1%, respectively) (gaze et al. 2011; zhu et al. 2013). in addition to a huge diversity of earb hazards, there are several pathogens that could be evaluated in microbial risk assessments: a) foodborne and waterborne fecal pathogens, represented by campylobacter jejuni, salmonella enterica, or various pathogenic e. coli; and b) environmental pathogens, such as respiratory, skin, or wound pathogens, represented by legionella pneumophila, staphylococcus aureus, and pseudomonas aeruginosa.
each of these fecal and environmental pathogens is well known to be able to acquire arg; thus, given further data on environmental hgt rates, they could be used as reference pathogens in microbial risk assessments. however, what is much more problematic for risk assessment, and a current limiting factor, is the rate at which indigenous bacteria transfer resistance to these pathogens within each environmental compartment and within the human/animal host (figure 1, processes 3-5). methods to model and experimentally derive relevant information on these environmental issues are discussed below in "environmental exposure assessment." data on hgt within the human gastrointestinal tract have been summarized by hunter et al. (2008). dose-response relationships. to properly characterize human risks, it is typical to select hazards for which there are dose-response health data described either deterministically or stochastically, as available for the reference enteric pathogens (e.g., campylobacter jejuni, salmonella enterica, e. coli) (schoen and ashbolt 2010); these dose-response health data have yet to be quantified for the skin/wound reference pathogens (mena and gerba 2009; rose and haas 1999). however, as noted above for processes 1-5 (figure 1), an important difference for arb is the need to account for the phenomena associated with selective environmental pressures for the development of arb, which ultimately form the human infective dose of either earb or parb. the exact mechanisms and dose-response relationships have yet to be elucidated, and may differ depending on the bacterial species and resistance mechanisms involved.
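the standard microbial-risk dose-response forms referred to above can be written down compactly; the parameter values below are assumed, in the style of published models for antibiotic-susceptible strains, since arb-specific parameters are the stated gap.

```python
import math

def p_infection_exponential(dose, r):
    """Exponential model: each ingested organism independently initiates
    infection with probability r, so P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_infection_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson model, parameterized by the median
    infectious dose N50: P = 1 - [1 + dose*(2^(1/alpha) - 1)/N50]^(-alpha)."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

# Illustrative exposure of 100 organisms; r, alpha, and N50 here are
# assumed placeholder parameters, not fitted values for any particular ARB.
dose = 100.0
print(p_infection_exponential(dose, r=0.005))
print(p_infection_beta_poisson(dose, alpha=0.145, n50=896.0))
```

for arb, such curves would additionally need to be conditioned on the selection and hgt processes of figure 1, which is exactly the novel step the text identifies.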
Nevertheless, it seems reasonable for the noncompromised human exposed to a pARB to fit the published dose-response/infection relationship (e.g., derived from "feeding" trials with healthy adults or from information collected during outbreak investigations) for strains of the same pathogen without antibiotic resistance. What appears more limiting are dose-response models that describe the probability of illness based on the conditional probability of infection, including people who are already compromised, such as those undergoing antibiotic therapy. Although there are data showing pARB to be more pathogenic or to cause more severe illness than their antimicrobial-susceptible equivalents (Barza 2002; Helms et al. 2004, 2005; Travers and Barza 2002), that may not always be the case (Evans et al. 2009; Wassenaar et al. 2007). Clear examples of excess mortality include bloodstream infections associated with methicillin-resistant Staphylococcus aureus (MRSA) and with third-generation cephalosporin-resistant E. coli (G3CREC). In 2007, in participating European countries, 27,711 cases of MRSA were associated with 5,503 excess deaths and 255,683 excess hospital days, and 15,183 episodes of G3CREC bloodstream infections were responsible for 2,712 excess deaths and 120,065 extra hospital days (de Kraker et al. 2011). The authors predicted that the combined burden of resistance of MRSA and G3CREC would likely lead to an incidence of 3.3 associated deaths per 100,000 inhabitants in 2015. Yet for many regions of the world, such predictions are less well understood. The final step of MRA is risk characterization, which integrates the outputs from the hazard identification, the hazard characterization, the dose response, and the exposure assessment with the intent to generate an overall estimate of the risk.
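The dose-response models referred to above are commonly implemented using the exponential or approximate beta-Poisson forms. A minimal sketch in Python; the parameter values below are illustrative placeholders, not fitted values for any pathogen discussed here:

```python
from math import exp

def p_infection_exponential(dose, r):
    """Exponential dose-response model: each ingested organism independently
    initiates infection with probability r."""
    return 1.0 - exp(-r * dose)

def p_infection_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson dose-response model, parameterised by the
    median infectious dose N50 and shape parameter alpha."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

# Illustrative placeholder parameters -- not fitted values for any
# specific pathogen in this review.
p1 = p_infection_exponential(100, 0.01)
p2 = p_infection_beta_poisson(100, 0.145, 896)
```

The beta-Poisson form's shallower slope at low doses is one reason it is often preferred for enteric reference pathogens.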
This estimate may be expressed in various measures of risk, for example, in terms of individual or population risk, or as an estimate of annual risk based on exposure to specific hazard(s). Depending on the purpose of the risk assessment, the risk characterization can also include the key scientific assumptions used in the risk assessment, sources of variability and uncertainty, and a scientific evaluation of risk management options.

Environmental exposure assessment. Based on our conceptualization of the processes important to undertake HHRA of ARB (Figure 1), most elements related to ARB development in environmental media (processes 1, 2, and 4) have been addressed above in "Hazard Identification and Hazard Characterization." Here we focus on describing important environmental compartments for, and human exposure to, ARB (Figure 1, processes 3 and 6). Concentrations of environmental factors (such as antibiotics) and ARB, along with their fate and transport to points of human uptake, are critical to exposure assessment. For a particular human health risk assessment of ARB, it would be important to select/expand on individual pathway scenarios (identifying critical environmental compartments to human contact) relevant to the antibiotic/resistance determinants identified in the problem formulation and hazard characterization stages. Compartments of potential concern include soil environments receiving animal manure or biosolids, compost, and lagoons, rivers, and their sediments receiving wastewaters (Chen et al. 2013). More traditional routes of human exposure to contaminants that could include eARB and pARB are drinking water, recreational and irrigation waters impacted by sewage and/or antibiotic production wastewaters, food, and air affected by farm buildings and exposure to farm animal manures, as discussed by Pruden et al. (2013).

Volume 121 | Number 9 | September 2013 • Environmental Health Perspectives
What is emerging as an important research gap is the in situ development of ARB within biofilms (Boehm et al. 2009) and their associated free-living protozoa, which may protect and transport ARB (Abraham 2010) to and within drinking water systems (Schwartz et al. 2003; Silva et al. 2008). This latter route could be particularly problematic for hospital drinking water systems, where an already vulnerable population is exposed. In addition, with the increasing use of and exposure to domestically collected rainwater, atmospheric fallout of ARB may "seed" household systems (Kaushik et al. 2012). After identifying antibiotic concentrations and pathogen densities in the environment, as well as possible levels and rates of ARB generation in each environmental compartment, a range of fate and transport models are available to estimate the amounts of antibiotics, pathogens, ARB, and ARGs at points of human contact (Figure 1, processes 3 and 6). Such models are largely based on hydrodynamics, with pathogen-specific parameters to account for likely inactivation/predation in soil and aquatic environments, such as sunlight inactivation (Bradford et al. 2013; Cho et al. 2012; Ferguson et al. 2010). A key aspect of the fate and transport models is the ability to account for the inherent variability of any system component. In addition, our uncertainties in assessing model parameter values should be factored into fate and transport models, such as by using Bayesian synthesis methods (Albert et al. 2008; Williams et al. 2011). To better account for parameter uncertainties, more recent models include Bayesian learning algorithms that help to integrate information using meteorologic, hydrologic, and microbial explanatory variables (Dotto et al. 2012; Motamarri and Boccelli 2012).
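The simplest screening-level version of such a fate and transport calculation combines first-order die-off during transport with dilution. The sketch below uses hypothetical rates and concentrations purely to show the form, without the hydrodynamic or Bayesian machinery the cited models use:

```python
from math import exp, log

def k_from_t90(t90_days):
    """Convert a T90 (days for 90% inactivation) to a first-order rate (1/day)."""
    return log(10.0) / t90_days

def downstream_concentration(c0, k_per_day, travel_time_days, dilution_factor):
    """Screening-level fate model: first-order die-off during transport plus
    simple dilution, for a pathogen/ARB concentration reaching a point of
    human contact."""
    return (c0 / dilution_factor) * exp(-k_per_day * travel_time_days)

# Hypothetical inputs: 1e6 CFU/100 mL at the discharge, T90 of 2 days,
# 3 days of downstream travel, and 10-fold dilution.
k = k_from_t90(2.0)
c_exposure = downstream_concentration(1e6, k, 3.0, 10.0)
```

A real application would replace the fixed rate and dilution with distributions, which is where the Bayesian synthesis methods mentioned above enter.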
Overall, these models also help to identify management opportunities to mitigate exposures to ARB and antibiotics and are an important aspect of describing the pathways of hazards to points of human exposure in any risk assessment. Considering the complexity of exposure pathways associated with environmental ARB risks and the large uncertainty in the input data for some nodes along the various exposure pathways, outputs would inevitably be difficult for decision makers to interpret and could in fact be counterproductive. Thus, there is merit in considering decision analysis approaches to prioritize risks, guide resource allocation and data collection activities, and facilitate decision making. Although there is a range of ranking options, uses of weightings, and selection criteria (Cooper et al. 2008; Pires and Hald 2010), as well as failure mode and effects analysis (Pillay and Wang 2003), in the overall area of microbial risk assessment there is a consolidation toward multicriteria decision analysis (MCDA) approaches that may include Bayesian algorithms (Lienert et al. 2011; Ludwig et al. 2013; Ruzante et al. 2010). Approaches such as MCDA are designed to provide a structured framework for making choices where multiple factors need to be considered in the decision-making process. MCDA is a well-established tool that can be used for evaluating and documenting the importance assigned to different factors in ranking risks (Lienert et al. 2011), albeit heavily dependent on expert opinion. In the context of MRA, MCDA has been used to rank foodborne microbial risks based on multiple factors, including public health, market impacts, consumer perception and acceptance, and social sensitivity (Ruzante et al. 2010), as well as to prioritize and select interventions to reduce pathogen exposures (Fazil et al. 2008). Examples of MCDA applications in structuring decisions for managing ecotoxicological risks have also been reported (Linkov et al. 2006; Semenzin et al.
2008) and provide useful MCDA approaches. MCDA could be used, for example, to evaluate and rank the relative risks between habitats highly polluted with antibiotics, ARGs, and ARG determinants, as described above for possible hot spots for HGT and development of ARB. MCDA could be applied to evaluate the relative contribution of co-selecting agents (e.g., detergents, biocides, metals, nanomaterials) from various sources to the overall risk of ARB in the environment. Moreover, for a range of antibiotics considered to be of environmental concern, MCDA approaches could be used for risk ranking according to criteria based on relevant contributing factors (e.g., mobility of resistance determinants on genetic elements, antibiotic resistance transfer rates in different environmental compartments, accumulation levels of antibiotics in environmental compartments, environmental fate and transport to exposure points). In the MCDA process, it is also important to identify low-probability but high-impact "one-time-event" types of risk. Because MCDA techniques rely on expert opinion (which is often regarded as a limitation of such approaches), well-structured and recognized elicitation practices should be used in order to avoid the introduction of biases and errors by subjective scoring. In contrast, one of the main advantages of MCDA techniques is that they capture a consensus opinion among an expert group about the most relevant criteria and their relative weight in the decision. There are several research gaps that need to be addressed. In particular, specific attention should be paid to contaminated habitats (hot spots) in which antibiotics, co-selecting agents, bacteria carrying resistance determinants on mobile genetic elements, and favorable conditions for bacterial growth and activity (all conditions expected to favor HGT) prevail at the same time.
However, because these data are currently very limited, workshop participants evaluated alternative ways and possible experimental methods to address these data gaps for HHRA as they relate to the processes identified in Figure 1.

Assays to determine MSC (processes 1, 2, and 4). Assays could be developed to measure the minimal selective concentration (MSC) (Gullberg et al. 2011) for a range of antibiotics and environmental conditions. For example, assays could be developed and validated in sandy and clay soils, different sediments, and water types, with isogenic pairs of the model organism inoculated into the matrix of choice and subjected to a titration of the selective agent to sufficiently high dilution. Selection at sub-inhibitory concentrations and assay development for environmental matrices are key areas of research that need to be addressed. However, overall care is needed when interpreting ex situ studies and extrapolating to in situ environmental conditions, as well as in dealing with ill-defined hazard mixtures in the environment.

Identifying hot spots (processes 1, 2, and 4). Hot spots, locations where a high level of HGT and antibiotic resistance develops, may, for instance, include aquatic environments affected by pharmaceutical industry effluents, aquaculture, or sewage discharges, as well as terrestrial environments affected by the deposition of biosolids or animal manures. The degree of persistence of antibiotic resistance (i.e., the rate at which resistance disappears without an environmental selection pressure for resistance) must also be considered for risk assessment and will depend on the fitness cost of resistance. However, the fitness costs within complex and variable environments are difficult to assess. Furthermore, standard methods have not been developed for evaluating environmental selection pressures in complex microbial communities, but several experimental approaches are possible and have been described elsewhere (Berg et al. 2010; Brandt et al. 2009).
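Computationally, the MSC assay described above reduces to locating the antibiotic concentration at which the selection coefficient of the resistant strain crosses zero. A minimal sketch with hypothetical titration data; `estimate_msc` is an illustrative helper, not a method from the cited work:

```python
def estimate_msc(concentrations, selection_coefficients):
    """Estimate the minimal selective concentration (MSC): the antibiotic
    concentration at which the selection coefficient of the resistant strain,
    measured against its isogenic susceptible pair, crosses zero.  Negative
    coefficients reflect the fitness cost of resistance; positive values mean
    the resistant strain is being selected."""
    pairs = sorted(zip(concentrations, selection_coefficients))
    for (c0, s0), (c1, s1) in zip(pairs, pairs[1:]):
        if s0 <= 0.0 <= s1:
            # Linear interpolation between the bracketing titration points.
            return c0 + (c1 - c0) * (0.0 - s0) / (s1 - s0)
    return None  # no zero crossing within the tested range

# Hypothetical titration data (concentration in ug/L vs. per-generation
# selection coefficient from competition experiments).
msc = estimate_msc([0.0, 0.1, 0.5, 1.0], [-0.02, -0.01, 0.03, 0.08])
```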
The approaches identified by Berg et al. (2010) and Brandt et al. (2009) could be laboratory based (to assess the potency of known compounds/mixtures) or applied in the field to assess whether the environment in question (with, for example, its unknown mixture of chemicals) is a hot spot. Defining "critical exposure levels" is therefore an important HHRA output to aid management activities; these levels will likely vary between and within environmental compartments, depending on the location.

Screening for novel resistance determinants (to reduce process 2). Screening procedures could be introduced early in the development cycle of novel antibiotics to ensure that existing resistance determinants are not prevalent in environmental compartments. Marked recipient strains could be inoculated into environmental matrices [e.g., soil, biosolids, or fecal slurry (with sterilized matrix equivalents as negative controls)], incubated, and then seeded onto media containing the study compound along with a selective antibiotic to recover marked recipient strains demonstrating resistance. Plasmids, or the entire genome of the recipient, could then be cloned into small-insert expression vectors, transformed into E. coli or other hosts, and seeded back onto media containing the study compound. In this way, novel resistance determinants would be identified. Alternatively, functional metagenomics could be used to identify novel resistance determinants in metagenomic DNA (Allen et al. 2009). In brief, DNA would be extracted from an environmental sample, cloned into an expression vector, and transformed into a bacterial host such as E. coli. Transformants could then be screened on the study compound, and resistance genes identified using transposon mutagenesis followed by sequencing and bioinformatic analyses. This would allow detection of novel resistance determinants that may not be plasmid borne but may transfer to other pathogens.
Dose-response data needs (processes 3, 5, and 6). We were unaware of dose-response data representing a combined ARG and recipient (previously susceptible) pathogen dose and human or animal disease (Figure 1, processes 3 and 5). In contrast, various examples illustrate increased morbidity and mortality when humans are exposed to pARB, as described above in "Dose-Response Relationships." Hence, existing published dose-response models for nonresistant pathogens may not be appropriate to use beyond the end point of infection, and further dose-response models that address people of various life stages need to be described and summarized to facilitate pARB risk assessments. There is also a need to develop dose-response information for secondary illness end points (sequelae) resulting from pARB infections. Regarding the antibiotic concentration and time of exposure giving rise to selection of pARB within a human (for co-uptake of eARB and a sensitive pathogen), safety could be based on the effective concentration for the specific antibiotic under consideration. In other words, screening values to determine whether further action is warranted could be derived from the acute or mean daily antibiotic intake, with uncertainty factors applied as appropriate, until future research is undertaken on pathogen antibiotic-response changes in the presence of specific antibiotic treatment. Alternatively, epidemiological data from existing clones of antibiotic-resistant strains (e.g., NDM-1) could provide useful data for dose-response and exposure assessments.

Options for ranking risks (overall HHRA). In the absence of fully quantitative data to undertake an HHRA, risk-ranking approaches based on exposure assessment modeling could be adopted and developed to inform the allocation of resources for data generation as part of an HHRA of ARB. Evers et al. (2008) presented one such approach in the context of food safety for estimating the relative contribution of Campylobacter spp.
sources and transmission routes to exposure per person-day in the Netherlands. Their study included 31 transmission routes related to direct contact with animals and ingestion of food and water, and resulted in a ranking of the most significant sources of exposure. Although their study focused on foodborne transmission routes and did not deal with antibiotic-resistant Campylobacter strains, a similar approach could be applied to estimate human exposure to ARB hazards using the environmental exposure pathways described by Evers et al. (2008). This would require data on the prevalence of ARGs and ARB, as well as the levels of antibiotics present in all exposure routes, to be considered in the risk assessment. Although such an approach is probably not currently feasible, improved environmental data for a select number of pathogen-gene combinations could be developed in the future. An alternative approach to capturing the knowledge of experts and other stakeholders could be to develop a Bayesian network based on expert knowledge and add to it as data become available, as described for campylobacters in foods by Albert et al. (2008). Because we are addressing an international problem, and because the precautionary approach is used in many jurisdictions, there are many risk management approaches that can be implemented now, before antibiotic resistance issues worsen, as noted in the related risk management paper resulting from the workshop (Pruden et al. 2013). Furthermore, many current risk management schemes start the process from a management perspective and delve into quantitative assessments as needed in order to improve risk management actions, such as in the WHO water safety plans (WHO 2009). We propose that environmental aspects of antibiotic resistance development be included in the processes of any HHRA addressing ARB.
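An exposure-per-person-day ranking of the kind described for Campylobacter can be sketched as a product of prevalence, concentration, intake per event, and exposure frequency per route. The numbers below are hypothetical placeholders, not data from the cited study:

```python
def exposure_per_person_day(routes):
    """Rank transmission routes by expected intake per person-day:
    prevalence x mean concentration x intake per event x events per day."""
    scores = {
        name: r["prevalence"] * r["concentration"] * r["intake"] * r["events_per_day"]
        for name, r in routes.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical routes; concentration in CFU/mL, intake in mL per event.
routes = {
    "drinking_water": {"prevalence": 0.05, "concentration": 0.1,
                       "intake": 1500.0, "events_per_day": 1.0},
    "recreational_water": {"prevalence": 0.2, "concentration": 10.0,
                           "intake": 30.0, "events_per_day": 0.05},
}
ranking = exposure_per_person_day(routes)
```

Feeding such per-route scores into a Bayesian network, updated as data accrue, is one way the expert-knowledge approach described above could be operationalized.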
In general terms, an MRA appears suitable to address environmental human health risks posed by the environmental release of antibiotics, ARB, and ARGs; however, at present, there are still too many data gaps to realize that goal. Further development of this type of approach requires data mining from previous epidemiological studies to aid in model development, parameterization, and validation, as well as the collection of new information, particularly that related to conditions and rates of ARB development in various hot spot environments, and for the various human health dose-response unknowns identified in this review. In the near term, options likely to provide a first-pass assessment of risks seem likely to be based on MCDA approaches, which could be facilitated by Bayesian network models. Once these MRA models gain more acceptance, they may facilitate scenario testing to determine which control points may be most effective in reducing risks and which risk-driving attributes should be specifically considered and minimized during the development of novel antibiotics.

References
Megacities as sources for pathogenic bacteria in rivers and their fate downstream
Quantitative risk assessment from farm to fork and beyond: a global Bayesian approach concerning food-borne diseases
Functional metagenomics reveals diverse beta-lactamases in a remote Alaskan soil
Antibiotic resistance and its cost: is it possible to reverse resistance?
Persistence of antibiotic resistance in bacterial populations
Positive and negative selection towards tetracycline resistance genes in manure treatment lagoons
Potential mechanisms of increased disease in humans from antimicrobial resistance in food animals
Cu exposure under field conditions coselects for antibiotic resistance as determined by a novel cultivation-independent bacterial community tolerance assay
Second messenger signalling governs Escherichia coli biofilm induction upon ribosomal stress
Quantification of genes encoding resistance to aminoglycosides, beta-lactams and tetracyclines in wastewater environments by real-time PCR
Transport and fate of microbial pathogens in agricultural settings
Community tolerance to sulfadiazine in soil hotspots amended with artificial root exudates
Multiresistance, beta-lactamase-encoding genes and bacterial diversity in hospital wastewater in Rio de Janeiro, Brazil
Fate and transport of antibiotic residues and antibiotic resistance genes following land application of manure waste
Differentiating anthropogenic impacts on ARGs in the Pearl River estuary by using suitable gene indicators
Class 1 integrons, selected virulence genes, and antibiotic resistance in Escherichia coli isolates from the Minjiang River
The modified SWAT model for predicting fecal coliforms in the Wachusett Reservoir watershed, USA
Principles and guidelines for the conduct of microbiological risk assessment. CAC/GL-30
Guidelines for risk analysis of foodborne antimicrobial resistance
The role of drinking water in the transmission of antimicrobial-resistant E. coli
Preliminary risk assessment database and risk ranking of pharmaceuticals in the environment
Persistence of transferable extended-spectrum-β-lactamase resistance in the absence of antibiotic pressure
The soil resistome: the anthropogenic, the native, and the unknown
Broad dissemination of plasmid-mediated quinolone resistance genes in sediments of two urban coastal wetlands
Antibiotic resistance is ancient
ESBL-positive Enterobacteria isolates in drinking water
Mortality and hospital stay associated with resistant Staphylococcus aureus and Escherichia coli bacteremia: estimating the burden of antibiotic resistance in Europe
CTX-M-15-producing Escherichia coli clone B2-O25b-ST131 and Klebsiella spp. isolates in municipal wastewater treatment plant effluents
Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling
Short-term and medium-term clinical outcomes of quinolone-resistant Campylobacter infection
Campylobacter source attribution by exposure management
Choices, choices: the application of multi-criteria decision analysis to a food safety decision-making problem
Modeling of variations in watershed pathogen concentrations for risk management and load estimations
Contamination of surface, ground, and drinking water from pharmaceutical production
The scourge of antibiotic resistance: the important role of the environment
United Nations/World Health Organization/World Organisation for Animal Health
The shared antibiotic resistome of soil bacteria and human pathogens
Correlation of tetracycline and sulfonamide antibiotics with corresponding resistance genes and resistant bacteria in a conventional municipal wastewater treatment plant
Impacts of anthropogenic activity on the ecology of class 1 integrons and integron-associated genes in the environment
Risk profile on antimicrobial resistance transmissible from food animals to humans. RIVM rapport 330334001. Bilthoven: National Institute for Public Health and the Environment (RIVM)
Are humans increasing bacterial evolvability?
A framework for global surveillance of antibiotic resistance
The SOS response controls integron recombination
Selection of resistant bacteria at very low antibiotic concentrations
Quantitative microbial risk assessment
Distribution of antibiotic resistance in urban watershed in Japan
Quinolone resistance is associated with increased risk of invasive illness or death during infection with Salmonella serotype Typhimurium
Adverse health events associated with antimicrobial drug resistance in Campylobacter species: a registry-based cohort study
Meta-analysis of experimental data concerning antimicrobial resistance gene transfer rates during conjugation
A conceptual framework to assess the risks of human disease following exposure to pathogens
blaNDM-1-positive Klebsiella pneumoniae from environment
Influence of air quality on the composition of microbial pathogens in fresh rainwater
Antimicrobial resistance in the environment
Antibiotic resistance gene abundances correlate with metal and geochemical conditions in archived Scottish soils
Pyrosequencing of antibiotic-contaminated river sediments reveals high levels of resistance and gene transfer elements
Antibiotics in the aquatic environment: a review, part I
Antibiotics in the aquatic environment: a review, part II
Effluent from drug manufactures contains extremely high levels of pharmaceuticals
A multiresistance megaplasmid pLG1 bearing a hylEfm genomic island in hospital Enterococcus faecium isolates
Antibacterial resistance worldwide: causes, challenges and responses
Antibiotic-resistance profile in environmental bacteria isolated from penicillin production wastewater treatment plant and the receiving river
Antibiotic resistance characteristics of environmental bacteria from an oxytetracycline production wastewater treatment plant and the receiving river
Occurrence of chloramphenicol-resistance genes as environmental pollutants from swine feedlots
Multiple-criteria decision analysis reveals high stakeholder preference to remove pharmaceuticals from hospital wastewater
From comparative risk assessment to multi-criteria decision analysis and adaptive management: recent developments and applications
Identifying associations in Escherichia coli antimicrobial resistance patterns using additive Bayesian networks
Quantitative human health risk assessments of antimicrobial use in animals and selection of resistance: a review of publicly available reports
Risk assessment of Pseudomonas aeruginosa in water
Development of a neural-based forecasting tool to classify recreational water quality using fecal indicator organisms
Modified failure mode and effects analysis using approximate reasoning
Assessing the differences in public health impact of Salmonella subtypes using a Bayesian microbial subtyping approach for source attribution
Visual evidence of horizontal gene transfer between plants and bacteria in the phytosphere of transplastomic tobacco
Management options for reducing the release of antibiotics and antibiotic resistance genes to the environment
Nanoalumina promotes the horizontal transfer of multiresistance genes mediated by plasmids across genera
A risk assessment framework for the evaluation of skin infections and the potential impact of antibacterial soap washing
A multifactorial risk prioritization framework for foodborne pathogens
Assessing pathogen risk to swimmers at non-sewage impacted recreational beaches
An in-premise model for Legionella exposure during showering events
Detection of antibiotic-resistant bacteria and their resistance genes in wastewater, surface water, and drinking water biofilms
Integration of bioavailability, ecology and ecotoxicology by three lines of evidence into ecological risk indexes for contaminated soil assessment
Prevalence of antibiotic resistance genes in the bacterial flora of integrated fish farming environments of
Pakistan and Tanzania
Metagenomic insights into chlorination effects on microbial antibiotic resistance in drinking water
Characterisation of potential virulence markers in Pseudomonas aeruginosa isolated from drinking water
Antimicrobial resistance: a microbial risk assessment perspective
Studying plasmid horizontal transfer in situ: a critical review
Resistance to phenicol compounds following adaptation to quaternary ammonium compounds in Escherichia coli
Wildlife trade and the emergence of infectious diseases
Aquatic systems: maintaining, mixing and mobilising antimicrobial resistance?
Morbidity of infections caused by antimicrobial-resistant bacteria
Environmental Protection Agency. 2012. Human health risk assessment: microbial risk assessment guideline: pathogenic microorganisms with focus on food and water. EPA/100/J-12/001
Diversity and antibiotic resistance patterns of Sphingomonadaceae isolates from drinking water
Studies to evaluate the safety of residues of veterinary drugs in human food: general approach to establish a microbiological ADI. VICH GL36(R)
Re-analysis of the risks attributed to ciprofloxacin-resistant Campylobacter jejuni infections
Water safety plan manual: step-by-step risk management for drinking-water suppliers. Geneva: World Health Organization
2012a. Report of the 3rd meeting of the WHO Advisory Group on Integrated Surveillance of Antimicrobial Resistance
The evolving threat of antimicrobial resistance: options for action. Geneva: World Health Organization
Antimicrobial resistance. Fact sheet no. 194
Framework for microbial food-safety risk assessments amenable to Bayesian modeling
NDM-1 and the role of travel in its dissemination
Diverse and abundant antibiotic resistance genes in Chinese swine farms

key: cord-018001-ris02bff authors: Garrido, Guillermo; Dhillon, Gundeep S.
title: Medical course and complications after lung transplantation date: 2018-06-23 journal: Psychosocial Care of End-Stage Organ Disease and Transplant Patients doi: 10.1007/978-3-319-94914-7_26 sha: doc_id: 18001 cord_uid: ris02bff

Lung transplant prolongs life and improves quality of life in patients with end-stage lung disease. However, survival of lung transplant recipients is shorter than that of patients with other solid organ transplants, due to many unique features of the lung allograft. Patients can develop a multitude of noninfectious (e.g., primary graft dysfunction, pulmonary embolism, acute and chronic rejection, renal insufficiency, malignancies) and infectious (i.e., bacterial, fungal, and viral) complications and require complex multidisciplinary care. This chapter discusses the medical course and complications that patients might experience after lung transplantation.

The lungs normally have a dual blood supply, consisting of 1) large pulmonary arteries that provide desaturated blood under low pressure for alveolar gas exchange and 2) smaller bronchial arteries that provide oxygenated blood under systemic pressure for nutrition and oxygenation of the bronchi and lung tissue. As the only solid organ transplant that does not undergo primary systemic (i.e., bronchial) arterial revascularization at the time of surgery, lung transplants rely on the deoxygenated pulmonary arterial circulation and are especially vulnerable to the effects of injury and ischemia [4]. It has been hypothesized that the absence of the bronchial system in the lung allograft increases susceptibility to microvascular injury and chronic airway ischemia, which may be implicated in the genesis of chronic rejection and other complications [5]. Similarly, the native lymphatics and the neural supply to lung allografts are disrupted at the time of transplantation. The impact of these disruptions on lung transplant outcomes remains unclear, though it is possible that these changes lead to higher susceptibility to the development of pulmonary edema and infections, worse airway clearance, and ineffective cough [6]. Lastly, lung allografts have higher exposure to immunogenic compounds than other organs because of ventilation. The ongoing exposure to various inhaled injurious agents may also predispose lung allografts to develop chronic rejection. There is a vast array of complications from lung transplantation. Broadly, these complications can be divided into noninfectious and infectious complications, as summarized in Table 26.1. These complications arise at different times in the postoperative period [7]. Understanding the timing of the various complications post-lung transplant can lead to their early recognition and management.

epithelium, and alveolar macrophages. The interaction between these cells leads to the release of cytokines, reactive oxygen intermediates, and proteolytic enzymes, leading to graft dysfunction [9]. The severity of PGD falls along a spectrum, ranging from mild dysfunction to severe lung injury. PGD can affect 10-25% of transplanted patients, and the 30-day mortality can be as high as 50%. Furthermore, severe PGD after lung transplantation has been associated with the development of subsequent chronic rejection and graft dysfunction [10]. The management of PGD is largely supportive and includes lung-protective ventilation strategies (low tidal volume, high positive end-expiratory pressure), judicious fluid management, inhaled nitric oxide or other inhaled pulmonary vasodilators to improve oxygenation, and extracorporeal life support (ECLS) for the most severe cases. Re-transplantation is an option for highly selected cases, but it is generally not recommended due to suboptimal outcomes [11]. Lung transplant recipients are at increased risk of venous thromboembolism (VTE).
The risk factors include major surgery status, hypercoagulable state, high doses of corticosteroids, immobility, and indwelling vascular access. The reported incidences of pulmonary embolism (PE) and deep venous thrombosis (DVT) post-lung transplantation are approximately 5-15% and 20-45%, respectively [12]. Pulmonary embolism in the setting of limited pulmonary reserve, due to PGD, postoperative atelectasis, or single-lung transplantation, can have catastrophic consequences, underscoring the need for early and appropriate VTE prophylaxis after lung transplantation [13]. The diagnosis can be made with computed tomography (CT) pulmonary angiography, ventilation-perfusion scanning, or by documentation of DVT by Doppler ultrasonography. The treatment is the same as for VTE in general, although the risk of postoperative bleeding needs to be weighed against the risk of PE. The choice of anticoagulant is based on kidney function, periprocedural reversibility of the anticoagulant effect, and drug interactions, with unfractionated heparin, low-molecular-weight heparin, and/or warfarin being by far the most common agents used. In cases of ongoing bleeding or high bleeding risk, inferior vena cava filters can be used as a temporizing measure. Inadvertent injury to various intrathoracic nerves during lung transplantation is a well-recognized and common complication. The most commonly affected structures are the phrenic and vagus nerves. The reported rates of phrenic nerve injury have ranged from 3% to 9% in lung transplant cases. This rate can be as high as 40% in combined heart-lung transplantation [14, 15]. Diaphragmatic dysfunction as a consequence of phrenic nerve injury can present clinically with dyspnea, hypoventilation and hypercapnia, and hypoxemia, or as a difficult wean from the ventilator. Diaphragmatic paralysis can lead to increased length of stay and ventilator dependence.
diagnosis can be confirmed by documenting paradoxical movement of the affected diaphragm during quiet and deep breathing, using fluoroscopy or ultrasound visualization. vagal nerve injury post-lung transplantation can lead to gastroparesis with an associated risk of gastroesophageal reflux disease (gerd) and aspiration events. these in turn can place the lung allograft at risk for recurrent infections, bronchiectasis, and possibly chronic allograft dysfunction [16] [17] [18]. common symptoms of gastroparesis include early satiety, decreased appetite, abdominal pain, and bloating. a diagnosis is usually made by a nuclear medicine gastric emptying study. potential management strategies include minimizing transit-delaying medications (e.g., opioids), the use of pro-motility agents, placement of post-pyloric feeding tubes, botulinum toxin injection into the pylorus, and surgical fundoplication in conjunction with pyloroplasty [17]. pleural complications in the early post-lung transplantation period include pleural effusions, hemothorax, pneumothorax, empyema, chylothorax, and interpleural communication. these complications usually arise as a result of pleural disruption from the surgery itself, though rejection and immunosuppressive regimens may also play a role. risk factors for the development of pleural complications include previous thoracic surgery, pleural adhesions, and donor-recipient size mismatch [19, 20]. pleural effusions are extremely common in the early post-lung transplant period; the reported incidence has been 100% in some series [19, 20]. all patients have chest tubes in place immediately post-operation to allow lung re-expansion and drainage of pleural air and fluid. the increased amount of pleural fluid post-lung transplantation is related to capillary leak due to allograft ischemia-reperfusion, fluid overload, bleeding, and surgical interruption of allograft lymphatics at the time of explantation [19, 20].
late pleural effusions can be a consequence of infection, acute rejection, trapped lung physiology from pleural fibrosis, or malignancy [21, 22]. in general, all pleural effusions need to be evaluated to rule out complicated effusions such as hemothorax, empyema, and chylothorax. these entities have all been associated with negative patient outcomes and are treated with a range of medical and surgical procedures depending on the condition and severity. for example, a chylothorax might necessitate mechanical interruption of the thoracic duct, or a hemothorax may need thoracotomy for control of bleeding. pneumothoraxes are common after lung transplantation. they can result from donor-recipient size mismatch, bronchopleural fistulas occurring secondary to operative injury or bronchial anastomotic dehiscence, or as a consequence of transbronchial biopsies performed in the course of allograft evaluation. small and stable pneumothoraxes after lung transplantation can be managed by watchful waiting, though a larger or symptomatic pneumothorax may require chest tube drainage. an inadequately drained, hemodynamically significant pneumothorax can be a medical emergency necessitating urgent drainage [23, 24]. in patients who have undergone sequential bilateral lung transplantation (bslt) or heart-lung transplantation (hlt), interpleural communication can develop due to surgical severance of the pleural recesses that separate the left and right pleural spaces. this means that pleural issues in these patients must be managed aggressively, as pneumothoraxes can be bilateral and life threatening, and empyema can spread quickly. vascular anastomotic complications can arise either early or late in the post-transplant course and can have very severe adverse consequences. pulmonary artery stenosis can be secondary to mechanical kinking, disruption, or narrowing of the anastomosis, sometimes due to the particulars of donor anatomy or due to thrombosis [25].
the clinical picture is usually consistent with pulmonary hypertension and right ventricular failure. diagnosis can be made through pulmonary angiography, and the stenosis can be managed with interventions such as balloon dilation and stent deployment. occasionally, patients may require surgery for definitive management of the stenosis. pulmonary vein occlusion post-lung transplantation is a rare but serious complication. the commonest cause of pulmonary vein occlusion is the development of thrombosis at the anastomotic junction of the pulmonary veins and the left atrium, though inadvertent narrowing or ligation of pulmonary veins has also been reported. potential clinical consequences include hypoxic respiratory failure, pulmonary edema, and cardio-embolic events. this entity should be included in the differential diagnosis of a patient with acute pulmonary edema post-lung transplantation. diagnosis is usually made by transesophageal echocardiography or ct angiography [26, 27]. airway complications after lung transplantation can be classified by time of occurrence. early anastomotic complications, usually within 1 month of transplantation, include infection, dehiscence, and necrosis at the anastomotic sites. later complications include bronchopleural, bronchovascular, and bronchomediastinal fistulae, excessive granulation tissue, bronchomalacia, and airway stenosis. airway anastomotic complications do not seem to be associated with decreased survival; however, they do negatively impact quality of life and significantly increase healthcare resource utilization [28]. risk factors for airway anastomotic complications include colonization with burkholderia cepacia and aspergillus fumigatus, pgd, acute rejection, prolonged mechanical ventilation, and sirolimus use prior to anastomotic healing [29, 30]. bronchial necrosis and dehiscence occur 1-2 weeks after transplant.
they can present with dyspnea, difficulty weaning from the ventilator, persistent air leak on water seal, pneumomediastinum, subcutaneous emphysema, and infection, with symptoms ranging from mild to severe. depending on the severity, management can range from observation and antibiotics to minimally invasive or surgical repair. bronchial stenosis is the narrowing of the airway lumen, usually at the site of the anastomosis. patients can present with wheezing, cough, post-obstructive pneumonias, decline in pulmonary function tests (pfts), and stridor. bronchial narrowing can also occur distal to the anastomosis, causing lobar collapse. this syndrome occurs 2-6 months post-transplant but can present as late as 12 months. treatment options include close monitoring, bronchial dilatation with or without stent placement, and re-transplantation [31]. allograft rejection is a major cause of morbidity and mortality post-lung transplantation. at least a third of patients are reported to have acute rejection in the first year after transplant. acute rejection in itself seldom leads to mortality, but it is a main risk factor for the development of chronic rejection. chronic rejection of the lung allograft is the major hurdle to long-term survival after transplantation. despite the use of potent and novel immunosuppressive regimens, the incidence of chronic rejection and long-term survival post-transplant have remained essentially unchanged over the last two decades [1, 32]. acute cellular rejection (acr) is the most common kind of acute lung transplant rejection and is mediated by t lymphocytes. symptoms and signs of acr include dyspnea, cough, fever, and hypoxia. high-grade rejection may be associated with respiratory failure. mild acr can be asymptomatic and is frequently detected on surveillance pulmonary function testing and/or transbronchial biopsies.
current imaging modalities are not diagnostic but may reveal useful findings such as infiltrates and ground-glass opacities [32, 33]. flexible bronchoscopy with transbronchial biopsies is the gold standard for diagnosis. histologically, acr is characterized by the presence of perivascular (grade a) and/or peribronchiolar (grade b) lymphocytic infiltrates in the absence of infectious etiologies [32, 34, 35]. risk factors for acr include the number of hla mismatches between donor and recipient, although it is unclear which specific hla mismatches have more impact. other reported risk factors are age (with older patients having more rejection), the immunosuppressive regimen used (tacrolimus-based regimens reject less), other genetic factors such as il-10 production, and documented gerd. acr has also been documented following infections with certain viruses, such as rhinovirus, parainfluenza virus, influenza virus, human metapneumovirus, coronavirus, and respiratory syncytial virus. the treatment of acr is not uniform, and high-quality randomized controlled trials are lacking. there is wide agreement that severe cases of acr must be treated, but there is variability among transplant centers on whether to treat milder cases. the mainstay of therapy is high-dose corticosteroids. in cases that are refractory or recurrent, the immunosuppressive regimen is usually intensified or altered, and medications such as anti-thymocyte globulin (atg), anti-interleukin-2-receptor (il-2r) antagonists, muromonab-cd3 (okt3), and alemtuzumab (an anti-cd52 monoclonal antibody), among others, can be used [36, 37]. antibody-mediated rejection (amr) is believed to be mediated by donor-specific antibodies (dsa) against human leukocyte antigens (hla) and other donor antigens. these antibodies may have been present in the recipient prior to transplant, although most appear to develop after transplantation.
amr is described as the combination of the following: donor-specific anti-hla antibodies, evidence of complement deposition in allograft biopsies, histologic tissue injury, and clinical allograft dysfunction [38]. once these antibodies bind their targets in the graft, they are capable of binding complement, specifically c1q. this can trigger complement-mediated cell destruction and inflammation. the development of de novo anti-hla antibodies is associated with poor prognosis [39, 40]. the mainstay of amr management involves depletion and/or neutralization of anti-hla antibodies by plasma exchange or intravenous immunoglobulin (ivig), followed by rituximab infusion. rituximab is an anti-cd20 chimeric antibody that targets b-cell function and can decrease production of antibodies. in cases of refractory amr, newer agents such as bortezomib (a 26s proteasome inhibitor) and the anti-complement antibody eculizumab have been tried with limited success. successful clearance of anti-hla antibodies has been associated with decreased risk of development of chronic rejection following amr [32]. the term chronic lung allograft dysfunction (clad) encompasses pathologies that lead to chronic dysfunction of the lung allograft. clad is predominantly a consequence of chronic rejection and is a major hurdle to long-term survival. the two major phenotypes of clad are (i) bronchiolitis obliterans syndrome (bos) and (ii) restrictive allograft syndrome (ras) [41, 42]. bos is the predominant form of clad and is the number one cause of death after the first year post-transplantation. it is reported to occur in up to 76% of lung transplant recipients at 10 years post-transplant, and it is a major cause of morbidity, reduced quality of life, and increased costs. bos is defined by a sustained (>3 weeks) decline in the forced expiratory volume in the first second of expiration (fev1), provided alternative causes of pulmonary dysfunction have been excluded.
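the fev1-based definition of bos above is commonly extended into a staging scheme based on the percentage of the post-transplant baseline fev1. a minimal sketch (illustrative only; the stage thresholds are the commonly cited ishlt cut-points and are not stated in this chapter):

```python
def bos_stage(current_fev1: float, baseline_fev1: float) -> str:
    """Stage bronchiolitis obliterans syndrome (BOS) from a sustained
    FEV1 decline, using the commonly cited ISHLT thresholds expressed
    as a percentage of the post-transplant baseline FEV1."""
    pct = 100.0 * current_fev1 / baseline_fev1
    if pct > 90:
        return "BOS 0"    # no significant sustained decline
    if pct > 80:
        return "BOS 0-p"  # "potential" BOS
    if pct > 65:
        return "BOS 1"
    if pct > 50:
        return "BOS 2"
    return "BOS 3"        # FEV1 at or below 50% of baseline
```

in practice, as the text notes, the decline must be sustained for more than 3 weeks and alternative causes of pulmonary dysfunction must be excluded before assigning a stage.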
at the tissue level, the hallmark of bos is obliterative bronchiolitis (ob), an inflammatory/fibrotic process affecting the small non-cartilaginous airways (membranous and respiratory bronchioles), characterized by subepithelial fibrosis causing partial or complete luminal occlusion [43, 44]. risk factors include prior episodes of acute rejection, cytomegalovirus (cmv) infection, community-acquired respiratory virus (carv) infection, history of pgd, isolation of aspergillus fumigatus and pseudomonas aeruginosa, the presence of gerd, and other immune-mediated factors [44]. the diagnosis can be made conditionally without histopathology (bos) or definitively with histopathology (ob). transbronchial biopsy is an insensitive method for detecting ob, so the clinical (spirometric) definition of bos is the favored method for diagnosis and monitoring. the treatment of bos is disappointing in terms of outcomes; often success is measured in slowing or stabilizing the decline. beyond augmentation of immunosuppression, azithromycin, extracorporeal photopheresis, montelukast, methotrexate, aerosolized cyclosporine, alemtuzumab, and total lymphoid irradiation have been used with limited success [44, 45]. ras has been more recently described and occurs in less than a third of patients with clad. these patients present with predominant restriction, and survival is worse compared to patients with bos; the median survival post-diagnosis is 8 months. ct scans show interstitial opacities, ground-glass opacities, upper lobe-dominant fibrosis, and honeycombing. the only identified risk factor for the development of ras is late-onset diffuse alveolar damage (dad), occurring later than 3 months after lung transplant. there is no proven treatment for this condition, and re-transplantation remains technically challenging [46, 47]. lung transplantation and its associated immunosuppression are an established risk factor for the development of cancer [48].
the commonest malignancy post-lung transplant is squamous cell cancer of the skin. single-lung transplant recipients are at higher risk of developing lung cancer in their native lungs. this increased risk is in part related to the increased risk of cancer from the underlying disease (e.g., emphysema, idiopathic pulmonary fibrosis) [49, 50]. similarly, transplant recipients with cystic fibrosis remain at an elevated risk for development of gastrointestinal malignancies [49]. it is imperative that transplant recipients adhere to age-appropriate health screening after transplant. additionally, all lung transplant recipients should undergo skin cancer screening annually. the risk is especially high for viral infection-associated malignancies such as lymphoma, kaposi sarcoma, and anogenital cancers [49]. post-transplant lymphoproliferative disorders (ptld) encompass an array of diseases involving clonal expansion of b lymphocytes, ranging from polyclonal benign disorders to aggressive malignant lymphomas. the reported incidence of non-hodgkin lymphoma post-lung transplant has been as high as 28 cases per 100,000 person-years [49]. there is a significant association between ptld and epstein-barr virus (ebv) infection, especially in patients who acquire the infection de novo after being transplanted. ptld is managed by reducing the intensity of immunosuppression if possible, with specific chemotherapy for more severe and refractory cases. hyperammonemia affects 1-4% of the lung transplant population; it is a rare but potentially fatal complication. it can be secondary to systemic infection with mycoplasma hominis and ureaplasma, which break down urea as an energy source, generating ammonia as a waste product. this likely represents a donor-derived infection and can respond to early appropriate antibiotic treatment [51]. postoperative liver dysfunction and urea-cycle enzyme deficiencies can also cause hyperammonemia.
diabetes mellitus (dm) is common in lung transplant recipients, with 25-30% of patients developing it in the first year post-transplant and up to 40% at 5 years. the use of glucocorticoids and calcineurin inhibitors, obesity, and advanced age are significant risk factors for the development of dm. the development of dm in lung transplant recipients is associated with decreased survival, so close and judicious glycemic control is indicated in this patient population [52, 53]. patients who undergo lung transplantation have multiple risk factors for developing acute kidney injury (aki) post-transplant, including decreased renal perfusion before, during, and/or after surgery, drug toxicities, and systemic infections. aki affects as many as 70% of patients, with approximately 8% requiring renal replacement therapy (rrt). postoperative renal failure necessitating the use of rrt is associated with increased risk of early mortality [54, 55]. by 3 years, 25% of surviving lung transplant recipients develop severe renal dysfunction (serum creatinine >2.5 mg/dl), and that percentage rises to 40% at the 10-year mark [1]. risk factors for the development of chronic kidney disease (ckd) include older age, dm, hypertension, smoking history, and use of nephrotoxic drugs. ckd is also associated with higher mortality in lung transplant recipients [56]. recipients of lung transplants are at risk for developing osteopenia and osteoporosis due to multiple factors such as malnutrition, immobility, chronic corticosteroid use, calcineurin inhibitor use (e.g., tacrolimus), and other comorbidities. strategies to prevent and reverse bone loss after transplant need to be proactively implemented. treatment includes adequate supplementation of calcium and vitamin d, use of bisphosphonates, enhancing physical activity, and minimizing contributing medications, if possible [57, 58].
dyslipidemia is also very common in lung transplant recipients, affecting as many as 59%, and it may be related to the aforementioned metabolic risk factors. treatment usually entails lifestyle modifications and cholesterol-lowering medications. there are multiple cardiac complications after lung transplantation, both short and long term. atrial dysrhythmias are very frequent in the early postoperative period, likely related to the stress of major surgery, catecholamine surge, medication side effects, and mechanical stresses related to the vascular anastomoses. the reported incidence has been as high as 25-35% [59, 60]. these arrhythmias are usually managed with medications aimed at rate and rhythm control; hemodynamically significant and/or refractory arrhythmias may require electrical cardioversion. atrial dysrhythmias are associated with increased length of hospital stay and increased mortality [59, 60]. over the long term, lung transplant recipients are at increased risk of developing coronary artery disease (cad). as they progress into long-term survival, these patients accumulate the impact of risk factors previously discussed in this chapter, namely dm, dyslipidemia, ckd, hypertension, chronic corticosteroid use, and other immunosuppressive medications. these risk factors should be carefully managed to decrease the impact of cad and related complications, with a combination of lifestyle modifications and specific medical therapies [61]. lung transplant recipients experience a decrease in skeletal muscle strength and function, including of the respiratory and limb muscles. this is likely related to reduced postoperative activity and deconditioning, corticosteroid-induced myopathy, critical illness-related weakness (neuropathy/myopathy), and, in the case of the diaphragm, phrenic nerve injury. this issue seems to be consistent across lung transplant recipients and independent of pre-transplant diagnosis and surgery type.
muscle weakness, deconditioning, and sarcopenia are associated with adverse outcomes and decreased quality of life. aggressive rehabilitation is a standard and important part of post-transplant care [62, 63]. lung transplant recipients are at an increased risk of acquiring infections due to the immunosuppressed state, constant environmental pathogen exposure, decreased cough reflex, impaired mucociliary clearance, and lymphatic disruption. infectious complications are responsible for about a quarter of post-transplant deaths [64]. pneumonias are the most significant bacterial infection in lung transplant recipients, and the highest risk is in the first 30 days post-transplant. in the early period, they are more likely to be caused by hospital-acquired organisms, which tend to be more virulent and more resistant to antibiotics. patients with cystic fibrosis are frequently colonized by multidrug-resistant organisms and are at increased risk of pneumonia post-transplant. in later stages, community-acquired organisms become more prevalent. moreover, throughout the post-transplant period, patients are susceptible to numerous opportunistic infections [65]. other commonly encountered bacterial infections in this patient population include pleural space infections, bloodstream infections (bsis), and soft tissue infections. bsis and empyema carry a high risk of morbidity and mortality [66, 67]. pseudomonas aeruginosa, burkholderia cepacia, and other gram-negative organisms, as well as staphylococcus aureus (including methicillin-resistant strains), are common causes of serious infections in the post-lung transplant period. these organisms have high rates of antibiotic resistance and are associated with worse outcomes [68] [69] [70]. streptococcus pneumoniae is the most common cause of community-acquired pneumonia, and immunosuppressed patients have an increased risk of disseminated infection [71].
clostridium difficile-associated diarrhea is a major complication in hospitalized, immunosuppressed, and debilitated patients and is associated with increased hospital length of stay and mortality [72]. molds are common fungal entities affecting lung allografts. aspergillus spp. are the most common and have a predilection for the respiratory tract [73]. lung transplant recipients have the highest incidence of invasive aspergillosis among solid organ transplant recipients, and it is the most common invasive fungal infection in lung transplantation. aspergillus is ubiquitous in the environment and is acquired by inhalation. there are three main described presentations: invasive pulmonary disease, tracheobronchial aspergillosis, and disseminated disease, all of which are associated with varying degrees of increased mortality. other implicated molds include fusarium, scedosporium, and the agents of mucormycosis. these infections are difficult to treat and are associated with poor clinical outcomes [73]. candida spp. are another common pathogen in the lung transplant setting. oral candidiasis is the most common manifestation of this infection; however, candida infections can also manifest as candidemia, empyema, surgical wound infection, and disseminated disease. serious candida infections have been associated with increased mortality, though rates have been declining over time [74]. other fungal infections in this patient population include opportunistic infections, such as pneumocystis jiroveci and cryptococcus, as well as endemic fungi, such as histoplasma capsulatum, coccidioides immitis, and blastomyces dermatitidis [75, 76]. viral infections contribute to morbidity and mortality from acute infection and have been associated with an increased risk of rejection, chronic allograft dysfunction, lymphoproliferative and other neoplastic diseases, and other extrapulmonary organ damage [77].
cytomegalovirus (cmv) is the most significant viral infection occurring in solid organ transplant recipients and is the second most common infection overall, after bacterial pneumonia. cmv infection can range from latent infection, to asymptomatic viremia, to cmv disease manifested by clinical symptoms and end-organ involvement. severity of disease may range from mild to life threatening. when there is organ damage, affected organs can include the lungs, pancreas, intestines, retina, kidneys, liver, and brain. cmv disease is associated with increased mortality [77, 78]. other notable dna viruses from the herpesviridae family include epstein-barr virus (ebv), which is associated with increased risk of ptld and other malignancies, herpes simplex virus (hsv) 1 and 2, varicella-zoster virus (vzv), and human herpesviruses 6, 7, and 8 [77]. community-acquired respiratory viruses, including influenza, are a major source of respiratory symptoms and morbidity after lung transplantation. these infections may also be associated with the development of chronic allograft dysfunction [79]. currently, the median survival for all adult lung transplant recipients is 6 years [1]. bilateral lung recipients appear to have a better median survival compared to single-lung recipients (7 versus 4.5 years) [1]. overall, lung transplantation confers clinically meaningful and statistically significant improvements in health-related quality of life (hrqol). greater than 80% of lung transplant recipients report no activity limitations [80]. the care of lung transplant recipients is multidisciplinary, labor intensive, and comprehensive. it includes management of the immunosuppression regimen, opportunistic infection prophylaxis, and prevention and management of various comorbidities and complications.
a typical medication regimen consists of three classes of immunosuppressive drugs (i.e., a calcineurin inhibitor, a cell-cycle inhibitor, and corticosteroids), as well as opportunistic infection prophylaxis against pneumocystis jiroveci, other fungal infections, and cmv. in the early postoperative period and after hospital discharge, recipients are closely monitored in the outpatient setting. typical clinic visits include thorough medication reconciliation, clinical examination, pulmonary function testing, chest radiographs, and laboratory examinations. the role of surveillance bronchoscopies with transbronchial biopsies in monitoring of the lung allograft remains unclear. while lung transplantation improves survival and quality of life in patients with end-stage lung disease, it is associated with a multitude of noninfectious and infectious complications. lung transplant recipients have one of the shortest survival rates among solid organ recipients, due to some unique characteristics of the lung allograft, including its unique blood supply and risk for ischemia, disruption of the native lymphatics and neural supply during the transplant surgery, and exposure to immunogenic entities via ventilation. among noninfectious complications, pgd, vte, and rejection are the most important. clad affects most patients long term and remains a significant clinical concern and contributor to early mortality in lung transplant recipients. lung transplant recipients are also at increased risk for a variety of malignancies, due to their underlying disease, comorbidities, and immunosuppressed status; thus they require vigilant monitoring and screening for cancer. infectious complications (i.e., bacterial, fungal, viral) are also important contributors to morbidity and mortality, with bacterial pneumonias and cmv most commonly seen.
patients require multidisciplinary and intensive follow-up and aftercare, ongoing vigilance, early recognition and treatment, and open and frequent communication between recipients, caregivers, and healthcare team providers.
references:
- the registry of the international society for heart and lung transplantation: thirtieth adult lung and heart-lung transplant report--2013; focus theme: age
- long-term health status and quality of life outcomes of lung transplant recipients
- the registry of the international society for heart and lung transplantation: thirty-fourth adult heart transplantation report-2017; focus theme: allograft ischemic time
- every allograft needs a silver lining
- lung transplant airway hypoxia: a diathesis to fibrosis?
- a critical role for airway microvessels in lung transplantation
- pulmonary complications of lung transplantation
- report of the ishlt working group on primary lung graft dysfunction, part i: definition and grading-a 2016 consensus group statement of the international society for heart and lung transplantation
- report of the ishlt working group on primary lung graft dysfunction part iii: mechanisms: a 2016 consensus group statement of the international society for heart and lung transplantation
- report of the international society for heart and lung transplantation working group on primary lung graft dysfunction, part ii: epidemiology, risk factors, and outcomes-a 2016 consensus group statement of the international society for heart and lung transplantation
- report of the ishlt working group on primary lung graft dysfunction part iv: prevention and treatment: a 2016 consensus group statement of the international society for heart and lung transplantation
- venous thromboembolic complications of lung transplantation: a contemporary single-institution review
- pulmonary embolectomy after single-lung transplantation
- diaphragmatic paralysis: a complication of lung transplantation
- leuven lung transplant g. phrenic nerve dysfunction after heart-lung and lung transplantation
- post-surgical and obstructive gastroparesis
- gastroparesis is common after lung transplantation and may be ameliorated by botulinum toxin-a injection of the pylorus
- upper gastrointestinal dysmotility in heart-lung transplant recipients
- acute and chronic pleural complications in lung transplantation
- pleural space complications associated with lung transplantation
- pleural effusion from acute lung rejection
- mesothelioma after lung transplantation
- frequency and management of pneumothoraces in heart-lung transplant recipients
- shifting pneumothorax after heart-lung transplantation
- endovascular management of early lung transplant-related anastomotic pulmonary artery stenosis
- four-year prospective study of pulmonary venous thrombosis after lung transplantation
- pulmonary venous obstruction after lung transplantation. diagnostic advantages of transesophageal echocardiography
- primary graft dysfunction and other selected complications of lung transplantation: a single-center experience of 983 patients
- airway complications and management after lung transplantation: ischemia, dehiscence, and stenosis
- airway complications after lung transplantation: treatment and long-term outcome
- segmental nonanastomotic bronchial stenosis after lung transplantation
- acute cellular and antibody-mediated allograft rejection
- are symptom reports useful for differentiating between acute rejection and pulmonary infection after lung transplantation? heart lung
- the role of transbronchial lung biopsy in the treatment of lung transplant recipients. an analysis of 200 consecutive procedures
- revision of the 1996 working formulation for the standardization of nomenclature in the diagnosis of lung rejection
- acute allograft rejection: cellular and humoral processes
- transplant/immunology network of the american college of chest p. a survey of clinical practice of lung transplantation in north america
- antibody-mediated rejection of the lung: a consensus report of the international society for heart and lung transplantation
- acute antibody-mediated rejection after lung transplantation
- acute antibody-mediated rejection after lung transplantation
- chronic lung allograft dysfunction phenotypes and treatment
- update on chronic lung allograft dysfunction
- bronchiolitis obliterans syndrome: the achilles' heel of lung transplantation
- an international ishlt/ats/ers clinical practice guideline: diagnosis and management of bronchiolitis obliterans syndrome
- therapy options for chronic lung allograft dysfunction-bronchiolitis obliterans syndrome following first-line immunosuppressive strategies: a systematic review
- neutrophilic reversible allograft dysfunction (nrad) and restrictive allograft syndrome (ras)
- restrictive allograft syndrome (ras): a novel form of chronic lung allograft dysfunction
- comparison of the incidence of malignancy in recipients of different types of organ: a uk registry audit
- spectrum of cancer risk among us solid organ transplant recipients
- bronchogenic carcinoma complicating lung transplantation
- disseminated ureaplasma infection as a cause of fatal hyperammonemia in humans
- risk factors for development of new-onset diabetes mellitus after transplant in adult lung transplant recipients
- prevalence and predictors of diabetes after lung transplantation: a prospective, longitudinal study
- short-term and long-term outcomes of acute kidney injury after lung transplantation
- incidence and outcomes of acute kidney injury following orthotopic lung transplantation: a population-based cohort study
- chronic kidney disease after lung transplantation: incidence, risk factors, and treatment
- osteoporosis and fractures after solid organ transplantation: a nationwide population-based cohort study
- bone loss and fracture after lung transplantation
- contemporary analysis of incidence of post-operative atrial fibrillation, its predictors, and association with clinical outcomes in lung transplantation
- atrial arrhythmias after lung transplant: underlying mechanisms, risk factors, and prognosis
- new-onset cardiovascular risk factors in lung transplant recipients
- skeletal muscle force and functional exercise tolerance before and after lung transplantation: a cohort study
- maximal exercise capacity and peripheral skeletal muscle function following lung transplantation
- pneumonia after lung transplantation in the resitra cohort: a multicenter prospective study
- nocardia infections in solid organ transplantation
- significance of blood stream infection after lung transplantation: analysis in 176 consecutive patients
- empyema complicating successful lung transplantation
- multidrug-resistant gram-negative bacteria infections in solid organ transplantation
- the impact of pan-resistant bacterial pathogens on survival after lung transplantation in cystic fibrosis: results from a single large referral centre
- methicillin-resistant, vancomycin-intermediate and vancomycin-resistant staphylococcus aureus infections in solid organ transplantation
- invasive pneumococcal infections in adult lung transplant recipients
- clostridium difficile in solid organ transplant recipients
- mold infections in lung transplant recipients
- fungal infections in lung transplant recipients
- endemic fungal infections in solid organ transplantation
- cryptococcus neoformans infection in organ transplant recipients: variables influencing clinical characteristics and outcome
- dna viral infections complicating lung transplantation
- cytomegalovirus and lung transplantation
- community-acquired respiratory viral infections in lung transplant recipients: a single season cohort study
- quality of life in lung transplantation
key: cord-029226-eagbwk7j authors: williamson, brian title: beyond covid‐19 lockdown: a coasean approach with optionality date: 2020-06-29 journal: nan doi: 10.1111/ecaf.12414 sha: doc_id: 29226 cord_uid: eagbwk7j nan
maintaining across-the-board restrictions is socially and economically costly, has adverse distributional impacts, and is poorly targeted in terms of protecting health and health care provision. instead a win-win 'coasean' social contract could be forged to protect older people and other at-risk groups coupled with freedom from lockdown for everyone else. the social contract could involve a period of support and extra payments to older age groups to commit to home quarantine, but with the possibility of opting out. younger cohorts would be given the option of taking greater risks in return for liberty, fraternity, and greater economic participation. by doing so they would benefit themselves, but also support society economically and through acquired immunity. a few countries, such as new zealand and australia, which acted early and had a limited number of covid-19 cases, may be able to eliminate covid-19 and reopen their societies with ongoing strict border controls. however, given that covid-19 transmission is well established in most countries, near-term options in other countries are limited to mitigation. in the longer term, immunity acquired via infection, or ideally a vaccine, offers the prospect of a solution. however, the timing and effectiveness of vaccines are uncertain, and this is not a near-term option. in the near term, mitigation options include hygiene measures, physical distancing, testing and contact tracing to limit transmission, in addition to improved clinical care. however, physical distancing (especially via lockdown) is very costly in terms of liberty as well as economics. covid-19 differs from pandemic influenza in ways that matter for mitigation and elimination strategies. a number of studies do not consider such differences, for example barro (2020). first, unlike an influenza pandemic, which may involve waves over a period of months but typically evolves into less serious seasonal strains, covid-19 is expected to persist.
influenza also has a lower reproduction number (r 0 ) than covid-19 (biggerstaff, cauchemez, reed, gambhir, & finelli, 2014) , so population immunity could be acquired at a lower level of infection. a short-term quarantine can therefore be effective in avoiding or limiting the impact of an influenza pandemic but not covid-19, though it buys valuable time. second, unlike influenza, covid-19 is very unusual in its markedly disproportionate risk of killing older people, in addition to those at risk due to chronic health conditions. in contrast, mortality during the 1918-19 influenza pandemic was u-shaped or w-shaped with age; that is, many younger people died in addition to older people (taubenberger, 2006) . this difference is relevant to the economic impact in terms of reduction in workforce participation due to deaths or fear of deaths. a further consideration is immunity. covid-19 infection produces immunity, but the longevity of such immunity and the extent of cross-immunity with other coronaviruses is unknown. the nature of immunity, which may differ from that for influenza, will impact the dynamics of the covid-19 infection (kissler, tedijanto, goldstein, grad, & lipsitch, 2020) . figure 1 shows the marked variation in the infection-fatality ratio by age group. the risk of death if infected in the 80-plus age group is around 250 times that for 20-29 year-olds; more broadly, the risk of death if infected in the under-60 age group is 0.145 per cent, versus 3.28 per cent in the 60-plus age group, a 23-fold difference. while younger people are at greatly reduced risk from covid-19, they are on the other hand likely to suffer some of the more severe impacts in terms of forgone education, employment, and social and longer-term opportunities from measures to increase physical distancing. economic harm, particularly unemployment, can in turn be expected to have an adverse impact on mental health in particular. 
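the age gradient quoted above can be checked with simple arithmetic, using the infection-fatality ratios cited in the text (0.145 per cent for the under-60 group versus 3.28 per cent for the 60-plus group):

```python
# arithmetic check of the age gradient in the infection-fatality ratio
# (ifr) cited above; the two ifr values are taken from the text.
ifr_under_60 = 0.00145  # 0.145 per cent, under-60 age group
ifr_60_plus = 0.0328    # 3.28 per cent, 60-plus age group

fold_difference = ifr_60_plus / ifr_under_60
print(round(fold_difference))  # reproduces the 23-fold difference cited
```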
the combination of low health risk for younger people from covid-19 with disproportionately high economic and social costs from the current policy response suggests that a more targeted policy response is desirable. given that the risk of dying from covid-19 is a sharply increasing function of age, two broad suggestions have been put forward in the uk:
• extend the stay-at-home recommendation for those aged over 70 to those aged over 60 (osama, pankhania, & majeed, 2020).
• release the under-30s from lockdown (oswald & powdthavee, 2020).
however, while these proposals better reflect population risk, neither is sustainable or sufficient to restart the economy and protect the most vulnerable. a larger group than those under 30 need to be released, and some, but not all, of those aged 50-60 are at significant risk and account for a significant fraction of years of life lost. sustainable responses that can bridge the gap to longer-term solutions are required which do not involve the high costs of lockdown in terms of liberty, employment, education, income, and broader physical and mental health outcomes. this is a pressing challenge, in particular for those at the start of their adult lives. in this article, building on a blog post where the idea was first suggested (williamson & wilson, 2020), what is proposed is a coasean social contract that recognises the reciprocal nature of the problem of mitigating the risk of harm to health, welfare, and the economy from the covid-19 pandemic. the bargain is 'coasean' in recognising that social costs (externality) can be reciprocal, an idea developed by ronald coase, the nobel prize-winning economist. coase analysed the case of sparks from trains setting fire to crops, where the train company could mitigate sparks while the farmer could avoid cultivation of crops close to railways; an efficient bargain between the two could in principle be struck (coase, 1960).
externalities arise because individual actions have consequences for others in terms of infection (from contact), in terms of reduced population immunity (from avoiding contact), and in terms of the risk that health care resources are redirected to treat those with covid-19. a functioning economy is also required to support society, and economic harm can be expected to have health implications (case & deaton, 2015). reciprocity arises because either those at higher risk can isolate themselves or those who might infect them but are at lower risk can be locked down to reduce the spread of covid-19. however, while some risk factors can be identified, individuals will have information about their own risk, risk preferences, and opportunity cost of lockdown that is not available to a central authority but which should inform decisions about who is isolated. what is proposed is a combination of central design and individual decisions, a coasean social contract, that recognises the reciprocal nature of the problem and allows individuals to opt in or out of defined categories in return for receiving or forgoing social and financial support. reflecting different individual preferences, there would be optionality for those at low risk to isolate without payment (they are not contributing to the societal good of a build-up of population immunity), while those at higher risk could opt out of isolation but would forgo support and payment. research on individual preferences could be conducted to inform the choice of default thresholds and incentives for a coasean social contract.
a further option would be to require those at risk who opt out of self-isolation to pay a risk-adjusted health insurance surcharge reflecting the broader risk they pose in terms of health care costs and the risk of overloading the health care system (in contrast to the moral hazard involved with health insurance opt-in, those choosing to pay an insurance surcharge to opt out of isolation might be at lower risk than average for their cohort). however, a surcharge for opting out may be considered inequitable. this approach may also have the benefit of permitting an additional feedback loop as the load on the health system evolves, namely by changing the eligibility cohort and/or by changing the payment in return for isolation, or potentially holding an online auction to achieve a given level of additional opt-in. the ability to influence r 0 via modest distancing measures and to keep the growth in covid-19 cases manageable would also be enhanced by growth in the proportion of people with a degree of immunity (well short of herd immunity) as younger cohorts re-enter education and work and socialise. again, the covid-19 health risk for this group, while non-zero, is low. this approach may also have lower costs to the economy than turning distancing measures on or off for everyone as epidemic spread subsides or picks up again, since the ongoing uncertainty associated with such epidemic dynamics limits individuals' ability to plan and invest, and may make some businesses non-viable, for example in hospitality and tourism. while financial incentives could undermine incentives for voluntary sacrifice and compliance for behavioural reasons, they are also more tuneable. it can be difficult, for example, to communicate clearly to the public the changes in the detailed rules in relation to home quarantine and physical distancing, or potentially to maintain a high level of compliance while extending the period of compliance (briscese, lacetera, macis, & tonin, 2020).
centralised and decentralised responses to covid-19 can, and would, both play a part in mitigating overall harm under a coasean social contract. the proposed approach could substantially reduce the economic and social cost of the covid-19 policy response while limiting mortality and the risk of overloading the health-care system. the degree of heterogeneity in terms of risk and preferences across individuals in the age group most at risk may be very large. some of those at increased risk may be highly productive or simply value outside economic and social opportunities highly, others less so. some may have a diminished quality of life, and/or may have died in the near term irrespective of covid-19. others who are healthy may consider the increased risk of premature death to be a small price to pay in return for freedom. this is relevant to the trade-offs individuals might make and to a societal assessment of alternative policy options. for deaths involving covid-19 that occurred in march 2020 in the uk, there was at least one pre-existing condition in 91 per cent of cases (ons, 2020). neil ferguson (2020) , director of the mrc centre for global infectious disease analysis at imperial college london, considered that it might be that as many as half to two-thirds of those who had died from covid-19 in the uk early in 2020 would have died by the end of the year from other causes. however, a study of hospital cases (excluding care homes) found that stratifying by age and multimorbidity counts showed that average years of life remaining were rarely below three (hanlon et al., 2020) . infection-fatality ratios can be combined with expected years of life remaining from life tables (ons, 2019) to obtain expected years of life lost, conditional on catching covid-19. these estimates can also be adjusted based on estimated health-adjusted life years remaining. 
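the calculation just described can be sketched as follows. the infection-fatality ratio and life-table inputs below are illustrative assumptions, not figures taken from the article; the comorbidity discount follows the article's assumed adjustment for the 70-plus group:

```python
# sketch of the calculation: expected years of life lost (yll)
# conditional on infection = infection-fatality ratio x expected years
# of life remaining from life tables, optionally discounted for
# comorbidity. all numeric inputs are illustrative assumptions.
def expected_yll(ifr, years_remaining, comorbidity_discount=0.0):
    """expected years of life lost, conditional on catching covid-19."""
    return ifr * years_remaining * (1.0 - comorbidity_discount)

# assumed inputs for the 80-plus age group: ifr ~7.8%, ~8 years of
# remaining life expectancy from life tables.
typical = expected_yll(0.078, 8.0)                       # typical health
adjusted = expected_yll(0.078, 8.0, comorbidity_discount=2 / 3)

print(round(typical * 12, 1), round(adjusted * 12, 1))   # loss in months
```

with these assumed inputs the expected loss comes out at several months of life for the highest-risk group, and roughly a third of that once the assumed comorbidity discount is applied, which is broadly the order of magnitude the article discusses.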
these estimates can be made for covid-19 infection on the assumption that years of life lost are 50 per cent lower for those aged 50-69 and two-thirds lower for those aged 70-plus. those most at risk face an expected loss of around six months of life on the assumption of typical health, and around two months of life if the assumed impact of comorbidity is allowed for. compared with the reduced quality of life associated with lockdown, some might regard this risk as modest, though individuals tend to be risk-averse. individuals are likely to have very different attitudes to this uncertain prospect. there are, therefore, grounds for not only moving to an age-specific policy response to covid-19, but also moving from mandates to incentives given large variations in individual trade-offs and private information about such trade-offs. what is proposed is a shift to an age-specific set of policy defaults, but with optionality and incentives to allow individuals to make individual choices. developing an approach which recognises individual heterogeneity and the importance of private information and preferences to individual and socially efficient trade-offs is more likely to prove sustainable, since it more closely aligns with individual preferences and incorporates support and compensation for those bearing the greatest burden in terms of isolation. the approach is intended as a bridge to a time when population immunity develops, ideally via an effective vaccine. the novel social contract set out here could be explored further by governments who have pursued mitigation via physical distancing but find that population fatigue is limiting its effectiveness or that the economic and social cost for younger cohorts in particular is simply too high. the approach seeks to recognise both individual preferences and two particular social benefits. first, society as a whole would benefit from getting younger cohorts back into education, training and work; and from the immunity this group would build up.
they should therefore be not only allowed to return to 'normal' life but encouraged to do so, via a reduction in financial support to pre-existing safety net levels. second, society as a whole would also benefit by encouraging those groups considered to be at high risk to stay at home for the medium term. compulsion may not be sustainable, may be regarded as discriminatory, and is not ideal as some individuals may have low risk or high productivity or simply prefer liberty alongside the risk from covid-19. a combination of support and financial incentives coupled with the option to opt out is preferable to compulsion. the goal of this possible 'third way' is not to minimise deaths per se, but to go beyond a health optimisation approach to a broader well-being-maximising one, taking account of individual preferences and trade-offs. the proposed approach places greater weight on individual choice coupled with incentives rather than mandates, in part because such an approach may be more likely to have legitimacy over an extended time frame than the prevailing lockdown approach. it also recognises that individuals have more information about their risks and preferences, and these will differ across individuals. it also has the benefit of representing a social contract which, in contrast to across-the-board restrictions, recognises the contribution everyone is making while improving intergenerational equity. the views expressed in this article are the author's own and not those of communications chambers, which has no collective view.
non-pharmaceutical interventions and mortality in u.s. cities during the great influenza pandemic, 1918-1919. nber working paper 27049
estimates of the reproduction number for seasonal, pandemic, and zoonotic influenza: a systematic review of the literature. bmc infectious diseases
compliance with covid-19 social-distancing measures in italy: the role of expectations and duration
rising morbidity and mortality in midlife among white non-hispanic americans in the 21st century
the problem of social cost
science and technology committee. parliamentlive.tv
covid-19 - exploring the implications of long-term condition type and extent of multimorbidity on years of life lost: a modelling study
projecting the transmission dynamics of sars-cov-2 through the postpandemic period
national life tables: uk. 25 september
deaths involving covid-19, england and wales: deaths occurring in
protecting older people from covid-19: should the united kingdom start at age 60?
the case for releasing the young from lockdown: a briefing paper for policymakers
the origin and virulence of the 1918 'spanish' influenza virus
estimates of the severity of coronavirus disease 2019: a model-based analysis. the lancet
might a 'coasean' social contract mitigate overall societal harm from covid-19? public health expert
how to cite this article: williamson b. beyond covid-19 lockdown: a coasean approach with optionality
key: cord-254436-89zf41xr authors: singer, professor donald rj title: health policy implications of the links between cardiovascular risk and covid-19 date: 2020-09-03 journal: health policy technol doi: 10.1016/j.hlpt.2020.09.001 sha: doc_id: 254436 cord_uid: 89zf41xr nan
in the meantime, public health measures are the mainstay for containing spread of infection with sars-cov-2, complemented by access to high quality supportive treatment and efforts to develop targeted approaches to reduce infection and disease severity in people at high risk of serious morbidity and death from covid-19. however, eight months since this new respiratory syndrome was first reported to international authorities, effective test and trace systems have not yet been internationally implemented, even across all well-developed healthcare systems. for example, in the uk, reporting of test results has fallen to below 50% within 24 hours and one in seven home testing kits are reported to fail to yield a result [3] . there are major global efforts underway to develop vaccines against covid-19, with 19 candidates as of 31 july 2020 entered into clinical studies, including phase 2 and 3 trials [4] . however, their short and long-term effectiveness and safety remain to be established. the usual questions for a new vaccine remain to be answered. will vaccines prevent covid-19 or at least improve prognosis from the infection? will groups at higher risk from covid-19 respond as well as the often healthier volunteers in clinical trials? the timeline also remains uncertain for widespread public protection if and when safe and effective vaccines become available. international networks for pharmacovigilance against adverse effects of covid-19 vaccines are needed, with for example utrecht university in the netherlands being commissioned by the european medicines agency as a hub for a europe-wide network [5] . people with comorbidities are more likely to be infected with sars-cov-2, especially those with hypertension, coronary heart disease, diabetes mellitus and obesity. they are also more likely to have worse outcomes from covid-19, with similar associations in reports for example from china, the usa and italy [6, 7] . 
people with cardiovascular risk factors or established cardiovascular disease also experience a high case-fatality rate from covid-19 [5, 6]. for example, hypertension was reported in 40% of patients who died [odds ratio for death, 3.05 (95% ci: 1.57-5.92)] in a meta-analysis of over 40,000 confirmed covid-19 patients in china [6]. in the same report, cardiovascular disease [cvd] was associated with a 5-fold increase in risk of death from covid-19 [6]. although the elderly are at greater risk of infection and death, younger adults are also at risk, especially those who are obese [8] and/or from black and asian ethnic minorities [9]. a recent meta-analysis of almost 400,000 subjects [8] reported that patients with a bmi over 30 kg/m² were ~50% more likely to develop covid-19 and, for those with covid-19, over twice as likely to be admitted to hospital for treatment, ~75% more likely to be admitted to an intensive care unit, and had a ~50% greater mortality than the less overweight. for patients from bame groups, a lower bmi threshold of over 25 kg/m² appeared associated with worse severity from covid-19. in addition to being at increased risk of covid-19, obese patients also appear less likely to respond effectively to influenza immunization [10]. there are therefore concerns that obese people may also respond less well to immunization against sars-cov-2. however, as an example of the global health challenge, obesity remains an international epidemic, despite international efforts including the sustainable development goals for health adopted by g20 countries [11], despite its recognition as a disease by many organisations [12], including the american medical association since 2013, and despite the long-established role of obesity as a major contributor to serious disorders of the heart, brain and circulation, as well as many cancers, joint disease and poor mental health.
the who estimates that the prevalence of obesity has tripled since 1975 and that by 2016 there were 650 million obese people globally (1.6 billion overweight) [13]. reasons why black and ethnic minorities (bame) are more at risk of infection with sars-cov-2 and of worse outcomes from covid-19 are unclear [8]. for example, in one study in the uk, one third of patients admitted to icu due to covid-19 were from an ethnic minority [14], with similar reports from the usa. possible reasons include a higher prevalence in bame populations of cardiovascular risk factors (e.g. hypertension, diabetes mellitus, insulin resistance and obesity), socioeconomic, cultural, or lifestyle factors, and genetic predisposition. there may also be pathophysiological differences in susceptibility or response to infection due, for example, to increased prevalence of vitamin d deficiency. an increased inflammatory burden may also contribute to worse outcomes. ace-2 (angiotensin converting enzyme ii) is the key docking protein by which the covid-19 virus binds to cells [15]. this is also the key cell entry receptor used by the initial sars-cov [14]. ace-2 is mainly found on vascular endothelial cells, the renal tubular epithelium and the leydig cells of the testis. copies of the ace-2 protein are present in increased numbers in patients with risk factors for heart disease. ace-2 could thus be a therapeutic target in the treatment of covid-19. however, enzymatic activity of ace2 controls activation of the renin-angiotensin-aldosterone system (raas), a current therapeutic target in cardiovascular and renal disease. there were concerns that common medicines such as ace inhibitors (acei) or angiotensin receptor blockers (arbs), used to treat hypertension or heart failure by inhibiting the renin-angiotensin system, could adversely affect ace2 expression.
however, studies to date in sars-cov-2-infected patients do not suggest that these raas modulators influence susceptibility to the infection or cause more severe covid-19. indeed, in a meta-analysis of almost 29,000 patients with covid-19, use of raas inhibitors for any condition showed a trend to lower risk of death or critical events (odds ratio 0.67, 95% ci 0.43 to 1.03, p = 0.07). within the hypertensive cohort, treatment with acei or arbs was associated with one third less mortality from covid-19 (odds ratio 0.66, ci 0.46 to 0.96, p = 0.03) and a one third reduction in the combined end-point of death and critically severe outcomes (odds ratio 0.67, ci 0.50 to 0.91, p = 0.01) [16]. this was, however, an observational study, and there is as yet no evidence as to whether adding an acei or arb to treatment would influence the outcome of covid-19. myocardial injury is found in >25% of critical cases of covid-19 and presents in two patterns: acute myocardial injury and dysfunction on presentation, and delayed myocardial injury that develops as illness severity intensifies [5]. there are also potentially serious drug-cardiac disease interactions affecting patients with covid-19 and associated cardiovascular disease, for example from empirical anti-inflammatory treatments [6]. sars-cov-2 may also cause hypercoagulability, resulting in unexpectedly severe lung damage from widespread thromboses and disseminated intravascular coagulation adding to lung injury from covid-19 pneumonia [17]. these features suggest complement-mediated thrombotic microangiopathy as a contributory factor and may give clues to treatment beyond anticoagulation to prevent life-threatening microangiopathy [17, 18]. an indirect factor in covid-19-related increased severity of cardiovascular disease is malnutrition in patients self-isolating at home.
this may directly increase risk of falls, heart attack and stroke, especially when patients continue diuretics and other blood pressure-lowering medicines despite reduced oral intake of food and drink, a recognized cause of hypotension and falls. other indirect reasons for concern about increased prevalence and severity of cardiovascular disease because of the covid-19 pandemic include poorer recognition and control of cardiovascular risk factors and established serious disorders of the heart, brain and circulation due to reduced access to medical services. particularly in less developed countries, public transport is vital for access to health care facilities. both public transport services and medical facilities have been seriously limited during covid-19 restrictions, and availability of funds to pay for medical services has been severely reduced. for example, in india, over 75% of the country's substantial workforce of 100 million migrant workers lost their jobs overnight, public transport services were critically reduced, and many healthcare facilities closed [19]. increasing recognition of these links between cardiovascular risk and disease and severity of covid-19, including mortality, offers opportunities to improve outcomes of covid-19 in the large number of patients with these common disorders. understanding the pathophysiology and exploring potential solutions and treatments to reverse worse outcomes in patients at increased cardiovascular risk is a priority for health researchers and clinical health services around the world. this is all the more pressing as there is an international epidemic of the preventable cardiovascular risk factors which have been linked to increased severity of covid-19.
health policy makers also need to take steps to extend influenza immunization to all groups now recognized to be at risk of more serious covid-19, including the obese, others with increased cardiovascular risk, and people from black and other at-risk ethnic minorities. policy makers will need to make extra efforts to make sure that these vulnerable people take part in influenza immunization programmes. this requires measures to make sure that accessing points of care will not put people at risk of acquiring covid-19. policy makers also need to build public awareness of the current extra importance of influenza immunization and confidence in the safety of accessing medical services. the involvement of policy makers to ensure sustained financial and social solutions for covid-19 is urgently needed, to complement the efforts against covid-19 of health professionals, regulators and the pharmaceutical and biotechnology industries. these efforts will not be successful without also addressing the cardiovascular and other factors that contribute to higher risk from covid-19.
situation updates. website for the european centre for disease prevention and control. accessed 27 th
a pneumonia outbreak associated with a new coronavirus of probable bat origin
uk coronavirus live: daily cases tally jumps by nearly 500 to reach 1,522. uk guardian news website
vaccines and treatment of covid-19. european centre for disease prevention and control
ema to monitor real world use of covid-19 treatments and vaccines
covid-19 and cardiovascular disease: from basic mechanisms to clinical perspectives
clinical determinants for fatality of 44,672 patients with covid-19. crit care
individuals with obesity and covid-19: a global perspective on the epidemiology and biological relationships
asian and minority ethnic groups in england are at increased risk of death from covid-19: indirect standardisation of nhs mortality data
obesity impairs the adaptive immune response to influenza virus
soft power and global health: the sustainable development goals (sdgs) era health agendas of the g7, g20 and brics. bmc public health
regarding obesity as a disease: evolving policies and their implications
is ethnicity linked to incidence or outcomes of covid-19?
ace2 as a therapeutic target for covid-19; its role in infectious processes and regulation by modulators of the raas system
effect of renin-angiotensin-aldosterone system inhibitors in patients with covid-19: a systematic review and meta-analysis of 28
covid-19 cytokine storm: the interplay between inflammation and coagulation
emerging evidence of a covid-19 thrombotic syndrome has treatment implications
the author has no conflict of interest to declare. he is the president of the fellowship of postgraduate medicine, for which health policy and technology is an official journal. during 2014 he was a physician and pharmacologist in rwanda within the us aid and us cdc human resources for health program.
key: cord-017479-s4e47bwx authors: pulcini, elena title: spectators and victims: between denial and projection date: 2012-03-16 journal: care of the world doi: 10.1007/978-94-007-4482-0_6 sha: doc_id: 17479 cord_uid: s4e47bwx this chapter goes into the unproductive metamorphosis of fear, and analyses the defence mechanisms that it generates: namely denial and projection. in the case of global risks, fear provokes self-defensive strategies based on denial (in the face of the nuclear challenge) and self-deception (in the face of global warming); and, in the case of the threat of the other, projective and persecutory strategies based on reactivating the dynamic of the 'scapegoat'.
these are two contrasting but mirror-image responses which, at the emotional level, reflect the split between (unlimited) individualism and (endogamous) communitarianism. the first, implosive response converts into an absence of fear, attested to above all by the figure of the global spectator, while the second, explosive response converts into an excess of fear (fear of the other, fear of contamination), fuelled by forms of reinventing community. these responses are defined as irrational since in the first case they inhibit the spectator's capacity to recognize himself as also a potential victim of the threats, thus preventing his mobilization, and in the second case they give rise to dynamics of demonization-dehumanization of the other, which result in a spiral of violence and impede forms of solidarity. in this case a subjective factor comes into play, linked to the capacity and manner of perceiving the threats. it is telling that sociology and psychology converge on the importance of this aspect, underlining the fact that the very characteristics of the risk have a definite influence on the way in which it is perceived. 4 ulrich beck had already stressed the fact that the often invisible nature of global risks, the unforeseeability of their effects and the only potential character of the damage they provoke mean that they are removed from our perception and require the intervention of a reflexive attitude interpreting the new scenarios through a knowledge that is equal to the new challenges. but in reality the problem is more complex still, since rather than an absence, we are faced with processes that distort the perception and assessment of risk, which affect both the emotional and the cognitive spheres, and above all how they interact together. among the approaches sensitive to this problem, the one which seems to dwell on it most is cognitive psychology.
starting from the classic studies by chauncey starr and then fischhoff and slovic, 5 and on the basis of the so-called psychometric paradigm, cognitive psychology has built complex cognitive maps aimed at providing as exhaustive a list as possible of the variables that influence the subjective perception of risk. the conclusions that have emerged from this interpretative approach show, for example, that concern in the face of threats (whether they derive from particular activities, substances or technologies) grows in correspondence to certain characteristics, amongst which are the involuntary nature of the risks, the impossibility of controlling them, their capacity to cause irreversible damage and their originating from an unknown source. but above all, the results stress the fact that individuals are subject to distorted assessments and judgements in relation to the risks they are exposed to. for example, they tend to overestimate threats publicized by the media even if they are infrequent; to consider dangers undertaken voluntarily as more acceptable compared to those to which we are subjected or which are completely unprecedented or not very familiar; and to feel fear in the face of very vivid events (11 september 2001), while at the same time being quite incapable of a historical memory that links these same events together. while the merit of this approach is that it accepts and recognizes the presence of the subjective aspect and the uncertainty factor in defining the concept of risk, pointing out the presence of non-rational responses, its limits lie in its still strongly assuming the notion of probability. 7 namely, it ignores what is instead underlined by mary douglas, that is, the social and institutional context and the symbolic-cultural factors that influence the perception of threats, 8 and it reproposes the idea of an essentially individualistic and de-contextualized social actor based on an abstract idea of rationality.
finally, and in part deriving from the latter aspect, what most interests us here is that the limit of this approach lies in its failure to account for the why: the deep reasons that distort a correct perception and assessment of the risks. in this connection, based on the reassessment of the role of emotions that has greatly questioned the hegemonic paradigm of rationality 9 over the last few decades, some authors have underlined that cognitive and emotional factors have to go together in order to recognize the existence of a risk and to weigh up its possible consequences. 10 they have put forward the idea that the information that enters our cognitive system can only have an effective impact on our action if it succeeds in creating images laden with emotion in our psyche. in other words, this means that we can be perfectly aware of particular threats without this involving us emotionally. put differently, only if it converts into the capacity to 'feel', to react emotionally and imagine its possible effects, can our knowledge of the risk effectively be said to be knowledge, and therefore produce apt mobilization. now, the problem with regard to global risks seems to be prompted, as günther anders had already perfectly grasped in his diagnosis of fear in the age of technology, by the very imbalance between knowing and feeling. this imbalance is none other than one of the many variants of the psychic split that characterizes the contemporary subject and that anders, as has already been hinted, calls the 'promethean gap'. with this expression, he alludes in general to the detachment between the faculties, first of all between the power to do and the capacity to foresee, which characterizes contemporary homo faber, or rather homo faber who has become homo creator.
paradoxically, what corresponds to the immense human power to produce and create permitted by developments in technology is man's inability to imagine its consequences: the faculties have got further and further away from each other so that now they can no longer see each other; as they cannot see each other, they no longer come into contact, they no longer do each other harm. in short: man as such no longer exists; there exists only he who acts or produces on one hand, and he who feels on the other; man as producer and man as feeling, and only these specialized fragments of men have a reality. 11 no more are our imagination and our emotions equal to our unlimited power; at this point man's soul is irreparably 'outdated' with respect to what he produces and his colossal performances. in short, no more can we keep up to date with our promethean productivity and with the world that we ourselves have built: we are about to build a world that we cannot keep up with, and, in order to "catch" it, demands are made that go way beyond our imagination, our emotions and our responsibility. 12 this 'schizophrenia', 13 which is where the fundamental pathology of our time resides, prompts the paradoxical and ambivalent combination of power and impotence, activity and passivity, knowledge and unawareness that exposes the contemporary prometheus not only to previously inconceivable risks but, also and above all, to the impossibility of recognizing their destructive potential. this pathological drift appears particularly evident in the risk par excellence of the age of technology, which undermines not only the quality of individuals' lives (as in the case of the possible effects of the biotechnologies), but humankind's very survival on the planet: namely, the risk produced by the creation of the nuclear bomb, which we can recognize as the first effectively global challenge.
14 before the horror of hiroshima and the spectre of humankind's self-destruction, anders says: we really have gained the omnipotence that we had been yearning for so long, with promethean spirit, albeit in a different form to what we hoped for. given that we possess the strength to prepare each other's end, we are the masters of the apocalypse. we are infinity. 15 but the inability of our imagination to be equal to our unlimited power makes the latter mortally dangerous and transforms us into potential victims of what we ourselves have built: we, the men of today, are the first men to dominate the apocalypse, hence we are also the first to be endlessly subject to its threat. we are the first titans, hence we are also the first dwarves or pygmies - or whatever else we care to call ourselves, we beings with our collective deadline - we are no longer mortal as individuals, but as a group, whose existence is exposed to annulment. 16 suffice it to think that it is impossible to see the bomb as simply a means; an impossibility generated by the fact that if someone used the bomb… the means would not be extinguished in the purpose but, on the contrary, the effect of the presumed "means" would put an end to the purpose. and it would not be one effect, but an unforeseeable chain of effects, in which the end of our life would be but one link among many. 17 the gap between the power to do and the power to foresee therefore gives rise to the paradoxical coexistence of omnipotence and vulnerability, which exposes future humankind and the whole of civilization to the risk of extinction, thereby configuring the apocalyptic scenario of a 'world without man'. 18 but the problem does not stop here. indeed, if men, even when faced with the loss of foresight and projectuality caused by their own action, were capable of recognizing the reality of the danger, a change of direction could be set in motion to restore their control over their future.
or, to put it in terms that allow us to return to our theme, if people felt fear in the face of the spectre of self-destruction and the enormity of the risks ahead, they would probably manage to break that promethean spiral of unlimitedness and restore sense and purpose to their action. furthermore, this is the normative premise at the basis of hans jonas's whole line of argument in favour of an ethics of responsibility. he starts from a similar diagnosis to that of anders on the drifts of technological power and the threats, for the whole living world, produced by a 'finally unbound prometheus', to suggest what he defines as a 'heuristics of fear' as the precondition for ethically responsible action. '[…] it is an anticipated distortion of man,' he says, 'that helps us to detect that in the normative conception of man which is to be preserved from that threat […]. we know the thing at stake only when we know that it is at stake.' 19 this means that only the fear of 'losing the world' can push us to responsibly take on the problem of how to preserve it. i shall come back to the nexus between fear and responsibility later on. 20 but the problem, which anders strongly underlines - showing, unlike jonas, its complex anthropological and psychic roots - is that today we are in the presence of the unavailability of fear; in actual fact fear is paradoxically absent, owing to the additional and deeper manifestation of the promethean gap, which is the imbalance between knowing and feeling. indeed, there is no one who does not know what the bomb is and who does not know its possible, catastrophic consequences, but, anders adds, 'most people indeed only "know" it: in the emptiest of manners'. 21 this asynchrony, anders points out, is something that pertains to human nature as a matter of fact. in general, this is not in itself bad, since it only shows that feeling is slower to transform.
however, so to speak, it degenerates into a pathology when the gap between the faculties becomes too wide, as is happening today. as a consequence, it breaks all bonds and communication between them, 23 and reduces contemporary men to the 'most dissociated, most disproportionate in themselves, most inhuman that have ever existed.' 24 therefore, it is here, in the inadequacy of our emotional resources with respect to our productive power, that the anthropological root of our 'blindness to the apocalypse' lies. 25 and this inadequacy, which holds for all the emotions in general, concerns fear first of all. everyone, in however confused a manner and in spite of the minimization strategies implemented by those who produce it, realizes that the bomb is not a pure means whose function ends in the fulfilment of a purpose, but a monstrous 'unicum' that, together with our lives and the lives of future generations, can put an end to all purposes tout court. 26 yet, surprisingly, there is no fear: if today we were to seek out fear (angst),* real fear, in vienna, paris, london, new york - where the expression 'age of anxiety' is very much in use - the booty would be extremely modest. of course, we would find the word 'fear', in swarms even, in whole reams of publications […]. because today fear has become a commodity; and these days everyone is talking about fear. but those talking out of fear these days are very few. 27 if we are to observe our present-day situation, we could even claim that the more fear becomes the subject of talk in the newspapers and mass media, the more it is withdrawn from emotional perception and anaesthetized by the reassuring urgency of routine and day-to-day concerns. the anaesthetizing mechanism also works in a manner directly proportionate to the enormity of the risk and the stake at play. while it may be true that at best we are able to imagine our own death, but not that of tens or thousands of people, and that we may be able to destroy a whole city without batting an eyelid while not managing, however, to imagine the actual, terrible scenario of 'smoke, blood and ruins', it is inevitable that we are totally incapable of perceiving the destruction of all humankind 28 : 'before the thought of the apocalypse, the soul remains inert. the thought remains a word.' 29 even though today the end of humankind has entered the sphere of possibility, and even though man himself is responsible for this, the psyche removes the thought of this possibility, thus preventing fear from arising. hence, we are illiterate in fear - 'analphabeten der angst' - and 'if one had to seek a motto for our age, the most appropriate thing to call it would be "the era of the inability to feel fear"'. 30 anders's diagnosis concerning the anaesthetizing of fear and the imbalance between knowing and feeling seems to find a perfect correspondence in that distinctive defence mechanism that freud defined as 'denial of reality'.
notes: 22 ibid., i, 269. 23 ibid., i, 267-68. 24 ibid., i, 271-72. 25 see ibid., part iv, i, 234ff.; anders underlines its historical roots, such as trust in progress, which prevents man from thinking of an 'end', and above all the configuration at the anthropological level of what he defines as the 'medial man', whose passive and conformist action ends up removing his ability to project himself into the future, together with all sense and purpose; see ibid., part 5, i, 276ff. 26 ibid., i, 254ff. * translator's note: anders only uses one term - angst - and does not distinguish between anxiety and fear. since, however, the meaning with which he uses the term angst coincides more with 'fear' in the acceptation put forward by elena pulcini, i have decided to translate it with 'fear' so as to distinguish it from 'anxiety'. 27 ibid., i, 264.
31 more complex and subtle than repression (verdrängung), which indicates the operation by which the subject pushes particular representations linked to an instinct into the unconscious, and which for freud becomes a sort of prototype of defence mechanisms, denial (verleugnung) causes the self, despite rationally recognizing a painful and difficult situation, to prevent this from reaching the emotional sphere. 32 in other words, while repression is a defence against internal instinctual demands, denial is a defence against the claims of external reality, 33 which is rationally recognized, but not emotionally felt or participated in. this converts into that distinctive ambivalence of 'knowing and not-knowing' which has recently been underlined. in his recent sociological valuation of the concept of 'denial', stanley cohen stresses this ambivalence, pointing it out as the most interesting side of the concept, 36 and above all the most suited to accounting for a series of phenomena that characterize contemporary reality. explicitly drawing from psychoanalysis, whose worth he acknowledges - if nothing else against the reductive simplifications of cognitive psychology 37 - as more than any other approach having grasped the elusive quality of the concept of denial, cohen offers a definition that first of all takes into account the meaning that is most general and common to its various forms: […] people, organizations, governments or whole societies are presented with information that is too disturbing, threatening or anomalous to be fully absorbed or openly acknowledged. the information is therefore somehow repressed, disavowed, pushed aside or reinterpreted. or else the information "registers" well enough, but its implications - cognitive, emotional or moral - are evaded, neutralized or rationalized away.' 38 on the basis of this premise, cohen analyses the many forms of denial.
it can occur in good faith or be deliberate and intentional; it changes in relation to the subjects' different positions, that is, whether they are victims, guilty parties or witnesses; and it depends on how the object is evaluated, which can be expressed through a simple refusal to acknowledge the facts, through a different interpretation, or through a rationalization that aims to head off its psychological, political and moral implications. but the most disconcerting and problematic form, since it can affect whole cultures - as is the case today - is the one that makes the subjects of the denial aware and unaware at the same time, that is, placed on the threshold between consciousness and unconsciousness. here they do have access to the reality, but in such a way as to ignore it, since it is too frightening or painful, or simply too unpleasant to accept. 'we are vaguely aware,' cohen says, 'of choosing not to look at the facts, but not quite conscious of just what it is we are evading. we know, but at the same time we don't know.' 39 for example, much more than the intentional denial which is often implemented by political actors and institutional authorities to cover up regrettable facts and unpopular decisions, this is the frame of mind that most interests and disturbs us, because it can explain the widespread and paradoxical indifference with which common people react to situations of suffering, atrocities and violence. 40 tellingly, the focus of cohen's whole and well-documented analysis seems to be the figure of the 'passive bystander' who, when faced with other people's suffering (whether this is experienced in a direct manner, like a rape or an episode of bullying, or is distant, like genocide or torture), defensively withdraws from all involvement, pretending not to see and not to know, inhibiting emotional reactions, minimizing the event's import or changing channel if the information is transmitted through mass-media images. hence the bystander withdraws from facing up to painful and embarrassing situations and avoids all possible mobilization. therefore, cohen seems, quite rightly, to rediscover denial above all as a reaction of defence in the face of other people's suffering where this assumes such proportions as not to be acceptable to the psyche. as a consequence, he finds it to be the root of the emotional indifference that today seems to permeate contemporary societies. nevertheless, as we have seen, anders's reflection allows us to grasp another aspect of denial that sharpens its paradoxical nature, since it concerns the tendency to ignore, wipe out or minimize something that not only concerns other people's destinies, but that threatens our own lives: as in the exemplary case of denying the global challenge par excellence, the nuclear risk.
notes: 34 ibid., 22. 35 anders, die antiquiertheit des menschen, i, 269-70. 36 cohen, states of denial. 37 'the cognitive revolution of the last thirty years has removed all traces of freudian and other motivational theories. if you distort the external world, this means that your faculties of information processing and rational decision making are faulty.' (ibid., 42). 38 ibid., 1. 39 ibid., 5. moreover, this is the core of the freudian concept, which evidently presupposes the idea of splitting the ego (ichspaltung): 'freud,' says cohen, 'was fascinated by the idea that awkward facts of life could be handled by simultaneous acceptance and disavowal. they are too threatening to confront, but impossible to ignore. the compromise solution is to deny and acknowledge them at the same time.' (ibid., 27).
consistent with anders's diagnosis, a few decades ago the psychoanalysis of war had already reflected on the radical changes caused by the nuclear threat with respect to the traditional forms of war conflict, and hence explained, more or less indirectly, the psychic roots of this specific case of denial. while underlining the abstract or phantasmal nature of the danger at the objective level - due to the invisibility and intangibility of nuclear weapons, the distance of the target, as well as the bureaucratic 'normality' of those who hold the actual decision-making power - some authors have singled out the unprecedented nature of the nuclear conflict in its split from, and autonomization with respect to, the individual's instinctual sphere. 41 that is, unlike traditional war, based on mobilizing aggressive instincts, nuclear war (its destructive potential) appears as a mechanical event, or rather a psychologically unreal event, in which the 'enemy' himself, far from being the object of projective dynamics, becomes an inanimate abstraction with whom all emotional bonds are lost. 42 this sort of 'dehumanization' of war, which affects the relationship with the other and the relationship with oneself to the same extent, thereby producing its 'devitalization', 43 is at the root - together with the enormity of the risk and the impossibility of 'thinking the unthinkable' 44 - of the denial of the danger, which immunizes individuals from emotional involvement and, therefore, from true awareness. it is telling that, in addition to denial, martin wangh spoke of a 'narcissistic withdrawal', 45 as he alluded to the entropic and self-defensive strategy of individuals reduced to passive and indifferent 'spectators' of events.
notes: 40 'the grey areas between consciousness and unconsciousness are far more significant in explaining ordinary public responses to knowledge about atrocities and suffering' (ibid., 6).
individuals who, with respect to events, preclude any form of effective reaction and thus inhibit the insurgence of fear at the outset. i will return to the 'spectator phenomenon' 46 shortly. as i have already hinted, this phenomenon is one of the most disturbing pathologies of contemporary individualism. first, however, it is interesting to dwell on one of the - so to speak - more active variants of denial, which consists not only of withdrawal from a reality that is uncomfortable or painful for the psyche, sheltering in a sort of emotional indifference, but of lying to ourselves in order to believe something that responds not to our rational evaluations but to our desires. this is self-deception, a defence mechanism that has tellingly been defined as 'the most extreme form of the paradox of irrationality'. 47 without going into the (at times muddled) analytical controversies relating to a concept that is without doubt slippery and problematic, 48 we can, however, try to sum up the characteristics - shared by many authors - which prove fruitful in further extending the picture relating to the metamorphosis of fear in the global age. self-deception is what pushes individuals to form a belief that contrasts with the information and proof at their disposal, since their desires end up interfering with their vision of reality and cause them to act in a different way from what their rational judgement would suggest. in other words, it consists of believing something because one desires it to be true, 49 hence it converges, despite some differences, with the dynamic of wishful thinking. 50 like denial, meant in its pure form, so to speak, self-deception implies ichspaltung, no matter what name may be given to what freud identified as the splitting of the ego. 51 finally, like denial, it is an ambivalent phenomenon, since it acts on that threshold between consciousness and unconsciousness which, as cohen stresses in this case too, creates a paradoxical situation of knowing and not-knowing.
notes: 43 martin wangh speaks of 'dehumanization' and 'devitalization' (meant as the impoverishment of the ability to feel) in 'narcissism in our time: some psychoanalytic reflections on its genesis,' psychoanalytic quarterly 52 (1983). 44 the allusion is to herman kahn, thinking about the unthinkable (new york: horizon press, 1962). 45 martin wangh, 'the nuclear threat: its impact on psychoanalytic conceptualizations,' psychoanalytical inquiry, no. 6 (1986). 46 the expression (zuschauer-phänomen) is from martin wangh, 'die herrschaft des thanatos,' in zur psychoanalyse der nuklearen drohung. vorträge einer tagung der deutschen gesellschaft für psychotherapie, psychosomatik und tiefenpsychologie, ed. carl nedelmann (göttingen: verlag für medizinische psychologie, 1985). 47 david pears, 'the goals and strategies of self-deception,' in the multiple self, ed. elster, 60; giovanni jervis, fondamenti di psicologia dinamica (milan: feltrinelli, 1993); and massimo marraffa, 'il problema dell'autoinganno: una guida per il lettore,' sistemi intelligenti, no. 3 (1999): 373-403. 48 '[…] self-deception,' davidson says, 'is a problem for philosophical psychology. for in thinking about self-deception, as in thinking about other forms of irrationality, we find ourselves tempted by opposing thoughts.' (donald davidson, 'deception and division,' in the multiple self, ed. elster, 79). 49 ibid., 86.
52 but while denial appears, as we have seen, effective in explaining the lack of perception and the anaesthetizing of fear in the face of the nuclear threat, self-deception can prove pertinent in order to understand the complex emotional response that individuals give to the other global risk already brought up above: that is, the twofold environmental risk of global warming and the depletion of the ozone layer, which by no means seems to generate that mobilization of the whole of humankind which it would instead - urgently - require. 53 from this point of view, the recently proposed definition of 'global risks in the making' or 'potentially global' risks, which tends to distinguish them from the global risk par excellence represented by nuclear power, 54 can prove extremely useful in explaining the however blurred difference in the subject's reaction, and in further illuminating the phenomenology of fear. the indefinite nature that without doubt also pertains to the nuclear risk is greatly accentuated here, owing to the fact that global warming and the depletion of the ozone layer have wider margins of uncertainty, created by their inertial nature and the impossibility of measuring and foreseeing their future development, and therefore of calculating with certainty, together with their possible effects, the last deadline for possible countermeasures. their ungraspable and invisible nature, further compounded by the difficulty of pointing the finger of blame, means that, in spite of the alarming international reports on the climate and reliable scientific forecasts of the devastating future damage, and moreover given increasing mass-media coverage, individuals mostly seem to fail to suitably perceive the phenomenon. instead, it is often shrugged off with detached irony towards the excessive catastrophism, with resigned declarations of impotence, or with the expression of enlightened trust in the capacities of technology to repair the situation.
notes: 50 in paradoxes of irrationality (in richard a. wollheim and james hopkins, eds., philosophical essays on freud (cambridge: cambridge university press, 1982)), davidson upholds that in wishful thinking desire produces a belief without providing any proof in its favour, so that in this case the belief is evidently irrational. however, he underlines the differences between self-deception and wishful thinking: unlike the second, the first requires the agent's intervention, that is, the agent has to 'do' something to change his way of seeing things; in the second the belief always takes the direction of positive effect, never of negative, while in the first the thought that it triggers can be painful (see 'deception and division', 85ff.). 51 in this connection pears speaks of 'functional insulation', 'goals and strategies of self-deception', 71; davidson speaks of 'boundaries': '[…] i postulate such a boundary somewhere between any (obviously) conflicting beliefs. such boundaries are not discovered by introspection; they are conceptual aids to the coherent description of genuine irrationalities.' 'deception and division', 91-92. on self-deception and the splitting of the ego, see herbert fingarette, self-deception (london: henley-routledge, 1969). 52 see cohen, states of denial, 37ff. 53 it is important to point out that the second problem (the one relating to the risk of ozone layer depletion) nevertheless found some solutions as of the montreal protocol in 1987, made possible by the fact that they did not require costs or relinquishments in terms of economics or lifestyle. 54 d'andrea, 'rischi ambientali globali e aporie della modernità'.
55 in other words, despite being rationally known and recognized, the risk does not produce such emotional involvement as to give rise to effective answers. at most it produces a widespread and generic feeling of anxiety which ends up imploding, sucked in by the much more real worries of everyday life. the causes of this paradoxical situation can be traced first of all to the same dynamic of fear of which, as i will recall, hobbes's diagnosis had grasped an essential aspect. namely, fear as a necessary and vital passion that allows us to respond to the immediate danger (of death) loses its efficacy when the danger, and the damage it could cause, are shifted to the future, that is, when a time gap inserts itself between the present action (based on destructive passions) and its possible consequences. thus all certainty and inexorability are taken away from the evil, enabling individuals to imagine it as a remote and avoidable possibility, for which it makes no sense to mobilize themselves immediately. in other words, in this case fear does not manage to overcome the passions of the present. hobbes's intuition is all the more valid in the case of global risks, whose possible damage is even more remote and does not concern current individuals, but future generations. that is, fear does not have the strength to change present action (and therefore the underlying desires and passions) when the damage that this action can cause is not an evil for ourselves but for 'others': anonymous, generic and distant in time. in short, by weakening fear, the future nature of the damage makes it easy for essentially self-preserving and narcissistic individuals to deceive themselves as to the actual extent of the risk and therefore to minimize or deny its possible consequences.
in this case, the aim is not so much for individuals to defend themselves emotionally from events that are too painful to bear (as in the case of nuclear conflict), but to carry on with a manner of acting that allows them to legitimize and satisfy their current desires, preserve their lifestyles and not lose consolidated privileges. to once again recall the pathologies of the global self, we could say that the acquisitive voracity of homo creator, orientated towards unlimited growth, combines with the parasitic bent of a consumer individual anchored solely to the present, to prevent access - through the cunning of self-deception - to a correct perception of the catastrophic effects of climate change, global warming, the greenhouse effect and the depletion of the ozone layer. this appears all the more paradoxical where these effects start to be dramatically visible: tropicalization of the climate, desertification, destruction of the ecosystem, lethal viruses and infectious diseases are no longer remote possibilities but disturbing proof of environmental risks. by now scattered all over the planet, 56 they affect whole geographical areas and populations, damaging more and more the illusion of individuals' and states' immunity. indeed, despite not just abstract information and forecasts, but a more and more invasive state of affairs that is starting to concern them at close quarters, and guaranteed and supported by the instrumental interests of local politics and the global economy, individuals prefer to deceive themselves in order not to pay the costs of relinquishing their current desires, assets and pleasures; they are further eased, in this self-defensive operation, by the morally innocent, innocuous and banally everyday nature of the action that produces the risks.
57 moreover, the absence of a 'productive' fear, inhibited by denial and self-deception, is not belied by the cyclical outbursts of panic and collective hysteria in the face of the sudden appearance of threats (as has always been the case, from chernobyl to sars and bird flu). on the contrary, the absence and the excess of fear are nothing but two sides of the same coin, 58 the two extreme and 'unproductive' manifestations of what i defined as global fear. both denial and self-deception leave individuals in the passive position of spectators of events. thus they are enclosed in the immunitarian circuit of a self-defensive and self-preserving individualism which anaesthetizes fear and is incapable of converting it into effective action, practice or political participation. alongside the two extroverted pathologies, so to speak, of unlimited individualism, represented by the insatiable voracity of the consumer individual and the omnipotence of homo creator, appears a third, paradoxically introverted configuration, namely a passive and impotent individual, who helplessly watches the destructive effects of his own action, over which he seems to have lost all capacity for orientation and control. against the loss of objective spaces of protection and security, increasingly eroded by the global diffusion of the risks, he seems to seek shelter, as i have already hinted, 59 in a sort of interior immunity, entrenching himself in the emotional indifference that is just one of the many manifestations of narcissism. in addition, the yearning for immunity becomes more tenacious and obstinate the more it is felt to be ineffective and illusory.
thus a new condition is outlined, which to recall the metaphorical figures proposed by hans blumenberg in his shipwreck with spectator, 60 is neither the premodern and 'lucretian' condition of the spectator watching the shipwreck from a safe place, sheltered from the danger, nor the modern and 'pascalian' condition of being the actors of our own lives, 'être embarqués', involved in the things of the world and ready to put ourselves at stake first of all by recognizing the constitutive precariousness of the human condition and accepting the very risk of existence. while modernity had ratified the decline of the spectator figure, and enhanced the moments of practice and action, involvement and commitment; and while late modernity had radicalized his condemnation by emphasizing the need to expose oneself to risk and accept the uncertainty and fluidity of the human condition, 61 the global age seems to be objectively bringing the spectator up-to-date, which nevertheless coincides with a deep and disturbing change with respect to the figure of the lucretian wise man. the erosion of boundaries and disappearance of an 'elsewhere' - redrawing global space, cancelling out the distinction between inside and out - is turning into the loss of free areas from where the shipwreck can be observed. at this point, due to the end of every real guarantee of immunity, deprived of the possibility of a safe harbour where he can feel sheltered from the world's dangers, the global self withdraws into the only space apparently able to protect him from events and threats that he is not able to deal with: namely, the wholly interior space of an emotional indifference, an anaesthetizing of emotions, generated by implementing sophisticated and for the most part unconscious defence mechanisms.
in other words, the spectator figure is undergoing a process of interiorization, which replaces the spatial distance from the shipwreck and the contemplative safety of the lucretian subject with the apathetical extraneousness and obstinate blindness of the one who refuses to recognize the very risk of the shipwreck, and encloses himself in the entropic space of an inert solitude. moreover, the spectator phenomenon seems to pervade the whole social structure, due to the spectacularization of reality that, as guy debord had already masterfully diagnosed a few decades ago, deeply upsets the very nature of social relations. 62 by denouncing the erosion of the boundaries between real and virtual and the pervasive power of images (mass media images first of all), and by diagnosing life's 'total colonization' by commodification processes and the indistinct overlapping of true and false as the effects of the 'society of the spectacle', debord had indeed grasped the spectator figure as the symptom and symbol of a new form of alienation that invades the individual's whole relationship with the world. passiveness and submission to the totalitarianism of images, prioritization of appearance, loss of contact with one's desires and genuine needs, atomism and isolation are among the most evident and disturbing characteristics of the spectator-individual, who thus ends up losing all capacity to be involved and to grasp reality. in short, the emotional indifference in which individuals shelter in order to cancel out the awareness of the risks surrounding them, unconsciously implementing powerful defence mechanisms, seems to be a sort of inevitable outcome of a widespread anthropological condition. or rather, it seems to be the extreme form of a general tendency towards apathy and inertia, produced by a spectacular society that empties reality of its contents and thus deprives individuals of pathos and action.
suffice it to think of the de-realizing effects, with respect to the effective drama of events, produced by mass media images (for example the first gulf war), 63 or the narcotizing addiction that they cause to dangers and catastrophes of all kinds (from tsunamis to sars). the images deprive events of the flesh and blood of the experience and neutralize them in the aseptic and equalizing space of the screen. however, the problem today is no longer the subject's passivization and atomization alone, nor his a-pathetical detachment from reality: aspects which, moreover, sociological reflection on narcissism had already underlined some time ago, and to which the most recent and sagacious sociological diagnoses do not fail to draw attention. 64 the problem, as we have seen, regards above all the negation of reality and the possible destructive effects of this denial on the very survival of individuals and the whole of humankind. by withdrawing into the immunitarian space of a self-defensive apathy, the global spectator performs a dangerously illusory operation which precludes the possibility to perceive and understand what the unprecedented risk of the global age is: namely, that he himself is the potential victim of events from which there is no shelter, or rather, from which there is no other possible shelter than active and universal mobilization. 65 while it may be true that the hallmark of global challenges is that they cross boundaries and no perimeter can be drawn around or circumscribe them, it is also true that everyone, in every corner of the planet, is always potentially exposed to their effects, that everyone is always potentially a victim of a shipwreck which, for the first time, could affect and sweep away humankind and all living beings. by anaesthetizing fear, the denial (and self-deception) strategy paradoxically ends up betraying the very same purpose that it had been implemented for: namely, self-preservation.
or rather, in order to pursue an entropic and defensive self-preservation that preserves them from all emotional and active involvement, not only are individuals undermining the quality of their lives, but the very preservation of humankind and the world. 63 see antonio scurati, televisioni di guerra. il conflitto del golfo come evento mediatico e il paradosso dello spettatore totale (verona: ombre corte, 2003), who observes how the increase in media exposure of the war phenomenon corresponds to a lesser ability, on the part of the spectator, to grasp its reality. as a result, on the part of the citizen there is less possibility to decide and act. in other words, the 'total visibility' offered by the television medium corresponds, in an only apparent paradox, to the blindness and impotence of the 'total spectator'. 64 this unwillingly nihilistic outcome could perhaps be interpreted as a radical and extreme manifestation of the immunitarian paradigm recognized as the very emblem of modernity, owing to which the preservation of life is paradoxically turned around into its negation. 66 however, what i would like to stress, to go back to anders's diagnosis, is the fact that - in this case at least - this worrying reversal originates in the pathologies of feeling and the denial of fear, which prevent individuals from recognizing their paradoxical condition of spectators and victims at the same time. denial, however, is just one of the unproductive metamorphoses of fear in the global age, and only one of the strategies that the global individual uses to contrast the anxious perception of new risks. denial sums up the individualistic and implosive response to the indefinite and unintentional threats produced by techno-economic globalization.
in parallel to this there emerges, as i had mentioned, another defence strategy, which responds to what is perceived as the second, fundamental source of danger, essentially generated by economic-cultural globalization: that is, defence against the other. this strategy is specular to the first since it converts more into an excess rather than an absence of fear, and i have suggested defining it as communitarian and explosive. 67 it is based on reducing insecurity and indefiniteness through the defence mechanism of projection: namely, the fear is displaced onto indirect and specious objects since these appear easier to define and identify. many of the ethno-religious conflicts that are traversing the planet can at least in part be traced back to this basic defence mechanism which converts indefinite anxiety into definite fear. in this case too, we are dealing with a strategy that is anything but new since, as we will see, it results in the classic mechanism of building a 'scapegoat'. however, the novelty lies in the fact that, like in the denial strategy, this strategy seems to be resulting in substantial ineffectiveness. if, as suggested to us by rené girard's enlightening diagnosis, the fundamental goal of creating scapegoats has always been, since the origins of civilization, to keep check on and resolve violence in defence of a given community, today we are instead faced with an escalation in violence which attests to the substantial failure of the scapegoat dynamic. through a fascinating thesis that i can only briefly recall here, 68 girard claims that in truth this loss of effectiveness has distant roots, since it coincides with the end of the processes which made violence ritual and sacred, and with the revelation of the victimage mechanism brought on by the advent of christianity. 69 66 see esposito, immunitas. 67 here there is a generic allusion to bauman's 'explosive communities' in liquid modernity. 68 of great use for the issues that follow is the essay by stefano tomelleri, "il capro espiatorio. la rivelazione cristiana e la modernità," studi perugini, no. 10 (2000): 147-57. in other words, while archaic societies had entrusted the rite of sacrificing the scapegoat with the function of providing a remedy to internal violence in order to found and preserve social order and peaceful coexistence among men, the revelation of christ radically damaged this mechanism since, by disclosing the victim's innocence, for the first time it made people aware of the victimizing and persecutory dynamics. by unmasking the nexus between violence and the sacred, the christian message led to the breakdown of the mythical-ritual universe, and placed people before the unavoidable truth of their violence. thus it weakened the possibility of resolving the violence through the sacrificial mechanism and opened totally new scenarios, affected by a fundamental ambivalence. on one hand, by depriving men of all external justification for their violence, the christian revelation of the victim's innocence opened up the possibility of renouncing the scapegoat logic and resolving the problem of the social bond, without any exclusion or sacrifice; on the other hand, in the absence of ritual antidotes and their power to create order, it exposed men to the spreading of violence and the persistence - in more ambiguous, disguised and clandestine forms - of the victimage mechanism. that this second scenario is the one which, unfortunately, has ended up prevailing is manifestly undeniable; and, paradoxically, it can be pinpointed as originating above all in modernity. while it may be true that modernity - the time of rights, democracy and equality - seems to offer the possibility of transforming violence into 'soft', peaceful and even emancipatory forms of competition and rivalry, it is also true that, for the same reasons, it can provide a breeding ground which favours the heightening of violence.
indeed modernity produces an amplification of the mimetic dynamic that girard recognized as the constitutive source of violent conflictuality among men. as has been underlined, the same equality that, à la tocqueville, can be interpreted as a loss of differences, frees the mimetic desire, which becomes unlimited 70 and inevitably exacerbates rivalry among people. in other words, in a society of equals the desire to be according to the other, which pushes the mimetic actor to see the other as model and rival at the same time, triggers a spiral of competitive comparison. even the smallest difference becomes the opportunity for resentment, envy and hate, and can always provide the opportunity for violent clashes. while on one hand democratic indifferentiation and, we could add, narcissistic and postmodern intolerance towards every difference - which tocqueville had prophetically diagnosed 71 - provoke the continuance of rivalry and conflict, on the other hand the sacrificial dynamic, to which premodern societies had entrusted the function of keeping check on violence, seems to have lost its traditional efficacy due to its irreversible disclosure. this means that modern and contemporary societies are exposed to a radical 'crisis of the sacrificial system' which, since it is impossible to find a solution in the scapegoat mechanism, can result in a multiplication of violence and its manifestation in increasingly crude and destructive forms. 72 the loss of the victimage mechanism's efficacy, due to the deritualization process, does not equate to its disappearance, however. on the contrary, girard once again observes that phenomena of 'sacrificial substitutions' reappear 'in a shameful, furtive, and clandestine manner' so as to avert moral condemnation (and self-condemnation).
73 they take on the shape of psychological violence which is easier to conceal, or they re-explode in the exacerbated form of immolating victims to evil ideologies, as was the case of the genocides in the twentieth century. these mechanisms continue in our world usually as only a trace, but occasionally they can also reappear in forms more virulent than ever and on an enormous scale. an example is hitler's systematic destruction of european jews, and we see this also in all the other genocides and near genocides that occurred in the twentieth century. 74 of course the reference to the nazi genocide is not random, but extremely emblematic of the modern and contemporary reappearance of the victimage mechanism in spite of its disclosure. a first formulation of this can be found in the diagnosis of totalitarianism that franz neumann was already suggesting in the 1950s, as he traced its psychic origins back to the transformation of fear into 'persecutory anxiety'. 75 every time, neumann says, over the course of history a particular social group (whether it can be defined on the basis of class, religion or race) feels threatened by objective dangers which, together with material survival, compromise its prestige and identity, the deriving anxiety is displaced onto groups and people, who are given the requirements ad hoc, 76 and the guilt is made to converge on them. if we are to take up the freudian distinction between 'realistic anxiety' and 'neurotic anxiety', 77 neumann shows how fear and uncertainty are transformed into persecutory anxiety through the projective and hence specious creation of an enemy who becomes the subject of hate and aggression. as a result, the masses threatened with disintegration can rediscover their internal cohesion.
in the case of nazism and the persecution of the jews, political and ideological manipulation linked up to this social dynamic, took advantage of the mass anxiety and pushed the masses towards 'caesaristic' and regressive identification 78 with a leader libidinally attributed the task of resolving the anxiety by expelling the evil and its presumed carriers. 79 by recognizing the victimage mechanism as originating in the persecutory transformation of anxiety, neumann allows us to see its emotional roots, which girard evidently considers less essential for his so-to-speak ontological diagnosis of violence. however, at the same time, while neumann particularly stresses the totalitarian outcomes of the scapegoat dynamic, 80 girard underlines its persistence in 'all the phenomena of nonritualized collective transference that we observe or believe we observe around us.' 81 although deritualized - and indeed all the more violent for this precise reason - the victimage mechanism continues to act in the same modern democratic societies in all creeping and disguised phenomena of exclusion and discrimination, or in the cyclical explosions of reciprocal aggression and disdain that are fuelled by identity conflicts: 'we easily see now that scapegoats multiply wherever human groups seek to lock themselves into a given identity - communal, local, national, ideological, racial, religious, and so on.' 82 evidently, here we are coming back to the topic of identity conflict which, as we have seen, is proliferating inside and outside the west, bringing the scapegoat strategy back up-to-date: a strategy which becomes all the more aggressive the more the perception of the threat grows in a global society. by eroding territorial and cultural boundaries, globalization is producing, first of all in western societies, a disturbing proximity of the other.
as a result, the other can increasingly be identified with the simmelian figure of the 'stranger within', who challenges the order and cohesion of a given community through a swarming and liminal presence that is felt, as suggested by mary douglas, as potentially contaminating. coming forth in response to the siege of a hybrid and unstemmable multitude that is penetrating the protected spaces of our identity citadels is the ancestral fear of a 'contamination' endangering the need for 'purity' upon which, douglas says, every culture and civilization builds its reassuring separations and classifications. 78 neumann stresses the regressive nature of this identification mechanism for the very masses who implement it, since it involves alienation and the relinquishment of one's self: 'since the identification of the masses with the leader is an alienation of the individual member, identification always constitutes a regression' (anxiety and politics, 277). 79 'caesaristic identifications may play a role in history when the situation of masses is objectively endangered, when the masses are incapable of understanding the historical process, and when the anxiety activated by the danger becomes neurotic persecutory (aggressive) anxiety through manipulation.' (ibid.) 80 it is interesting to see how neumann indeed also alludes to the unconscious nature of the persecutory dynamic: 'hatred, resentment, dread, created by great upheavals, are concentrated on certain persons who are denounced as devilish conspirators. nothing would be more incorrect than to characterize the enemies as scapegoats […] for they appear as genuine enemies whom one must extirpate and not as substitutes whom one only needs to send into the wilderness.' (ibid., 279). 81 girard, i see satan fall like lightning, 160. 82 ibid., 160.
83 the other (the stranger, he who is different, the migrant, the illegal immigrant) becomes the target upon whom to displace our fears, upon whom to project a persecutory anxiety that transforms him into the person responsible for the dangers threatening a society that is increasingly deprived of the traditional control structures. 84 hence this enables that blaming process which is indispensable for social cohesion and which, however, the anarchic and anonymous logic of globalization seems to be progressively eroding. 85 but since it is no longer possible to rely on ritual expulsion practices or strategies to confine the other to a spatial and territorial elsewhere clearly divided by a definite boundary that traces the separation between an inside and an outside, the exclusion mechanism becomes interiorized and acts at an eminently symbolic level. the exclusion dynamic, as has been underlined, is shifted into the conscience: 'defence and exclusion, no longer possible towards the outside, will be shifted into the conscience, the imagination, the social mythologies and into the self-evident that these hold up.' 86 thus immunity is ensured through dehumanization processes that transform the stranger within (the metoikos) into an 'inside being' in such a way that he remains an 'outside being' all the same. 87 all this can take place in the insidious and hidden forms of psychological violence and everyday discrimination towards those who have crossed the territorial boundaries of a state and broken the taboo of distance and separation, therefore representing a constant challenge to consolidated privileges and to the 'purity' of identity. or it can occur through cyclical collective mobilization against the weak and marginalized in the attempt to deal with insecurity by displacing the fear onto problems of personal safety, which politics does not then hesitate to exploit, in self-legitimation, in the name of defending public order.
88 but, as we have already seen, it can also convert into a real and proper 'attack on the minorities', in which it is perhaps legitimate to recognize, as suggested by arjun appadurai, the distinctive form of violence spreading to the global level. when global insecurity is added to the delirious fantasy of national purity which appadurai defines as an 'anxiety of incompleteness', the majorities in every single state whose hegemony is threatened tend to transform into 'predatory identities'. their aim becomes to defend the purity of the ethnos by eliminating the element of disturbance represented by the 'minor differences'. the minorities 'are embarrassments to any state-sponsored image of national purity and state fairness. they are thus scapegoats in the classical sense.' more specifically, in the global age they 'are the major site for displacing the anxieties of many states about their own minority or marginality (real or imagined) in a world of a few megastates, of unruly economic flows and compromised sovereignties.' 89 from iraq to ex-yugoslavia, from indonesia to chechnya, from palestine to rwanda, to the emblematic case of the clash between hindus and muslims within a modern democracy like india, the victimage mechanism seems to reassert itself with a fresh violence that tellingly - testimony to the obsession with purity at its origin - seems to repeat itself in particular towards the body. indeed, as appadurai underlines by taking on douglas's perspective, the body becomes subject to unheard-of violations and atrocities (bodies massacred, decapitated, tortured, raped) in view of punishing the minorities for the fact that they 'blur the boundaries between "us" and "them," here and there, in and out, healthy and unhealthy, loyal and disloyal, needed but unwelcome.' 90 nevertheless, it is precisely this obsessive, punitive and purificatory nature that announces the danger that the violence may assume an unstemmable drift.
91 far from producing a stop to the violence, the scapegoat strategy causes its proliferation, through a sort of perverse up-the-ante that seems to bring the brutality of archaic practices, such as sacrifice, and of the starkest materiality back inside the abstract and impersonal space of globalization. 92 but that is not all. today the spiral of violence is further fuelled by a new factor that upsets the logic - to date essentially one-way - of the persecutors-victims relationship. what happens, unlike for example the emblematic case of nazism, is that the other tries to overturn his position as victim, and in turn becomes the persecutor, giving rise to a dynamic of hostility and aggression that potentially becomes unlimited owing to its reciprocal and specular nature. suffice it to think of islamic terrorism and the projection it puts upon the west as the image of the other and evil, against which, by fuelling passions of resentment, 93 a compact and endogamous us is condensed together and built. indeed this proves the fact that the scapegoat, as girard warns, is not necessarily embodied only in the weak and oppressed but also in the rich and powerful. 94 hence, in the grip of dehumanization on one hand and demonization on the other, 95 the world becomes a theatre, through the reciprocal invention of an enemy, of an escalation in violence that has much to do with the persecutory metamorphosis of insecurity and anxiety and very little to do with a presumed 'clash of civilizations'. 96 shifted to the inner self, the victimage mechanism continues to act, hidden from view. nonetheless, it ultimately becomes ineffective since it fails in its original purpose to resolve the fear and keep a check on violence.
orphaned of ritualization processes and deprived of an 'elsewhere' that permits the other's spatial and territorial exclusion, the construction of the enemy/victim generates forms of identity cohesion that are as aggressive as they are regressive, fuelled by a reciprocal persecutory projection. far from restoring cohesion and security to a given community, the scapegoat dynamic gives rise to endogamous and reciprocally exclusive processes of building an us, whose foremost and manifest effect is to form what i have defined as immunitarian communities: 97 whether they are the 'voluntary ghettoes' and 'communities of fear' that explode cyclically in a west frightened by the siege of the other and anything but free from regressive phenomena, or ethno-religious communities entrenched around the obsession of identity and homogeneity, willing to reactivate atrocious forms of excluding the other, or lastly global communities that come together around the war/terrorism polarization. the metamorphosis of fear in the global age therefore seems to confirm, at the emotional level, the pathological split between an unlimited individualism and an endogamous communitarianism, which originates in the implementation of defence mechanisms leading not only to the polarization of an absence and excess of pathos, but also, it needs to be stressed, to their substantial inefficacy. on one hand, the denial of fear, we have seen, pushes individuals towards forms of apathy and narcissistic entropy that prevent them from recognizing the new risks produced by global challenges. as a consequence, this produces the individuals' incapacity to perceive their unprecedented condition of spectators and potential victims at the same time, 94 and fuels the illusion of immunity: which means that in the name of entropic self-preservation we end up delivering the whole of humankind to the danger of self-destruction.
on the other hand, the persecutory conversion of fear generates perverse and endogamous forms of alliance and solidarity, which thereby result in the reactivation of destructive communities driven by 'primordial loyalties'. this gives rise to the explosive drift of identity conflicts and to an unlimited escalation of violence at the planetary level. between self-obsession and us-obsession, as the specular polarities of the same immunitarian strategy, we run the risk of not grasping the chance 98 that the global age could actually be capable of offering through the very transformations that it produces and the very challenges that it contains. on one hand, as we will see, the risks that are bearing down on humankind for the first time mean we can think of the latter as a new subject, as a set of individuals linked by their common vulnerability and weakness. therefore, they are able to take care of the world in the sense of the planet, the 'loss' of which would coincide with the disappearance of the only dwelling of living beings that we know of. on the other hand, the multiplication of differences and the slide of the idea of 'other' towards the notion of 'difference', which can neither be assimilated nor expelled into an elsewhere, for the first time makes it possible to rethink the social bond as the solidaristic coexistence of a plurality of individuals, genders, cultures, races, religions, capable of forming a 'world', à la arendt, since they are capable of recognizing not only the necessity but also the potential vitality of reciprocal contamination. these real possibilities are, however, only a chance. insofar as it is a chance, the subjects have the task of knowing how to grasp it.
to recall a successful suggestion by andré gorz, we could say that to profit from the chance in the first place means 'to learn to discern the unrealized opportunities which lie dormant in the recesses of the present'; 99 or, in a word, to lay a wager on the ability to build alternative scenarios and create possibilities that may not yet have been taken up but are still latent.

98 the expression is inspired by georges bataille who, as already remembered above, proposes the idea of chance meant as the 'possibility of openness'; see on nietzsche (london: athlone, 1992), originally published as "sur nietzsche," in oeuvres complètes, vol. 6 (paris: gallimard, 1976). 99 gorz, reclaiming work, 1.
originally published as des choses cachées depuis la fondation du monde. 'we haven't given up having scapegoats, but our belief in them is 90 percent spoiled. the phenomenon appears so morally base to us, so reprehensible, that when we catch ourselves "letting off steam" against someone innocent, we are ashamed of ourselves.'
originally published as je vois satan tomber comme l'éclair.
democratic and authoritarian state. neumann says there must always be a core of truth that makes this choice particularly dangerous: so in the case of the jews, the core of truth is given by their being 'concrete symbols of a so-called parasitical capitalism, through their position in commerce and finance'; on this see part ii.
purity and danger. an analysis of concepts of pollution and taboo. mary douglas underlines that risk itself becomes a resource at the moral and political level and speaks of a 'forensic theory of danger': 'disasters that befoul the air and soil and poison the water are generally turned to political account: someone already unpopular is going to be blamed for it.' (risk and blame, 5)
risk and blame. the minorities, appadurai continues, 'are metaphors and reminders of the betrayal of the classical national project. and it is this betrayal - actually rooted in the failure of the nation-state to preserve its promise to be the guarantor of national sovereignty - that underwrites the worldwide impulse to extrude or to eliminate minorities.'
originally published as orrorismo. fear of small numbers, chap. 3, again underlines the nexus between the abstract logic of globalization and the brutality of physical violence. 519-33; for an interesting treatment of the topic see stefano tomelleri

key: cord-018328-t3ydu75l authors: shi, peijun title: hazards, disasters, and risks date: 2019-06-05 journal: disaster risk science doi: 10.1007/978-981-13-6689-5_1 sha: doc_id: 18328 cord_uid: t3ydu75l

in this chapter, we will elaborate on three basic terms in the field of disaster risk science: hazards, disasters, and risks. we will also discuss the classification, indexes, temporal and spatial patterns, and some other fundamental scientific problems that are related to these three terms. atmospheric hazard: tropical cyclone, tornado, hail, snow, lightning and thunderstorm, long-term climatic change, and short-term climatic change. biophysical hazard: wildfire. space hazard: geomagnetic storm and extra impact events. the hazard groups proposed by joel c. gill et al. are almost equivalent to the hazard families of the icsu-irdr classification except for two differences. one difference is that the meteorological and climatological families of icsu-irdr were combined into a single atmospheric group in gill's classification. the other difference is that the hazard group of shallow earth processes was added in gill's classification in order to emphasize the hazardous impacts of shallow earth changes (table 1.1).
table 1.1 (after gill and malamud 2014) lists each hazard group with a code, a definition, and component hazards where applicable. in the geophysical group: earthquake (eq) - the sudden release of stored elastic energy in the earth's lithosphere, caused by its abrupt movement or fracturing along zones of preexisting geological weakness and resulting in the generation of seismic waves; component hazards: ground shaking, ground rupture, and liquefaction. tsunami (ts) - the displacement of a significant volume of water, generating a series of waves with large wavelengths and low amplitudes; as the waves approach shallow water, their amplitude increases through wave shoaling. volcanic eruption (vo) - the subterranean movement of magma and its eruption and ejection from volcanic systems together with associated tephra, ashes, and gases, under the influence of the confining pressure and superheated steam and gases; component hazards: gas and aerosol emission, ash and tephra ejection, pyroclastic and lava flows. in the book regions of risk by hewitt (1997), hazards were divided into the following categories: natural hazards include four types (meteorological, hydrological, geological and geomorphological, biological and disease hazards); technological hazards include hazardous materials, destructive processes, and hazardous designs; social violence hazards include weapons, crime, and organized violence; compound hazards include fog, dam failure, and gas explosion; complex disasters include famine, refugees, poisonous floods, and nuclear wastes and explosions of nuclear power plants (table 1.2). examples of the latter are famine (drought + poor harvest + food hoarding + poverty), refugee crises (famine + war), toxic floods (tailings dams + toxic waste + flood), and harmful nuclear tests and power plant explosions (nuclear explosion and pollution + atmospheric circulation + rain and atomic dust + migration). another way to categorize hazards is based on the environment where hazards occur (also called the disaster-formative environment).
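the group-to-hazard assignments described above can be represented as a simple lookup structure. the sketch below is illustrative only - the dict and function names are of my own choosing, and it encodes just the groups and hazards actually listed in this section.

```python
# hazard groups after gill's classification as summarized above;
# HAZARD_GROUPS and group_of are illustrative names, not from any library.
HAZARD_GROUPS = {
    "geophysical": ["earthquake", "tsunami", "volcanic eruption"],
    "atmospheric": ["tropical cyclone", "tornado", "hail", "snow",
                    "lightning and thunderstorm", "long-term climatic change",
                    "short-term climatic change"],
    "biophysical": ["wildfire"],
    "space": ["geomagnetic storm", "extra impact events"],
}

def group_of(hazard):
    """return the group a given hazard belongs to, or None if unlisted."""
    for group, hazards in HAZARD_GROUPS.items():
        if hazard in hazards:
            return group
    return None
```

a lookup such as `group_of("wildfire")` returns `"biophysical"`, mirroring the one-hazard biophysical group in the text.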
the classification based on causes emphasizes the origin of hazards, that is, whether the hazards are caused by natural factors, human factors, or the interaction between natural and human factors. in contrast, the classification based on the disaster-formative environment lays stress on the environmental basis of hazards, especially the distinctions among different spheres of the earth, and relatively ignores the causes. actually, different kinds of hazards nowadays contain effects from both natural and human factors to different degrees, and this is one of the important reasons why the un changed the goal of global disaster reduction activities from natural disaster reduction to disaster risk reduction. (1) classification of hazards by peijun shi. in shi's paper (1991) published in the journal of nanjing university (natural sciences, special issue on natural hazards), hazards were divided into four levels: systems, groups, types, and kinds. this classification highlights not only the occurrence environment but also the causes of hazards (shi 1991). the first level of this classification is focused on the causes, the second level on the environments, the third level on the types, and the fourth level on the detailed hazards. the hazard system is composed of three systems: nature, human, and environment. the natural hazard system is then divided into four groups: atmosphere, lithosphere, hydrosphere, and biosphere; these hazards are mainly caused by natural environmental factors. the human hazard system includes three groups: technology, conflicts, and wars; these hazards are mainly caused by human environmental factors. the environmental hazard system is made up of five groups: global change, environmental pollution, desertification, vegetation degradation, and environmental diseases; these hazards are due to integrated natural and human factors.
(2) classification of hazards by zhang lansheng and liu enzhen. the atlas of natural hazards in china, edited by zhang and liu as a result of the cooperation between beijing normal university and the people's insurance company of china, was published by china science press (beijing) in 1992. based on the atlas, the paper "a research on regional distribution of major natural hazards in china" by wang et al. (1994) was published, and the classification system of major natural hazards in china consisting of types and subtypes (table 1.3) was built. the major natural hazards in china can be divided into 5 environments, 31 types, and 108 subtypes based on the differences in disaster-formative environments: atmosphere, including nine natural hazards - drought, typhoon, rainstorm, hailstorm, extreme low temperatures, frost, ice and snow, sandstorm, and dry-hot wind; hydrosphere, including five natural hazards - flood, waterlogging, storm surge, sea wave, and tsunami; lithosphere, including five natural hazards - earthquake, landslide, debris flow, subsidence, and wind-drift sand; biosphere, including six natural hazards - crop diseases, crop pests, forest diseases and pests, rodents, poisonous weeds, and red tide; geosphere, including six natural hazards - soil erosion, desertification, soil salinization, frozen soil, endemic disease, and environmental pollution. (1) intensity classification of single hazard. the intensity classification of a single hazard is based on the measurement specifications and standards of hazards. hazards of different origins and in different environments are measured by different indicators. for example, earthquakes are measured in magnitude, rainstorms in rainfall intensity, typhoons in maximum sustained wind, and floods in flood stage. those hazard measurement specifications and classification standards can be found on the websites of international or national departments of measurement standards.
generally speaking, meteorological departments set up the measurement specifications and classification standards for atmospheric hazards; hydrological or water resources and oceanic administrations, for hydrosphere hazards; and geological departments, for lithosphere hazards. a large number of observations show that there is a negative correlation between hazard intensity and frequency. in other words, the higher the intensity is, the lower the occurring frequency is and the longer the repeating period is. there is a power function relationship between the hazard intensity and the occurring frequency (chen and shi 2013). refer to textbooks or monographs on geoscience, life science, and resources and environmental science for the intensity classification of a single hazard. (2) intensity classification of multi-hazards. the regional and integrated disaster risk research requires scientists to understand the diversity of hazards at different spatial and temporal scales and to classify the intensities of multi-hazards. because the measurement indicators vary among different hazards and there is no universal indicator, the intensity classification method for a single hazard mentioned in the previous section will not be able to meet the needs of regional and comprehensive studies of the diversity of hazards. based on current data, it is very difficult to synthesize various hazard intensities measured in different indicators. one way to get around this problem is to divide each kind of hazard intensity into relative levels and then calculate the average of levels weighted by the area that the respective type of hazard covers during a certain period of time. this method can approximately reflect the regional overall hazard intensities in a certain space and a certain period of time. but there is one problem with this method; that is, different hazards with the same level of relative intensity might have different impacts on hazard-affected bodies.
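the negative intensity-frequency correlation noted above (a power function, per chen and shi 2013) can be sketched numerically. the coefficients below are invented for illustration; in practice they would be fitted from observed event records.

```python
# illustrative power-law relation between hazard intensity and occurrence
# frequency, f(i) = a * i**(-b); a and b are assumed, not fitted, values.
def annual_frequency(intensity, a=100.0, b=2.0):
    """expected number of events per year at or above the given intensity."""
    return a * intensity ** (-b)

def return_period(intensity, a=100.0, b=2.0):
    """average number of years between events of the given intensity
    (the 'repeating period' of the text), i.e. 1 / frequency."""
    return 1.0 / annual_frequency(intensity, a, b)
```

with these assumed coefficients, doubling the intensity quarters the frequency and quadruples the return period, matching the qualitative statement in the text that higher intensity means lower frequency and a longer repeating period.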
therefore, in order to eliminate this effect, another term is added - the weighted average of the loss rate of each hazard in a certain space and time period. referring to the quadrat method in vegetation investigation, we proposed to use multiple degree to describe the abundance of hazards in a region. another way to do this is similar to the multiple cropping index calculation in land-use research. based on wang et al.'s paper (1994), in this book we propose to use the multiple degree and covering index of hazards to express the clustering degree and influence of multiple hazards in a region. multiple degree (h_d): the clustering degree of hazards in a certain region. as a relative value changing with the compared region, it can be expressed as h_d = (n/N) x 100%, where h_d is the multiple degree of hazards in a region (%), n is the number of hazards in the region, and N is the number of hazards in a higher level of region (e.g., world, asia, china). the value of N is set to 108 (table 1.3) for the calculation of the county-level multiple degree of natural hazards in china. relative intensity (h_i): the relative destructive or damaging ability of hazards. relative intensity is a relative value and only a quantity of the hazard per se. it is not obviously positively correlated with the disaster loss or damage but is the basic reason (condition) for the regional loss. it can be calculated as h_i = sum_i (p_i x s_i), where h_i is the relative intensity (level) of hazards in a region, p_i is the relative intensity of hazard i, s_i is the area ratio of hazard i, ranging from 0.01 to 1.0 (i.e., 1-100%), and the sum runs over the hazard types present. covering index of hazards (h_c): the percentage of covering area of hazards in a region. it can be expressed as h_c = sum_i s_i, where s_i is the percentage of covering area of a type of hazard in the region and the sum again runs over the hazard types present. composite index (h): the sum of the three indexes mentioned above, each divided by its respective maximum value.
the formula is h = h_d/max(h_d) + h_i/max(h_i) + h_c/max(h_c), where h_d is the hazard multiple degree, h_i is the relative intensity, h_c is the covering index of a hazard in a region, and max() is the maximum value of the respective index. we will use the calculated results in wang et al.'s paper (1994) to demonstrate the practical application of the four indexes - multiple degree, relative intensity, covering index, and composite index of hazards. multiple degree of natural hazards: in fig. 1.1, the maximum value of the natural hazard multiple degree is about eight times as large as the minimum value in china. the value ranges from below 0.04 to above 0.30. this large variation shows that there is an obvious spatial clustering feature of natural hazards in china. generally speaking, the high values are centered in north china and decrease toward the northeast, northwest, and southeast. ninety percent of the districts and counties with h_d values greater than 20% are located in the middle latitude belt (25°-45°n). in southwest china, where the h_d values are relatively low, the h_d value increases in some topography-transition areas. thus, it can be seen that natural hazards cluster in natural environment transition zones, such as the middle latitude belt, sea-land transition zones, topography-transition areas, and the semiarid farming-pastoral ecotone. in the transition regions of several natural environments, there exist continuous areas with high h_d values. north china lies in exactly such a location and thus becomes the most concentrated area of natural hazards in china, as well as an important part of the pacific rim and mid-latitude multiple hazard belt. the regional clustering of natural hazard factors is therefore an important indicator of the degree of regional natural environmental change. covering index of natural hazards: figure 1.2 shows that there is a large variation in the covering index of natural hazards in china, ranging from less than 0.02 to more than 11.0 and indicating obvious regional differences.
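the four indexes defined above can be sketched in a few lines. the input data below are invented for illustration; each regional hazard is represented as a pair (relative intensity level p_i, area ratio s_i), following the definitions in the text.

```python
# sketch of the regional multi-hazard indexes described above;
# all sample inputs are illustrative, not from wang et al. (1994).
def multiple_degree(n_region, n_total=108):
    """h_d: share of all hazard kinds (108 for china) present in a region."""
    return n_region / n_total

def relative_intensity(hazards):
    """h_i: area-weighted sum of the relative intensity levels p_i."""
    return sum(p * s for p, s in hazards)

def covering_index(hazards):
    """h_c: summed covering-area ratios s_i of the hazards in a region."""
    return sum(s for _, s in hazards)

def composite_index(h_d, h_i, h_c, max_d, max_i, max_c):
    """h: sum of the three indexes, each divided by its regional maximum."""
    return h_d / max_d + h_i / max_i + h_c / max_c
```

for example, a county with 27 of the 108 hazard kinds has h_d = 0.25; by construction the composite index of any region lies between 0 and 3, with 3 attained only by a region that is simultaneously the maximum in all three indexes.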
on the whole, the trapezoid region with qiqihar, harbin, tianshui, and hangzhou as four vertexes has the highest h_c values (>8.0) in the country. in this high-value region, the northeast china plain and the north china plain have values usually greater than 9.0. the regions with h_c values greater than 10 display a lambda-shaped layout; that is, one line is qiqihar-tongliao-beijing-taiyuan-baoji-tianshui, and the other line stretches from southern hebei province to hangzhou along the grand canal. the low-value regions are centered in the northern tibetan plateau, from which the h_c value increases outwards. in regions south of the yangtze river, there are two high-value belts: the southeast coastal belt and the southwestern provinces including yunnan, guizhou, and sichuan. there is a positive correlation between the h_c value and the h_d value, as can be seen from figs. 1.1 and 1.2. relative intensity of natural hazards: figure 1.3 shows that h_i values are within the range of 0.8-24.0. regions with an h_i value greater than 19.0 are sparsely distributed. one high-intensity area (h_i > 16.0) stretches from the northeast to the southwest, and another one lies on its southeastern side, in hunan and jiangxi provinces. the relative intensities in the vast north-central tibetan plateau and northwest china are relatively low. the regional differentiation of relative intensity is tightly associated with the regional distribution of several major hazards. first of all, the seismically active belts of china, i.e., the pacific rim belt and the himalayan seismic belt, have correspondingly high intensities. seismic regions, once having had an earthquake with a magnitude greater than 8, usually become small high-intensity centers, such as those in west china and tangshan. secondly, the high-intensity regions overlap with the regions concentrated with cloudbursts.
for example, the coastal typhoon belt, the northern hebei mountains-taihang mountains-dabie mountains cloudburst belt, and the cloudburst belts in western sichuan and western hunan (shi 2011). thirdly, the frequently flooded areas also correspond to high relative intensity areas. these areas include the liaohe plain, north china plain, northern jiangsu plain, and the hubei and hunan plains. finally, areas with frequent debris flows and landslides, mainly in the "second step" east of the tibetan plateau, have high values of relative intensity. therefore, the overall relative intensity of natural hazards is controlled by several major natural hazards. however, these major natural hazards may also interact in the same region, which makes the regional differentiation of the relative intensity of china's natural hazards more complicated and also expands the high-intensity regions. in every high relative intensity area, there is at least one dominant natural hazard. relationships among multiple degree, relative intensity, and covering index: the interaction of the three indexes varies among regions. figure 1.4 shows the regional distribution of the composite index of natural hazards in china (shi 2011). north china has the highest values of all three indexes and thus is affected by frequent and catastrophic hazards. coastal areas have the second highest values of the three indexes and are subject to frequent and severe hazards. the third highest value regions include the farming-pastoral ecotone in the north and, in the southwest, western sichuan, yunnan, western guizhou, and the southeastern tibetan plateau, whereas northern tibet is a low-value region. the above outlines the basic regional differentiation of natural hazards in mainland china. there are differences in natural hazards between eastern and western china and between southern and northern china.
as for the east-west differentiation, the values of multiple degree, relative intensity, and covering index are higher in the east and lower in the west. the high values in the east are centered in north china, while the low values in the west are centered in northern tibet. as for the north-south differentiation, the vast area within 25° to 45°n in the east has values of all three indexes higher than the areas to its south and north. within this vast area, the highest values exist in the range of 30° to 40°n. however, the north-south differentiation in the west is not obvious, since there are incomplete data records, especially in the border area among tibet, qinghai, and xinjiang, i.e., the hoh xil region. due to inadequate data, this region has the lowest values of all three indexes nationwide. the regional differentiation of natural hazards is closely associated with the environments where hazards develop. the environmental evolution-sensitive zones usually have a high multiple degree, high relative intensity, and high covering index, suffering frequent or severe hazards. however, a small number of ecologically vulnerable areas have low values of multiple degree and intensity; one outstanding example is eastern guizhou. in areas with a harsh environment, such as vast west china, the multiple degree and relative intensity are not necessarily high. therefore, there is no direct relationship between environmental conditions and the impacts of natural hazards. disasters are direct or indirect results of hazards. disaster impacts include human losses, property losses, resources and environmental destruction, ecological damages, disruption of social order, and threats to the normal functioning of lifelines and production lines. the classification of disasters is closely associated with hazards and disaster-affected bodies. in chinese literature, "zaihai" is used to refer to both hazards and disasters. however, in western literature, hazard and disaster are two terms used separately.
most research in the west is focused on the classification of hazards, rarely on the classification of disasters, whereas in chinese literature the classification of disasters takes the place of the classifications of both hazards and disasters. this confusion of hazards with disasters, or of hazard science with disaster science (e.g., seismology substitutes for earthquake catastrophology, and rainstorm meteorology for rainstorm catastrophology), negatively affects the development of disaster risk science. with the development of human society, the types of disaster-affected bodies (exposure) have increased and the distribution of disaster-affected bodies has expanded. at the same time, humans' ability of disaster prevention has also improved. therefore, even the same hazard could induce varying degrees of disasters. when analyzing disasters, people stress the disaster-affected bodies, namely, humans' disaster prevention level, which is referred to as the vulnerability, resilience, and adaptation of human beings to hazards in the western literature. as mentioned above, in western research there is more of an emphasis on the classification of hazards than on that of disasters. in chinese official documents and research literature, the majority is the classification of disasters based on the causes and scales of disasters. the genetic classification of disasters according to causes in chinese literature is basically the same as that of hazards in western literature. (1) in the book introduction to catastrophology by ma (1998), according to the causes, disasters can be divided into natural disasters and man-made disasters. natural disasters can be further categorized into natural disasters and man-made natural disasters, while man-made disasters are composed of man-made disasters and natural man-made disasters.
when the management of disasters is taken into account at the same time, disasters can be divided into 5 classes and further into 30 types. in the book, the author also clearly pointed out the administration departments in charge of each type of disaster (table 1.4). this classification differs from others in the classification of the disaster-formative environments, with the inclusion of the ocean sphere instead of the hydrosphere. another difference is that the sources of flood and drought are attributed to the atmosphere in ma's classification (ma 1994). besides, this classification is basically in accordance with the classifications in the prc disaster reduction report (1993) and major natural disasters and disaster reduction in china (1994) (table 1.5). similar to the classification of introduction to catastrophology, in the book natural disasters by chen (2013), based on the differences between the internal, external, and gravitational energy of the earth, natural disasters were divided into seven major categories: earthquakes, tsunamis, volcanoes, meteorological disasters, floods, landslide and debris flow, and spatial disasters. this classification not only reflects the holistic view of disasters but also emphasizes the timescales of disasters and the environmental processes of the earth system. (2) classification of disasters in chinese state standards. for the dual purposes of comprehensive prevention, reduction, and relief of disasters and counting the losses and damages caused by natural disasters, experts organized by the state disaster reduction center of the ministry of civil affairs drew up the classification standards of natural disasters in china, in which the definition and code of each disaster are also given.
in this classification, natural disasters in china are divided into 5 groups and 40 specific types, including 13 meteorological and hydrological disasters, 9 seismic and geological disasters, 6 ocean disasters, 7 biological disasters, and 5 eco-environmental disasters (table 1.6). 010000 meteorological and hydrological disasters: natural disasters resulting from the abnormal or anomalous quantity, intensity, temporal and spatial distribution, and combination of meteorological and hydrological elements, causing adverse impacts on people's lives and properties, industrial and agricultural production, and the ecological environment. 010100 drought: a deficiency in precipitation and/or a shortage in river runoff or other kinds of water resources, causing adverse impacts on people's lives, industrial and agricultural production, and the ecological environment. 010200 flood: an overflow of water from rivers or other water bodies onto land which is usually dry, caused by excessive rainfall, melting snow and ice, levee breach, and storm surge, resulting in life losses, property losses, and disruption of social functioning. 010300 typhoon: a tropical cyclone that develops in a wide area over tropical or subtropical oceans, accompanied by heavy winds, rainstorm, storm surge, and huge waves, bringing damages to human lives and properties. 010400 rainstorm: rainstorm happens when the precipitation rate is more than 16 mm per hour, or more than 30 mm per 12 h or 50 mm per 24 h, causing damages to human lives and properties. hail: hail is a type of solid precipitation formed in thunderstorm clouds and controlled by strong convective weather, causing damages to human lives and properties and to crops and animals. 010700 thunder disaster: a thunder disaster is an electric discharge, directly or indirectly striking humans and animals, resulting in damages to human lives and properties. 010800 low-temperature disaster: intrusion of a strong cold front or constant low temperatures, causing freezing injury and damages to crops, animals,
human beings, and infrastructures, disrupting normal life and production. 010900 snow and ice disaster: due to snowfall, a wide area is covered with snow or affected by snowstorm, avalanche, and frozen roads and other infrastructure; it severely disturbs the lives of human beings and animals and causes damages to traffic, power, and communication systems. 011000 high-temperature disaster: high temperatures cause harm to the health of animals, plants, and human beings and damages to production and the environment. 011100 sandstorm: a strong wind blows loose sand and dirt that later mix with air from a dry surface, causing damages to human lives and properties; horizontal visibility is usually less than 1 km. 011200 fog: a visible mass composed of cloud water droplets and ice crystals suspended in the air or near the earth's surface, causing damages to human lives and properties and especially harm to traffic safety; horizontal visibility is usually less than 1 km. 019900 other m&h disasters: meteorological and hydrological disasters that are not mentioned above. 020000 seismic and geological disasters: natural disasters resulting from the sudden energy release or violent mass transport in the lithosphere of the earth or long-term accumulative geological changes, causing damages to human lives and properties and the ecological environment. 020100 earthquake: the strong shaking of the earth's surface and the accompanying ground rupture, resulting from the sudden release of energy in the earth's crust; it causes damages to human lives, buildings and infrastructures, social functioning, and the eco-environment. 020200 volcanic eruption: the sudden occurrence of a violent discharge of the interior materials of the earth, causing direct damages to human lives and properties; the erupted material is referred to as lava.
other impacts include pyroclastic flow, lava flow, volcanic gases and ashes, and eruption-induced debris flow, landslide, earthquake, and tsunami. 020300 collapse: the sudden fall of unstable materials occurring at the edge of a steep cliff, causing damages to human lives and properties. 020400 landslide: a slide of a large mass of dirt and rock down a slope under the action of gravity, causing damages to human lives and properties. 020500 debris flow: a special water flow, entraining objects such as fragmented rocks, muds, and branches in its path, that rapidly rushes down mountain valleys or slopes; it results from heavy rains, reservoir or pond breach, or a sudden melting of snow and ice, causing damages to human lives and properties. 020600 surface collapse: a surface depression due to abandoned mines or karst processes, causing damages to human lives and properties. 020700 ground subsidence: a large-area land subsidence due to excessive extraction of groundwater or gas and oil, causing damages to human lives and properties.
it occurs in unconsolidated or semi-consolidated soil areas. 020800 ground fracture: a linear fissure on the ground surface cracking through the rocks or soils, causing damages to human lives and properties. 029900 other geological disasters: geological disasters that are not mentioned above. 030000 ocean disasters: disasters resulting from the abnormal or drastic change of the ocean environment and occurring on the sea or coast. 030100 storm surge: a coastal flood caused by non-periodic abnormal rising of water over part of the sea that results from a tropical cyclone, extratropical cyclone, or cold front, causing damages to human lives and properties along the coast. 030200 sea wave: sea waves with wave heights of more than 4 meters, causing damages to ships, offshore oil drilling facilities, fishery, aquaculture, harbors and ports, seawalls, or other ocean and coastal engineering. 030300 sea ice: it blocks channels and causes damages to ships, offshore facilities, and coastal engineering. 030400 tsunami: sea waves with wavelengths up to hundreds of kilometers, induced by seafloor earthquakes, volcanic eruptions, underwater landslide, and subsidence, producing a sudden upward displacement of seawater and forming a "water wall" on the coast, devouring farmlands and villages, causing damages to human lives and properties. 030500 red tide: a sudden increase or high concentration of aquatic planktons and microorganisms changing the water body color to red or brown; it disrupts the normal aquatic ecology and causes damages to human lives and properties and the eco-environment.
see also the red tide disaster among the biological disasters. 039900 other ocean disasters: ocean disasters that are not mentioned above. 040000 biological disasters: natural disasters resulting from the activities of living beings, causing damages to crops, woods, cultivated animals, and related facilities. 040100 plant diseases and pests: an outbreak of pathogenic microorganisms and pests, harming farming and forestry. 040200 pandemic disease: an epidemic of infectious disease caused by microorganisms or parasites that rapidly spreads through human or animal populations, usually resulting in high morbidity or mortality; it causes great damages to animal husbandry and harms human health and life safety. 040300 rodents: an outbreak of rodent-related disasters, causing damages to plantation, animal husbandry, forestry, and properties. 040400 weeds: weeds cause severe damages to plantation, animal husbandry, forestry, and human health. 040500 red tide: a sudden increase or high concentration of aquatic planktons and microorganisms changing the water body color to red or brown; it disrupts the normal aquatic ecology and causes damages to human lives and properties and the eco-environment. 040600 forest/grassland fire: a fire in a forest or grassland caused by lightning, spontaneous combustion, or human beings under combustible conditions.
it causes damages to human lives and properties and the eco-environment. 049900 other biological disasters: biological disasters that are not mentioned above. 050000 eco-environmental disasters: natural disasters induced by damage to ecosystems or ecological imbalance, bringing negative impacts on the harmony between human beings and nature and on the living environment of human beings. from the comparison between the classification of the twelfth five-year special plan and that of the state standards, it can be seen that they share the same five big groups of natural disasters, but the latter has 15 more specific types than the former. besides, an emergency incident is defined as "a natural disaster, accidental disaster, public health incident or social safety incident, which takes place by accident, has caused or might cause serious social damage and needs the adoption of emergency response measures" in the emergency response law of the people's republic of china (2007). there is no universal standard for the classification of disaster scale. although there are different standards in different fields, the major factor considered is the scale of the hazardous event-induced disasters. generally, the classification indicators include the number of casualties, the amount of property loss, the disaster-affected area, and the hazard intensity. (1) indicator system of unisdr. in the sendai framework for disaster risk reduction 2015-2030, there are seven disaster reduction indicators, four of which are related to the measuring of disasters, namely disaster mortality, affected people, direct economic loss, and damage to critical infrastructure and basic services. disaster mortality: the number of people killed or missing from a hazardous event. the death toll refers to the number of people who died during or after the event, while the missing toll refers to the total number of missing people during the event.
besides counting the total number of dead and missing people, it is also important to calculate the number of killed and missing people per 100,000 people; thus, the effect of the population base can be eliminated in temporal and spatial comparisons of mortality. affected people: the total population affected directly or indirectly by disasters. directly affected people are those whose health was affected, such as injured and sick people; those evacuated, displaced, or relocated; and those who suffered disaster-induced direct damages to livelihoods, infrastructure, social culture, environment, and properties. at the same time, disaster statistics also need to include people whose houses were destroyed or collapsed and people who receive food aid. indirectly affected people are those who suffered from the additive effects of disasters, namely people affected by disaster-induced disruption or modification of the economy, critical facilities, basic services, business, work, society, and health. in practice, due to the difficulty in counting indirectly affected people, only directly affected people are included in disaster statistics. likewise, it is also worth calculating the number of affected people per 100,000 people. in addition to counting the killed and missing people and affected people, it is also common to specify their ages, genders, residence addresses, and disabilities. direct economic loss: disaster-induced loss of materials or properties, such as houses, factories, and infrastructures. usually after the occurrence of a disaster, it is advised to assess the property loss as soon as possible to facilitate cost estimation for disaster recovery and insurance claims processing. it is also recommended to calculate the percentage of direct economic loss accounting for the global or national gross domestic product (gdp).
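the normalizations recommended above - casualties per 100,000 population and direct loss as a share of gdp - are simple ratios; the sketch below uses invented figures and hypothetical function names for illustration.

```python
# helpers for the sendai-style normalized indicators described above;
# function names are illustrative, and any sample figures are invented.
def per_100k(count, population):
    """killed/missing or affected people expressed per 100,000 population,
    removing the effect of the population base in comparisons."""
    return count / population * 100_000

def loss_share_of_gdp(direct_loss, gdp):
    """direct economic loss as a percentage of (national or global) gdp,
    with loss and gdp expressed in the same currency units."""
    return direct_loss / gdp * 100.0
```

for example, 50 deaths in a population of one million is a rate of 5 per 100,000, and a direct loss of 2 (billion) against a gdp of 100 (billion) is 2% of gdp.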
direct economic loss can be further divided into agriculture loss and the losses of industrial and commercial facilities, houses, and critical infrastructure damaged or destroyed by disasters.
direct agriculture loss: crop and livestock losses, including the losses of poultry, fishery, and forestry.
industrial facilities damaged or destroyed: the loss of manufacturing and industrial facilities damaged or destroyed by hazardous events.
commercial facilities damaged or destroyed: the loss of commercial facilities (including storage, warehouses, cargo terminals, etc.) damaged or destroyed by hazardous events.
houses damaged: the loss of houses slightly affected by hazardous events and subject to no structural or architectural damage; after repair or cleanup, these damaged houses are still habitable.
houses destroyed: the loss of houses that collapsed or were burnt, washed away, or severely damaged and are no longer suitable for long-term habitation.
critical infrastructure damaged or destroyed: the loss of educational and health facilities and roads damaged or destroyed by hazardous events.
educational facilities damaged or destroyed: the number of educational facilities damaged or destroyed by hazardous events. educational facilities include children's playrooms, kindergartens, elementary schools, junior and senior high schools, vocational schools, colleges, universities, training centers, adult education schools, military schools, and prison schools.
health facilities damaged or destroyed: the number of health facilities damaged or destroyed by hazardous events. health facilities include health centers, clinics, local or regional hospitals, outpatient centers, and facilities that provide basic health services.
roads damaged or destroyed: the length of road networks, in kilometers, damaged or destroyed by hazardous events.
infrastructure damaged or destroyed: the loss of infrastructure other than critical infrastructure, such as railways, ports, and airports.
railways damaged or destroyed: the length of railway networks, in kilometers, damaged or destroyed by hazardous events.
ports damaged or destroyed: the number of ports damaged or destroyed by hazardous events.
airports damaged or destroyed: the number of airports damaged or destroyed by hazardous events.
basic services: the disruption of public services, or the time lost due to low-quality services, caused by hazardous events. basic services include health facilities, educational facilities, transportation systems (including train and bus terminals), the ict system, water supply, solid waste management, the power supply system, emergency response, etc. health facilities, educational facilities, and transportation systems are covered above in the critical infrastructure and infrastructure loss sections. the ict system refers to communications and the associated equipment network, including radio and tv stations, post offices, public information offices, the internet, and landline and mobile telephones. water supply includes drinking water supply and sewerage systems: the drinking water supply system includes the drainage system, water processing facilities, water transporting channels (channels and aqueducts) and canals, and water tanks or towers; the sewerage system includes public sanitary facilities, the sewage treatment system, and the collection and treatment of solid wastes from public sanitation. solid waste management refers to the collection and treatment of solid wastes that are not from public sanitation. the power/energy system includes power facilities, electrical substations, power control centers, and other power services. emergency response includes disaster management offices, fire departments, police stations, the military, and emergency control centers.
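the loss components enumerated in the last two sections feed into a single direct economic loss total, which the text recommends expressing as a share of gdp; a minimal sketch (the component breakdown and function names are illustrative):

```python
def total_direct_economic_loss(agriculture, industrial, commercial,
                               houses, infrastructure):
    """Sum the direct-loss components described above into one direct
    economic loss figure (all components in the same currency unit)."""
    return agriculture + industrial + commercial + houses + infrastructure

def loss_share_of_gdp(direct_loss, gdp):
    """Direct economic loss as a percentage of (national or global) GDP."""
    return direct_loss / gdp * 100

total = total_direct_economic_loss(2.0, 1.5, 0.5, 3.0, 4.0)
assert total == 11.0
assert round(loss_share_of_gdp(total, 1000.0), 2) == 1.1  # 1.1% of GDP
```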
(2) indicator system of the statistical system of damages and losses of large-scale natural disasters in china. the ministry of civil affairs and the national bureau of statistics of china jointly introduced the regulation statistical system of damages and losses of large-scale natural disasters in 2013, which brought the comprehensive assessment of natural disaster loss into the regulation system (shi and yuan 2014). this statistical system explains the purpose and meaning of the statistics of large-scale disasters and defines the statistical scope and major indicators. other contents described in this regulation include the submission procedure, forms of organization and data collection, 26 loss statistical report forms (1 of which is the loss summary table), 1 basic report, and 738 indicators. examples of these indicators are affected people, houses damaged and destroyed, household property loss, agriculture loss, industry loss, service loss, infrastructure loss, loss of public service systems, resources and environmental loss, and so on (table 1.8). figure 1.5 shows the changes in direct economic loss as a percentage of gdp and in disaster-caused human mortality in china from 1990 to 2012 (wenchuan earthquake data are not included). the overall decreasing trends of the two items demonstrate a good result of comprehensive disaster reduction. compared to the disaster indicators in the sendai framework for disaster risk reduction 2015-2030, which incorporates both human-made and natural disasters, the statistical system can only be applied to natural disasters. in contrast to the statistical system's emphasis on comprehensiveness, the sendai framework only highlights the key points.
another difference between these two is that the latter includes the resources and environmental damages caused by natural disasters, while the former stresses the effectiveness and quality losses of infrastructure and services caused by disasters. the report forms listed in table 1.8 include:
b01 report of rural residential houses damaged and destroyed
b02 report of urban residential houses damaged and destroyed
b03 report of non-residential houses damaged and destroyed
c01 report of household property loss
d01 report of agriculture loss
e01 report of industry loss
f01 report of service loss
g01 report of infrastructure (transportation) loss
g02 report of infrastructure (communications) loss
g03 report of infrastructure (energy) loss
g04 report of infrastructure (water conservancy) loss
g05 report of infrastructure (municipal service) loss
g06 report of infrastructure (living facilities in rural area) loss
g07 report of infrastructure (geological hazard prevention) loss
h01 report of public service (educational system) loss
h02 report of public service (technology system) loss
h03 report of public service (health system) loss
h04 report of public service (culture system) loss
h05 report of public service (media system) loss
h06 report of public service (sports system) loss
h07 report of public service (social security and service system) loss
h08 report of public service (social management system) loss
h09 report of public service (cultural heritage system) loss
report of resources and environmental loss
j01 report of basic indicators
(typical indicator columns in these forms include the amount of materials damaged and destroyed and the economic loss.)
therefore, there are similarities in the disaster indicators of these two regulations, and there are also differences due to social and cultural differences. even though some indicators share the same name in the two systems, their actual meanings might differ. in practice, people need to be cautious in choosing the right indicator(s).
at present, the classification of disaster grade mainly adopts the standardized division method of the risk factors of each disaster, while there is no standard division for multi-hazard classification. a qualitative approach is usually used to classify disaster intensity levels, that is, the use of continuous quantitative or semiquantitative indicators. for example, applied multi-risk mapping of natural hazards for impact assessment (armonia) categorizes a disaster into a high, medium, or low level according to its intensity. another example is the hazard score proposed by odeh engineers inc. (2001), which takes into account the level, frequency, and percentage of the affected area in the total research area; a higher score means the hazard has a higher intensity. the world natural disaster hotspots identified by the world bank are based on 2.5° × 2.5° grid cells for risk assessment. in each grid cell, the hazard indexes of all types of hazards that occurred are summed to give a score for the determination of hotspots; the hazard index of each type of hazard is established according to the corresponding data. the term very large-scale disaster emerged in the beginning of the twenty-first century. at the end of the twentieth century, a series of disasters happened worldwide and caused great impacts on human society and the economy. for example, hurricane andrew, which occurred in the usa in 1992, claimed 65 lives and caused 26 billion dollars in losses. definition of very large-scale disaster. the chinese word "juzai" appeared in 1986 in china for the first time and was used to mistranslate the word "catastrophic disaster" in the western literature. the appearance of "juzai" in chinese media and academia is closely related to the founding and explanation of catastrophic disaster insurance funds. according to the statistical data from cnki.com.cn, as of the end of december 2011, there were up to 1359 publications with "juzai" in their titles.
the number of papers increased annually, with a peak of 504 publications in 2008, more than half of which are related to catastrophic disaster insurance. due to the frequent occurrence of very large-scale disasters in recent years, new terms such as "juzai prevention," "juzai relief," and "juzai assessment" are becoming more and more widely used in scientific publications. in the chinese academic literature, the author of this book first introduced the word "dazai" for "large-scale disaster" and "juzai" for "very large-scale disaster" in the western literature, after attending the high level advisory board seminar on financial management of large-scale catastrophes held by the oecd in paris in july 2006 (shi et al. 2006, 2007). although a lot of work has been done on the definition and classification of very large-scale disasters, there are no well-recognized definition and classification standards of very large-scale disasters in the fields of academia or finance; different scholars have their own angles. in the western literature, the following definitions have had great influence. in the book large-scale disasters: lessons learned, published by the oecd in 2004, the terms large-scale disaster (or megadisaster) and very large-scale disaster were used, but no specific quantitative criterion was provided. in the oecd's opinion, very large-scale disasters can cause a great number of casualties, property losses, and widespread infrastructure damage; the impacts are so great that governments of the affected area and neighboring regions are unable to cope, and public panic may even occur. the oecd also emphasizes the importance of cooperation and assistance among member countries in response to very large-scale disasters (oecd 2004).
in the book large-scale disaster: prediction, control, reduction by mohamed gad-el-hak (2008), disasters are divided into large-scale and very large-scale disasters based upon the disaster scope and death toll (fig. 1.6). a very large-scale disaster is defined as a disaster with a death toll of more than 10,000 or an affected area over 1000 km². experts in insurance and financial management and development usually define a catastrophic disaster based on the scale of the insured property losses. the insurance services office (iso) of the usa defines a catastrophic disaster as an event that causes insured property losses of 25 million dollars or more and affects a significant number of property/casualty policyholders and insurers. swiss re uses losses of more than 38.7 million us dollars as a standard. from these amounts of property losses, it can be seen that the scale of a catastrophic disaster cannot reach that of a large-scale disaster or megadisaster, let alone a very large-scale disaster. this also shows that the term "juzai" mentioned in the chinese literature in the late 1980s has the scale of a catastrophic disaster and was paid attention to only by experts in insurance and financial management and development. therefore, before the use of large-scale disaster (or megadisaster) and very large-scale disaster in the western literature in the early twenty-first century, the term "juzai" in the chinese literature referred only to a catastrophic disaster. from the angle of geoscientists, very large-scale disasters are usually defined according to the hazard intensity, casualties, property losses, and affected scope.
in ma's opinion, a very large-scale disaster must meet two of the following criteria: over 10,000 deaths; direct economic losses of more than 10 billion chinese yuan (at 1990 prices); economic losses of more than the average annual fiscal revenue of the previous three years of a chinese province; a drought disaster rate of more than 70% or a flood disaster rate of more than 70%; crop losses of more than 36% of the average annual crop production of the previous three years of a chinese province; more than 300,000 houses collapsed; or a livestock death toll of more than 1 million (ma et al. 1994). shi et al. define a very large-scale disaster as a great disaster caused by a 100-year hazard (e.g., a 7.0-magnitude or stronger earthquake) and resulting in a great number of casualties and large, widespread property losses (shi et al. 2010). also in shi's definition, the impacts of a very large-scale disaster are so great that the affected area is unable to respond by itself and has to resort to outside help (table 1.9). according to the classification standard in table 1.9, the very large-scale disasters caused by natural hazards worldwide between 1990 and 2015 are listed in table 1.10. from table 1.10, we can see that one of the characteristics of very large-scale disasters is their high hazard intensity. a very large-scale disaster can be a disaster chain composed of a very large hazard and its induced secondary disasters. it can also be a superposition of multiple types of disasters triggered by multiple hazards in a specific region during a specific period of time.
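the "at least two of the criteria" rule used in these classifications can be expressed as a simple scoring check. the sketch below is illustrative, not an official implementation: hazard intensity is passed in as an already-evaluated boolean (its units differ by hazard type), and the numeric thresholds are the ones for whichever level is being tested.

```python
def meets_level(hazard_intensity_met, deaths, econ_loss, affected_area,
                death_min, loss_min, area_min):
    """Return True if at least two of the four classification criteria
    (hazard intensity, death toll, direct economic loss, affected area)
    reach the given level's thresholds."""
    met = [
        hazard_intensity_met,
        deaths >= death_min,
        econ_loss >= loss_min,
        affected_area >= area_min,
    ]
    return sum(met) >= 2

# illustrative large-scale thresholds: 1000 deaths, 10.0 loss units, 10,000 area units
assert meets_level(False, 1500, 12.0, 5000, 1000, 10.0, 10_000)      # deaths + loss met
assert not meets_level(False, 1500, 3.0, 5000, 1000, 10.0, 10_000)   # only deaths met
```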
besides, very large-scale disasters usually cause a great number of deaths and injuries, a huge amount of property losses, severe impacts on the economy, society, and the natural environment, and a large disaster area. table 1.9 classifies the other disaster levels by hazard intensity, death toll, direct economic loss, and affected area:
large-scale: 6.5-7.0 (earthquake) or 1/50a-1/100a; death toll 1000-9999; direct economic loss 10.0-99.9; affected area 10,000-99,999
medium-scale: 6.0-6.5 (earthquake) or 1/10a-1/50a; death toll 100-999; direct economic loss 1.0-9.9; affected area 1000-9999
small-scale: below 6.0 (earthquake) or below 1/10a; death toll up to 99; direct economic loss up to 1.0; affected area up to 1000
notes: (1) for each disaster level, at least two of the four criteria must be met. (2) the death toll includes both people dead and people missing for over 1 month. (3) direct economic loss is the value of the actual disaster-caused property loss in that year. (4) the affected area is the area in which there are casualties, property losses, or damages to ecosystems.
the emergency aid and reconstruction during or after very large-scale disasters usually need help from a larger region or the whole country; in some cases, even international aid is indispensable. all the very large-scale disasters mentioned so far are caused by sudden-onset hazards. the indicators and classification standards for disasters caused by the accumulation of gradual hazards should be different (zhang et al. 2013); however, there has been little discussion of the classification standards of gradually generated very large-scale disasters. drought is one of the major natural disasters in both china and the world. since 1949, a number of severe droughts causing great numbers of casualties and huge property losses have happened in china; for example, tens of thousands of people were killed in the three-year great drought from 1959 to 1961. based on the case of drought, we discuss the classification standard of gradual very large-scale disasters below. we cannot use hazard intensity to measure or classify very large-scale droughts, because the forming process of a drought is very complicated.
a drought hazard could be meteorological or hydrological; it can also be a soil drought or a socioeconomic drought. the indicators and measurement criteria vary among the different types of droughts, and the data and study methods also differ. what is more, there is no linear relationship between drought intensity and drought losses, nor is there a definite relationship between drought hazards and the formation of drought disasters. the impacts of a very large-scale drought disaster can be represented by crop losses and the population in need of aid. drought can result in a bad harvest or total crop failure and in water shortages for both human beings and livestock; industrial production, urban water supply, and the ecological environment can also be affected to varying degrees if a drought lasts for a long time. in the statistical system of damages and losses of natural disasters by the prc ministry of civil affairs (2013), the following items are included in the statistics of droughts: affected population, population affected by water shortage, number of livestock affected by water shortage, affected crop area, crop disaster area, total crop failure area, affected grassland area, and population in need of food and water aid. the inclusion of the population affected by water shortage and the population in need of aid in this statistical system demonstrates the "people-oriented" disaster relief philosophy. in the state-level contingency plan for natural disaster relief by the general office of the state council of the prc, it is stated that when the number of people in need of food and water aid from governments accounts for a certain percentage of the agricultural population or reaches a designated magnitude, the state will initiate an emergency response of the corresponding level (table 1.11).
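the contingency-plan trigger just described is a ratio-or-magnitude test; a minimal sketch, with purely illustrative thresholds (the plan's actual percentages and magnitudes are not given here):

```python
def aid_response_needed(population_in_need, agricultural_population,
                        ratio_threshold=0.10, absolute_threshold=1_000_000):
    """State-level emergency response is initiated when people needing food
    and water aid reach a percentage of the agricultural population OR a
    designated magnitude (both thresholds here are illustrative placeholders,
    not the official values)."""
    ratio = population_in_need / agricultural_population
    return ratio >= ratio_threshold or population_in_need >= absolute_threshold

assert aid_response_needed(150_000, 1_000_000)        # 15% of agricultural population
assert aid_response_needed(1_200_000, 50_000_000)     # exceeds the absolute magnitude
assert not aid_response_needed(10_000, 1_000_000)     # 1%, below both thresholds
```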
based on the severe droughts in china in table 1.11, five criteria are used to define a very large-scale drought disaster: crop disaster ratio, crop disaster area, disaster population, population in need of aid ratio, and direct economic loss (table 1.12). (2) indicator explanation. the affected crop area is the crop area with a reduction of more than 10% of production. the crop disaster area is the crop area with a reduction of more than 30% of production. the crop disaster ratio is the ratio of the crop disaster area to the affected crop area. the population affected is the number of people who suffer losses caused by natural disasters (including non-permanent residents). the disaster population is the population affected by the crop disaster; in this table, it is estimated from the crop disaster area and the cultivated area per capita in the disaster province. the population in need of aid is the number of people who are directly affected by natural disasters and are in need of food and water supply or medical treatment from the government (including non-permanent residents). the population in need of aid ratio is the ratio of the population in need of aid to the population affected. direct economic loss is the depreciated value of the disaster-bearing bodies or the value of the disaster-bearing bodies forfeited; in this table, it is the value of the actual property damages in the year when the disaster happened. (3) the disaster population in events 1-11 was estimated from the crop disaster area and the cultivated area per capita in the disaster province. the top global risks of highest concern identified by the davos world economic forum are fiscal crises in key economies, structurally high unemployment/underemployment, water crises, severe income disparity, failure of climate change mitigation and adaptation, greater incidence of extreme weather events (e.g., floods, storms, fires), global governance failure, food crises, failure of a major financial mechanism/institution, and profound political and social instability.
it can be seen from the above that, besides the continuing focus on traditional risks, we need to accelerate the study of responses to a series of non-traditional risks. the five categories of risks were not changed in the global risks report 2016, but the number of specific risks was decreased from 31 to 29 (global risk 2016). in this report, a global risk is an uncertain event or condition that, if it occurs, can cause significant negative impact for several countries or industries within the next 10 years. a global trend is a long-term pattern that is currently taking place and that could contribute to amplifying global risks and/or altering the relationships between them (table 1.14). the global risks landscape 2016 was proposed in the global risks report 2016. from the landscape, it can be seen that the risks with the highest impact and likelihood are failure of climate change mitigation and adaptation, water crises, large-scale involuntary migration, fiscal crises, interstate conflict, profound social instability, cyber attacks, and unemployment or underemployment (global risk 2016). in the global risks interconnections map 2016, the most strongly connected risks are failure of climate change mitigation and adaptation, profound social instability, large-scale involuntary migration, and unemployment or underemployment (global risk 2016). the davos world economic forum reports cover a wide range of global risks in the fields of economy, politics, culture, society, and ecology, corresponding, respectively, to the economic, political, cultural, social, and ecological development proposed by the chinese government. thus, it can be seen that the risk classification of the world economic forum emphasizes integration with practice. the risk taxonomy of the irgc is from the perspective of hazards, similar to the disaster classification in sect. 1.2.
this classification stresses the causes of risks and is thus less integrated with practice; however, it pays attention to emerging risks and slow-developing catastrophic risks, including the governance of very large-scale disaster risks, and it provides a framework for systematic risk assessment and governance. in china, the classification of risks is tightly associated with the security and disaster classifications. for example, the overall national security concept proposed by the chinese government is in a one-to-one correspondence with the global risks in the world economic forum report: political security, homeland security, and military security correspond to geopolitical risks; economic and resource security to economic risks; cultural and societal security to societal risks; technology, information, and nuclear security to technological risks; and ecological security to environmental risks (xi 2016). another example is the four public securities proposed by the chinese government, which correspond to five of the six risk categories of the irgc: natural disasters correspond to natural forces, accidental disasters to physical risks, public health accidents to chemical and biological risks, and social security incidents to social-communicative hazards. complex hazards are usually related to the four public securities proposed by the chinese government, and also to integrated disasters. the classification system of risks is built upon the hazard and disaster classifications in china. for example, if hazards are divided into natural, man-made, and environmental ones, risks can be classified into the corresponding three types. in the same way, risks can also be divided into the four categories of natural, accidental, public health, and social security risks based on the four-type classification of hazards.
the natural disaster risk level is usually expressed as an exceedance probability or return period, in the same way as the intensity level of natural hazards. for example, meteorological, hydrological, and ocean disaster risks can be divided into the 10-year level (small-scale disaster), 20-year level (medium-scale disaster), 50-year level (large-scale disaster), and 100-year level (very large-scale disaster). the earthquake disaster risk level is usually expressed in earthquake magnitude: for example, a magnitude 7.0 or above earthquake poses a very large-scale disaster risk, 6.5-7.0 a large-scale risk, 6.0-6.5 a medium-scale risk, and 6.0 or below a small-scale disaster risk. the natural disaster risk level does not depend only on the natural hazard intensity but also on the vulnerability and exposure of the hazard-bearing bodies. in practice, the classification of natural disaster risk levels is even more complicated and thus usually resorts to relative levels, such as the first-level, second-level, third-level, fourth-level, and fifth-level risk; the larger the number is, the higher the risk level is. in the atlas of natural disaster risk of china by peijun shi (chinese-english bilingual version, shi 2011) and the world atlas of natural disaster risk by peijun shi and roger kasperson (shi et al. 2015), the temporal and spatial patterns of natural disaster risks of china and the world are displayed using indicators including risks, risk grades, and risk levels (qin et al. 2015; shi 2011, 2015). it is more difficult to classify man-made and environmental risk levels using quantitative criteria. a common way is to use relative levels, or to use the trends and changes of man-made and environmental risks to describe their levels. the global risk trends 2015 in the davos world economic forum risk report is an example of this way of reflecting global risk levels.
in detail, the global risks with increasing levels in 2015 are aging population, changing landscape of international governance, climate change, environmental degradation, growing middle class in emerging economies, increasing national sentiment, increasing polarization of societies, rise of chronic diseases, rise of cyber dependency, rising geographic mobility, rising income and wealth disparity, shifts in power, and urbanization (wef 2015). the top three most likely global risks in 2016 in each region are reported in the global risks report 2016 of the wef (wef 2015). in north america, the top three are cyber attacks, extreme weather events, and data fraud or theft. in latin america and the caribbean, they are failure of national governance, profound social instability, and unemployment/underemployment. in europe, they are large-scale involuntary migration, unemployment/underemployment, and fiscal crises. in the middle east and north africa, they are water crises, unemployment/underemployment, failure of national governance, and profound social instability. in sub-saharan africa, they are failure of national governance, unemployment/underemployment, and failure of critical infrastructure. in central asia (including russia), they are energy price shock, interstate conflict, and failure of national governance. in east asia and the pacific, they are natural catastrophes, extreme weather events, and failure of national governance. in south asia, the top three are water crises, unemployment/underemployment, and extreme weather events. the exceedance probability mentioned previously, a concept usually used in the study of natural disaster risks, refers to the likelihood that the intensity or motion parameters of an earthquake, the flood level, or the maximum wind speed at the center of a typhoon exceeds a designated value or values in a specific location during a certain period of time.
in other words, it is the probability of the required value exceeding the given value, and it can be mathematically expressed as

P_{exceed} = P(u > u_{limit})  (1.5)

where P_{exceed} is the likelihood of the required value (u) of a data series exceeding the limit value (u_{limit}). for example, suppose a set of data x (x_1, x_2, …, x_n) has n raw data points arranged from the lowest to the highest. the exceedance probability of data point x_i is

p = \frac{n - i + 1}{n} \times 100\%  (1.6)

the following takes the earthquake as an example for the calculation of exceedance probability. within t years, the probability of an earthquake occurring n times in a region is

P(n) = f(n)  (1.7)

in the same way, within t years, the likelihood of no earthquake happening in this region is

P(0) = f(0)  (1.8)

then, the likelihood of at least one earthquake within t years, or the exceedance probability, is

F(t) = 1 - P(0)  (1.9)

and the probability density is

f(t) = \frac{dF(t)}{dt}  (1.10)

the poisson distribution is widely used in earthquake studies. within t years, the probability P(n) of n earthquakes occurring in a region can be expressed in poisson form as

P(n) = \frac{e^{-vt}(vt)^n}{n!}  (1.11)

then, within t years, the likelihood of no earthquake happening in this region is

P(0) = \frac{e^{-vt}(vt)^0}{0!} = e^{-vt}  (1.12)

so the likelihood of at least one earthquake happening, or the exceedance probability within t years, is

F(t) = 1 - P(0) = 1 - e^{-vt}  (1.13)

and the corresponding probability density is

f(t) = \frac{dF(t)}{dt} = v e^{-vt}  (1.14)

the variable v above is the annually averaged occurrence probability of earthquakes in a region, which is the inverse of the return period T_0:

v = \frac{1}{T_0}  (1.15)

from here, we can see that the relationship between the return period T_0 and the exceedance probability F(t) can be expressed as

T_0 = \frac{1}{v} = -\frac{t}{\ln[1 - F(t)]}  (1.16)

based on the equation above, we can calculate the return periods corresponding to different exceedance probabilities over a period of time. for example, an exceedance probability of 63% over 50 years is equivalent to a 50-year disaster, 10% corresponds to a 474-year disaster, and 2-3% to a 1600-2500-year disaster.
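the poisson-based relationships above can be checked numerically: with F(t) = 1 - e^(-t/T0), the return period corresponding to an exceedance probability F over t years is T0 = -t / ln(1 - F). a short sketch reproducing the worked examples:

```python
import math

def exceedance_probability(t_years, return_period):
    """F(t) = 1 - exp(-t/T0): probability of at least one event in t years."""
    return 1 - math.exp(-t_years / return_period)

def return_period(t_years, exceed_prob):
    """Invert F(t) = 1 - exp(-t/T0) to get T0 = -t / ln(1 - F)."""
    return -t_years / math.log(1 - exceed_prob)

# 63% in 50 years is essentially the 50-year event: 1 - e^-1 ~ 0.632
assert abs(exceedance_probability(50, 50) - 0.632) < 0.001
# 10% in 50 years corresponds to roughly a 474-year event
assert abs(return_period(50, 0.10) - 474.6) < 1.0
# 2-3% in 50 years corresponds to roughly a 1600-2500-year event
assert 1600 < return_period(50, 0.03) < 2500
assert 1600 < return_period(50, 0.02) < 2500
```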
in summary, hazards are negative factors for human beings, and the temporal and spatial patterns of hazards can be studied by comparison with historical observed data. disasters are the impacts of hazards on human beings and can be measured in terms of losses and damages. risks are future hazard-induced disasters in a specific location. in short, disaster risk science is a discipline studying the mechanisms, processes, and dynamics of the interactions among hazards, disasters, and risks, as well as disaster risk prevention and reduction. the relationships among hazards, disasters, and risks are shown in fig. 1.7.
(table 1.14, continued)
environmental: greater incidence of man-made environmental catastrophes (e.g., oil spills); failure of climate change mitigation and adaptation
geopolitical: 1. global governance failure; 2. political collapse of a nation of geopolitical importance; 3. increasing corruption; 4. major escalation in organized crime and illicit trade; 5. large-scale terrorist attacks; 6. deployment of weapons of mass destruction; 7. violent interstate conflict with regional consequences; 8. escalation of economic and resource nationalization
irgc hazard examples (fragment): sabotage; human violence (criminal activities); humiliation, mobbing; consumer products (chemical, physical, etc.); technologies (physical, chemical); large constructions, like buildings, dams, highways, and bridges; critical infrastructures, in terms of physical, economic, social-organizational and communicative
other fragments: severe acute respiratory syndrome (sars); china international decadal commission for disaster reduction
references
china's extreme weather events and disaster risk management and adaptation assessment report
gb/t28921. 2012.
natural disaster classification and code
notice on the emergency plan for the issuance of natural disaster relief in china. beijing: ministry of civil affairs, people's republic of china
reviewing and visualizing the interactions of natural hazards
regions of risk: a geographical introduction to disasters
risk governance: towards an integrative approach. white paper no. 1, author o. renn with an annex by
a research on regional distribution of major natural hazards in china
remarks on national security
the atlas of natural hazards in china
large-scale disaster prediction, control, and mitigation
notice on statistical regulations of natural disasters. ministry of civil affairs, people's republic of china
notice of "twelfth five-year" special plan on disaster prevention and reduction
large-scale disasters: lessons learned
theory and practices on disaster science
world atlas of natural disaster risk
on integrated disaster risk governance: seeking for adaptive strategies for global change
on the classification standards of catastrophe and catastrophe insurance: the perspective from wenchuan earthquake and southern freezing rain and snowstorm disaster, china [c]. international integrated disaster prevention and mitigation and sustainable development forum
integrated governance of natural disaster risk
integrated assessment of large scale natural disasters in china
theory on disaster science and disaster dynamics
china atlas of natural disaster risk
integrated risk governance: ihdp comprehensive risk prevention science program and comprehensive catastrophe risk prevention research
integrated risk governance: ihdp integrated scientific plan and integrated catastrophe risk governance research
death toll exceeded 70000 in europe during the summer of
the law on response to emergencies. bulletin of the standing committee of the national people's congress of the people's republic of china
terminology on disaster risk reduction
disaster risk reduction for sustainable development.
guidelines for mainstreaming disaster risk assessment in development. a publication of the united nations' international strategy for disaster reduction
integrated research on disaster risk. peril classification and hazard glossary
united nations office for disaster risk reduction
study on definition and division criteria of a large-scale disaster: analysis of typical disasters in the world in recent years
global risks report
global risks
china's major natural disasters and mitigation measures (overview
introduction to natural disasters. hunan people's press (in chinese)

risk is the probability of disaster loss in a future period of time in a region; essentially, risk is the probability of occurrence of a future hazardous event and its impacts (loss and/or damage). unisdr (2004) defines risk as the probability of harmful consequences resulting from interactions between natural or human-induced hazards and vulnerable conditions. two aspects need special attention: the influence of social factors on risk, and the estimation of hazard intensity and distribution. disaster risk usually refers to natural disaster or environmental risk associated with natural factors. the wide attention that disaster risk receives is related to disaster (especially catastrophic disaster) insurance and to the risk governance of emerging risks and very large-scale disasters. the international risk governance council, founded in 2003 in geneva, switzerland, paid close attention to the governance of emerging risks and slowly developing catastrophic risks, and also established the transition from risk management to risk governance. in 2006, the chinese national committee for the international human dimensions programme on global environmental change (cnc-ihdp) proposed to ihdp that it undertake integrated risk governance (irg) research against the background of global environmental change.
this international scientific program proposal was approved by the scientific committee of ihdp and launched in 2010 (shi et al.

key: cord-103784-f8ac21m2 title: risk factors for the development of hepatocellular carcinoma (hcc) in chronic hepatitis b virus (hbv) infection: a systematic review and meta-analysis authors: campbell, c.; wang, t.; mcnaughton, a. l.; barnes, e.; matthews, p. c. date: 2020-08-24 journal: nan doi: 10.1101/2020.08.21.20179234 sha: doc_id: 103784 cord_uid: f8ac21m2

background: hepatocellular carcinoma (hcc) is one of the leading contributors to cancer mortality worldwide and is the largest cause of death in individuals with chronic hepatitis b virus (hbv) infection. it is not certain how the presence of other metabolic factors and comorbidities influences hcc risk in hbv. therefore we performed a systematic review and meta-analysis to seek evidence for significant associations. methods: medline, embase and web of science databases were searched from 1st january 2000 to 24th june 2020 for english studies investigating associations of metabolic factors and comorbidities with hcc risk in individuals with chronic hbv infection. we extracted data for meta-analysis and report pooled effect estimates from a fixed-effects model. pooled estimates from a random-effects model were also generated if significant heterogeneity was present. results: we identified 40 observational studies reporting on associations of diabetes mellitus, hypertension, dyslipidaemia and obesity with hcc risk. meta-analysis was possible only for diabetes mellitus, due to the limited number of studies. diabetes mellitus was associated with a >25% increase in hazards of hcc (fixed effects hazards ratio [hr] 1.26, 95% ci 1.20-1.32; random effects hr 1.36, 95% ci 1.23-1.49). this association was attenuated towards the null in sensitivity analysis restricted to studies adjusted for metformin use.
conclusions: in adults with chronic hbv infection, diabetes mellitus is a significant risk factor for hcc, but further investigation of how antidiabetic drug use and glycaemic control influence this association is needed. enhanced screening of individuals with hbv and diabetes may be warranted. hepatitis b virus (hbv) is a hepatotropic virus responsible for substantial morbidity and mortality worldwide. infection can be acute or chronic, with most of the hbv disease burden attributable to chronic disease. the world health organisation (who) estimated a chronic hbv (chb) global prevalence of 257 million for 2015, with 887,000 hbv-attributable deaths reported in the same year (1), making hbv the second highest viral cause of daily deaths (the first being the agent of the global covid-19 pandemic, sars-cov-2) (2,3), a burden which has increased in recent decades (4). most chb deaths are due to primary liver cancer and cirrhosis; these conditions were responsible for over 40% of all viral hepatitis-attributable deaths. a gbd study on the global hcc burden reported a 42% increase in incident cases of hcc attributable to chronic infection between 1990 and 2015 (5), among which chb infection was the largest contributor, responsible for more than 30% of incident cases in 2015 (5). multiple risk factors for hcc in chb-infected individuals have been established, including sex, age, cirrhosis, and co-infection with human immunodeficiency virus (hiv) or other hepatitis viruses (including hepatitis c and d). previous studies have investigated associations of comorbidities, such as diabetes mellitus (dm) (8-11) and hypertension (12,13), with risk of hcc in the general population, and the european association for the study of the liver (easl) recognises dm as a risk factor for hcc in chb (7).
as the global prevalence of comorbidities such as dm (14), renal disease (15), hypertension (16) and coronary heart disease (chd) (17) continues to rise, these conditions are increasingly relevant to the development of hcc. various risk scores have been developed to predict hcc risk; for example, the page-b risk score was developed to predict hcc risk in caucasian patients on antiviral treatment (18). therefore, we undertook a systematic review, aiming to summarise and critically appraise studies investigating associations of relevant comorbidities and metabolic factors with risk of hcc in chb-infected individuals. in june 2020 we systematically searched three databases (web of science, embase and medline) in accordance with prisma guidelines (28); search terms are listed in table s1. we searched all databases from 1st january 2000 until 24th june 2020, without applying any study-design restrictions to search terms or results, but including only full-text human studies published in english. we combined and deduplicated search results from the three databases prior to screening for eligibility. we excluded articles not investigating associations of comorbidities with risk of hcc and/or not restricted to chb-infected participants. we also searched reference lists of relevant systematic reviews/meta-analyses and of studies identified for inclusion, to identify additional studies. search terms were constructed and agreed on by three authors (pm, tw and cc), and articles were screened and selected by one author (cc). for each included study, we extracted the exposures investigated, number of participants, number of hcc cases, sex, age at baseline, risk ratio and covariates adjusted for. we carried out meta-analysis in r (version 3.5.1) using the "meta" package (version 4.12-0) (29), including only hazard ratios (hrs) minimally adjusted for age and sex.
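the combining and deduplication step above is described only briefly; as a rough sketch (the field names 'title'/'doi' and the matching rule are assumptions for illustration, not the authors' actual procedure), records exported from medline, embase and web of science could be merged and deduplicated as follows:

```python
def normalise_title(title: str) -> str:
    """Crude matching key: lowercase, keep alphanumerics, collapse whitespace."""
    kept = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in title.lower())
    return " ".join(kept.split())

def deduplicate(records):
    """Keep the first record seen for each DOI (preferred) or normalised title.

    records: iterable of dicts with a 'title' key and an optional 'doi' key
    (hypothetical field names, for illustration only).
    """
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

merged = deduplicate([
    {"title": "Diabetes and HCC risk in CHB"},
    {"title": "Diabetes and HCC risk, in CHB."},                  # same article, punctuation differs
    {"title": "Metformin use and cancer", "doi": "10.1000/x2"},   # made-up DOI
    {"title": "Metformin use and cancer.", "doi": "10.1000/x2"},  # duplicate by DOI
])
print(len(merged))
```

matching on a normalised title is a crude heuristic; in practice reference managers also match on doi, pmid and fuzzy title/author/year comparisons.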
we calculated pooled summary effect estimates using inverse-variance weighting of hrs on the natural logarithmic scale, and quantified between-study heterogeneity using the i2 statistic; significance of heterogeneity was investigated using cochran's q test (p threshold = 0.05). where i2 was >0 and heterogeneity was significant, we present both fixed- and random-effects summary estimates. we undertook multiple sensitivity analyses in which analyses were restricted to studies adjusting for various additional confounders and for dm treatment, and stratified by dm type, in order to investigate the robustness of observed associations. for diabetes, we considered diagnoses of type 1 and type 2 diabetes, as well as unspecified diabetes mellitus, for pooling the effect, followed by further stratification by diabetes subtype if enough studies were eligible. hypertension (ht) was defined by either a diagnosis of ht recorded as part of the medical history or current health assessment, or a measurement with mean arterial pressure (map) above a specified threshold. obesity was based on bmi values, using the cut-offs in the included studies, where 25, 27 and 30 kg/m2 were the common threshold values. cvd was defined broadly as an umbrella term including any of the following disease subtypes: ischaemic heart disease (ihd)/coronary heart disease (chd) and cerebrovascular disease. dyslipidaemia was defined according to serum lipid concentrations above a certain threshold (thresholds may vary depending on healthcare setting). study quality was scored, including assessment of exposure/outcome ascertainment; studies with scores of <5, 5-7 and >7 points were considered to be of low, sufficient and high quality, respectively. in total our search identified 1,814 articles (899 from medline, 407 from embase and 508 from web of science) (figure 1). after deduplication, we screened 1,136 individual articles by title/abstract, from which 140 full texts were identified for full-text assessment.
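the pooling described above was carried out with the r "meta" package; purely as an illustrative sketch of the same computations (assuming a dersimonian-laird estimator for the between-study variance, with hypothetical input values), inverse-variance pooling of log-hrs with cochran's q and i2 can be written as:

```python
import math

Z = 1.959963985  # ~97.5th percentile of the standard normal (95% CI multiplier)

def pool_hazard_ratios(studies):
    """Inverse-variance meta-analysis of hazard ratios.

    studies: list of (hr, ci_lower, ci_upper) tuples, each with a 95% CI.
    Returns fixed- and random-effects pooled HRs (with 95% CIs), Q, I2, tau2.
    """
    # work on the natural-log scale, recovering each SE from the 95% CI
    y = [math.log(hr) for hr, lo, hi in studies]
    se = [(math.log(hi) - math.log(lo)) / (2 * Z) for hr, lo, hi in studies]
    w = [1.0 / s**2 for s in se]

    # fixed-effects (inverse-variance weighted) pooled estimate
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    se_fixed = math.sqrt(1.0 / sw)

    # Cochran's Q and the I2 heterogeneity statistic
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance, then random-effects pooling
    tau2 = max(0.0, (q - df) / (sw - sum(wi**2 for wi in w) / sw))
    w_re = [1.0 / (s**2 + tau2) for s in se]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))

    def summary(est, s):
        # back-transform the pooled log-HR and its 95% CI to the HR scale
        return (math.exp(est), math.exp(est - Z * s), math.exp(est + Z * s))

    return {"fixed": summary(y_fixed, se_fixed),
            "random": summary(y_re, se_re),
            "Q": q, "I2": i2, "tau2": tau2}

# illustrative (made-up) study-level hazard ratios with 95% CIs
result = pool_hazard_ratios([(1.30, 1.10, 1.54),
                             (1.20, 1.05, 1.37),
                             (1.60, 1.15, 2.23)])
print(result["fixed"], result["I2"])
```

for the sensitivity analyses described above, the same function would simply be re-run on the subset of studies meeting each adjustment criterion.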
after exclusion of ineligible articles and reference-list searching of relevant articles, we identified 40 articles for inclusion in this review. summary characteristics of included studies are reported in table s2. all studies were observational in design, with 33 cohort and 7 case-control studies included (table s2). thirty-two studies were conducted in asian countries. four studies were restricted to male cohorts and 36 were undertaken in mixed-sex cohorts. all studies recruited participants from health centres, healthcare or prescription databases, or pre-existing cohorts or cancer screening programmes. all studies were undertaken in adults, with mean/median cohort ages ranging between 40 and 65 years in 33 studies. thirty-three studies investigated dm/insulin resistance/fasting serum glucose, 11 studies investigated hypertension/blood pressure, 7 investigated dyslipidaemia, and 5 investigated obesity and cardiovascular disease. fewer than 5 studies investigated other factors including renal disease, statin use and use of antidiabetic drugs. in the 40 studies, including 536,456 adults, >30,500 hcc events occurred (we are unable to report an exact number, because one study did not report a precise number of hcc cases (31)). sample sizes of cohort studies varied widely, the smallest comprising 102 participants. among the 40 studies, 39 had quality scores ≥5 (tables s3 and s4). all cohort studies were of sufficient quality, with 13 of these scored as high quality. six case-control studies were of sufficient quality and one of poor quality. inclusion criteria varied widely and therefore study populations were heterogeneous. in most studies, exposures and outcomes were ascertained using health assessment, imaging or record linkage. twenty-three cohort studies and 7 case-control studies accounted for age and sex.
hcc typically arises after long durations of infection, and prolonged follow-up therefore allows detection of more hcc events; among the 23 cohort studies identified, only 5 had lengths of follow-up ≥10 years. thirty-six studies investigated the association of dm with risk of chb progression to hcc, comprising 7 case-control studies (table 1a) and 29 cohort studies (table 1b). four studies were restricted to males and the others included both sexes (table s2). mean ages at baseline were ≥40 years in all studies. study populations were heterogeneous, with variable inclusion criteria, and definitions of dm were not consistent between studies. four case-control and four cohort studies investigated type 2 dm/insulin resistance, three case-control and seven cohort studies investigated unspecified dm, and one case-control and three cohort studies investigated both type 1 and type 2 dm as a composite potential risk factor. among the case-control studies there was directional inconsistency between reported effect estimates, with 4 studies reporting an increased risk of hcc in those with dm compared with those without, 3 studies reporting a decreased risk of hcc in those with dm, and one study failing to provide an effect estimate. risk ratios (rrs) >1 ranged from 1.35 to 2.04, and all were statistically significant. rrs <1 ranged from 0.19 to 0.80, of which two were statistically significant. among 28 cohort studies providing effect estimates (27 hrs and 1 or), there was directional consistency, with 27 of the reported rrs >1. effect sizes >1 ranged from 1.05 to 6.80, with 15 rrs being statistically significant. the single rr that was <1 was nonsignificant.

all rights reserved. no reuse allowed without permission. the copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted medrxiv a license to display the preprint in perpetuity. this version posted august 24, 2020. https://doi.org/10.1101/2020.08.21.20179234

most case-control studies adjusted for age, sex, hcv coinfection, hiv coinfection and cirrhosis. twenty cohort studies minimally adjusted for age and sex. of these, 15 adjusted for hcv coinfection, 13 for cirrhosis, 12 for antiviral treatment, 10 for hiv coinfection, 9 for alcohol consumption, 7 each for hbv viral dna load and cigarette smoking, and 6 for other liver disease (including alcoholic liver disease). eight studies excluded participants who developed hcc within the first 3 to 12 months of follow-up in their main analyses. one study did so in sensitivity analysis and found this did not modify the associations observed. dm was associated with an increased risk of progression to hcc in meta-analysis restricted to hrs minimally adjusted for age and sex (figure 2). as there was significant heterogeneity (i2=49%, p<0.01), results from both fixed- and random-effects analyses are presented. in the random-effects analysis, the risk of hcc was 36% higher (summary hr 1.36; 95% ci 1.23-1.49) in dm compared with non-dm. we performed sensitivity analyses in order to investigate the robustness of pooled estimates to additional adjustment for hcv or hiv coinfection, cirrhosis, and dm treatment. after restricting meta-analysis to 16 studies adjusting for hcv coinfection in addition to age and sex (figure s1), pooled hrs did not change materially. considering 8 studies adjusting for hiv and antiviral treatment (figure s2), the pooled hr from the fixed-effects analysis was attenuated slightly towards the null but remained significant. to investigate the robustness of the association of dm with hcc to adjustment for cirrhosis, a potential mediator, we restricted meta-analysis to studies adjusting for cirrhosis (figure s3). this did not change pooled hrs materially.
to investigate heterogeneity between type 2 dm and unspecified dm, sensitivity analysis was performed in which studies were stratified by dm type. amongst studies investigating type 2 dm, heterogeneity was 33% (p=0.18) (figure s4). however, the association of dm with hcc risk was attenuated towards the null in studies that adjusted for metformin use, with risk of hcc 16% higher in dm participants compared with non-dm (random effects hr 1.16, 95% ci 1.04-1.29) in analysis restricted to studies adjusting for metformin use (figure s5). after restricting to studies adjusting for dm treatment, pooled hrs remained statistically significant. eleven studies investigated the association of ht with risk of chb progression to hcc: one case-control study and 10 cohort studies (table 2). all studies were mixed-sex samples in which mean/median age at baseline was ≥40 years (table s2). definitions of ht were heterogeneous; most studies ascertained hypertension via record linkage, but others used health assessment or interview. few studies defined clinical thresholds for hypertension classification. "higher" map was the primary exposure of interest in the case-control study, for which a threshold was not defined. among 10 studies reporting hazards of hcc associated with ht, only three identified significantly increased risks, of which two were unadjusted and one was adjusted for age. another five studies reported an effect in the same direction, but effect sizes were not statistically significant. adjusted hrs >1 ranged from 1.19 to 1.70, and those <1 from 0.04 to 0.96. adjustment for confounders was poor, with only four hrs minimally adjusted for age and sex. seven studies investigated the association of dyslipidaemia with hcc risk in chb patients (table 3). all studies reported reduced risks of hcc in participants with dyslipidaemia compared with those without; however, only one hr was statistically significant.
clinical definitions of dyslipidaemia were often not reported, and only four studies minimally adjusted for age and sex. six studies investigated the association of obesity with hcc risk. clinical definitions of obesity varied greatly, and of four studies reporting increased risks of hcc with obesity, only one hr was statistically significant. three studies investigated the association of statin use with hcc risk in chb. all studies reported hrs <1, and two of these hrs were statistically significant. hrs reported in 5 studies for hcc risk associated with cvd varied, likely due to the variable definitions of cvd used across studies. associations for other variables, including respiratory disease and renal disease, were reported by <2 studies each. our meta-analysis suggests that dm is a risk factor for hcc in chb-infected individuals, with hazards of hcc >20% higher in the presence of dm; however, we report significant between-study heterogeneity. this association did not materially change after restriction to studies adjusting for relevant confounders, but did suggest a favourable impact of dm treatment with metformin. pooled effect estimates remained significant in sensitivity analyses. few studies investigated other comorbidities, and some comorbidity search terms included in our systematic literature search returned few or no results. this highlights the need for future investigation of these comorbidities, as antiviral treatment cannot eliminate the risk of hcc entirely, and therefore novel risk factors must be identified in order to inform interventions.
although easl (7) and apasl (32) guidelines recognise this association, it is not currently consistently described in other recommendations (e.g. aasld guidelines (33) do not list dm as a risk factor for hcc). some studies investigating comorbidities and their metabolic risk factors reported significantly reduced hazards in participants with these conditions compared with those without. this association may be confounded by the requirement for treatment in secondary care, whereby chb-infected individuals may be more likely to receive screening and antiviral treatment. findings from case-control and cohort studies were not consistent; whilst the majority of cohort studies reported increased risks of hcc associated with dm, case-control findings were inconsistent, and indeed three studies reported a significant reduction in hcc risk in association with dm. explanations for such findings include confounding, selection bias associated with the study of hospital control groups that enrich for dm (34,35), and chance, especially in small studies (34-37). our findings are consistent with a previous meta-analysis (38); we provide a comprehensive review of all cohort studies and include a larger number of studies. we restricted to studies reporting hrs minimally adjusted for age and sex. however, adjustment for covariates and inclusion criteria varied considerably between studies, and this may explain some of the between-study heterogeneity. substantial heterogeneity remained in sensitivity analyses restricted to studies adjusting for additional key confounders, as adjustment for confounders was variable within these studies and populations may not have been comparable.
although baseline age and sex characteristics were comparable across studies, there was variability regarding exclusion of those with additional comorbidities and those on antiviral treatment. we noted variable definitions of dm, with some studies restricting investigation to type 2 dm whereas others included participants with unspecified dm. risk factors for types 1 and 2 diabetes mellitus differ, and heterogeneity in dm definitions could therefore contribute to variable study populations and outcomes. global prevalence and incidence estimates for specific dm types do not exist, as distinguishing between types often requires expensive laboratory resources that are not available in many settings. however, most cases of type 1 diabetes are found in europe and north america, and the large majority of studies included in this systematic review and meta-analysis were conducted in asian countries (39). it is possible that varied lengths of follow-up also contributed to between-study heterogeneity, although hrs did not significantly vary with length of follow-up in sensitivity analysis. follow-up length matters because cancer is a chronic disease with slow development, and preclinical disease can be present for many years before clinical manifestation; follow-up times of <10 years may be insufficient to detect hcc outcomes. we were unable to provide effect estimates across most potential patient subgroups because the subgroups contained small numbers of studies, putting subgroup analyses at greater risk of chance findings as well as subject to the influence of multiple testing.
the association we report in this meta-analysis is weaker than those observed in patients with chronic hcv infection: in previous studies of individuals with chronic hcv infection, the risk of hcc was elevated ~2-fold in the presence of dm (40,41). previous studies also report increased risks of dm in hcv-infected individuals compared with non-infected individuals (76-79). however, this is likely due to the various extrahepatic manifestations of hcv, which are not present in hbv infection. in sensitivity analysis restricted to studies adjusting for cirrhosis, the observed association of dm with hcc was attenuated towards the null. this may be explained by confounding of the association by cirrhosis, accounted for by an independent association of cirrhosis with both dm and hcc, and the absence of cirrhosis from the causal pathway linking dm with hcc. however, if cirrhosis is located along this causal pathway, then it can be characterised as a mediator rather than a confounder; if cirrhosis is a mediator, then adjusting for it would be incorrect. past studies support a positive association of dm with hcc risk in non-chb patients. three studies adjusted for metformin use (53-55), and in sensitivity analysis restricted to these studies the association between dm and hcc remained significant but was attenuated towards the null. it is not known to what extent this reflects glucoregulation by metformin, accomplished by inhibition of hepatic gluconeogenesis and improvement of insulin sensitivity in tissues leading to reduced oxidative stress in the liver (56), and/or a direct impact of metformin in reducing cancer risk via regulation of cellular signalling. evidence from observational studies (57-59) and randomised controlled trials (rcts) (60) supports a protective effect of metformin against the development and progression of cancer in diabetic individuals. there is also some rct evidence for protective effects of metformin against progression of certain cancer types in non-diabetic individuals (61), although this is not consistent. multiple large-scale phase iii rcts are currently underway (62-65) and will provide further information regarding the roles of dm and metformin in cancer development. we included all studies investigating the association of comorbidities with risk of chb progression to hcc that minimally adjusted for age and sex, in order to provide a comprehensive review of available evidence. however, few studies investigated non-dm comorbidities, preventing meta-analysis for these comorbidities. additionally, we were unable to restrict our meta-analysis of dm and hcc to studies adjusting for confounders other than age and sex, as few studies minimally adjusted for all relevant factors. publication bias may influence the outcome, as we restricted our search to peer-reviewed literature, and studies that do not report an association of dm with hcc may be less likely to be published. our results may not be generalisable to the global chb population, as there were a limited number of studies from non-asian countries. the lack of studies from any african countries is of concern, given that the region carries both the highest hbv prevalence (3) and the largest mortality burdens for cirrhosis and hcc (66,67). our finding that dm is a risk factor for hcc in chb-infected individuals suggests that enhanced cancer surveillance may be justified in patients with chb and dm, to enable early detection and treatment.
improvements in guidelines could help to inform more consistent approaches to risk reduction. after adjustment for metformin use, this association remained significant but was attenuated, suggesting a potential benefit of metformin that warrants further study. ongoing investigation is required in order to identify and characterise risk factors for hcc, to extend these analyses to diverse global populations, and to elucidate disease mechanisms in order to inform prevention, screening and therapeutic intervention.
stanaway
assessing the quality of nonrandomised studies in meta-analyses [internet]. [cited 2020 jun 17]. available from: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp
table 1a. effect estimates for case-control studies investigating the association of diabetes mellitus with hepatocellular carcinoma risk. ahr, adjusted hazards ratio; uhr, unadjusted hazards ratio; bmi, body mass index; cvd, cardiovascular disease; ihd, ischaemic heart disease; acs, acute coronary syndrome; nafld, non-alcoholic fatty liver disease; copd, chronic obstructive pulmonary disease. † adjusted risk ratios are minimally adjusted for age and sex. ‡ defined specifically as hyperlipidaemia. § defined specifically as hypertriglyceridaemia. ¶ defined specifically as hypercholesterolaemia. † † adjusted for age but not sex. † † † metabolic risk factors (obesity, diabetes, hypertriglyceridaemia and ht), with exposure groups split into groups of 0, 1, 2 and ≥3 risk factors. hr, hazard ratio; ci, confidence interval; dm, diabetes mellitus.
association of diabetes duration and diabetes treatment with the risk of hepatocellular carcinoma
diabetes increases the risk of hepatocellular carcinoma in the united states: a population based case control study
metabolic syndrome and hepatocellular carcinoma risk
metabolic risk factors and primary liver cancer in a prospective study of 578,700 adults
global, regional, and national burden of chronic kidney disease, 1990-2017: a systematic analysis for the global burden of disease study
global burden of hypertension and systolic blood pressure of at least 110 to 115 mmhg
global burden of cvd: focus on secondary prevention of cardiovascular disease
the effects of metformin on the survival of colorectal cancer patients with diabetes mellitus
metformin and reduced risk of cancer in diabetic patients
metformin associated with lower cancer mortality in type 2 diabetes: zodiac-16. diabetes care
metformin for chemoprevention of metachronous colorectal adenoma or polyps in post-polypectomy patients without diabetes: a multicentre double-blind, placebo-controlled, randomised phase 3 trial
neoadjuvant chemotherapy with or without metformin in early breast cancer.
-full text view -clinicaltrials.gov [internet the metformin active surveillance trial (mast) study -full text view -clinicaltrials.gov [internet a phase iii randomized trial of metformin vs placebo in early stage breast view of hepatocellular carcinoma: trends, risk, prevention and management the global, regional, and national burden of cirrhosis by cause in 195 countries and territories, 1990-2017: a systematic analysis for the global burden of disease study diabetes poses a higher risk of hepatocellular carcinoma and mortality in patients with chronic hepatitis b: a population-based cohort study diabetes mellitus is a risk factor for hepatocellular carcinoma in patients with chronic hepatitis b virus infection in china association between hepatocellular carcinoma and type 2 diabetes mellitus in chinese hepatitis b virus cirrhosis patients: a case-control study statin use and the risk of hepatocellular carcinoma in patients with chronic hepatitis b real-world effectiveness from the asia pacific rim liver consortium for hbv risk score for the prediction of hepatocellular carcinoma in chronic hepatitis b patients treated with oral antiviral therapy -pubmed radiologic nonalcoholic fatty liver disease increases the risk of hepatocellular carcinoma in patients with suppressed chronic hepatitis b the influence of metabolic syndrome on the risk of hepatocellular carcinoma in patients with chronic hepatitis b infection in mainland china stratification of hepatocellular carcinoma risk through modified fib-4 index in chronic hepatitis b patients on entecavir therapy effects of diabetes and glycemic control on risk of hepatocellular carcinoma after seroclearance of hepatitis b surface antigen hepatocellular carcinoma in the absence of cirrhosis in patients with chronic hepatitis b virus infection insulin resistance and the risk of hepatocellular carcinoma in chronic hepatitis b patients prognosis of patients with chronic hepatitis b in france (2008-2013): a nationwide, 
observational and hospitalbased study liver cirrhosis stages and the incidence of hepatocellular carcinoma in chronic hepatitis b patients receiving antiviral therapy the impact of pnpla3 (rs738409 c>g) polymorphisms on liver histology and long-term clinical outcome in chronic hepatitis b patients. liver int increased risk of hepatocellular carcinoma in chronic hepatitis b patients with new onset diabetes: a nationwide cohort study type 2 diabetes: a risk factor for liver mortality and complications in hepatitis b cirrhosis patients determinants virological response to entecavir on the development of hepatocellular carcinoma in hepatitis b viral cirrhotic patients: comparison between compensated and decompensated cirrhosis diabetes mellitus, metabolic syndrome and obesity are not significant risk factors for hepatocellular carcinoma in an hbv-and hcv-endemic area of southern taiwan risk factors for hepatocellular carcinoma in a cohort infected with hepatitis b or c the impact of type 2 diabetes on the development of hepatocellular carcinoma in different viral hepatitis statuses metabolic factors and risk of hepatocellular carcinoma by chronic hepatitis b/c infection: a follow-up study in taiwan body-mass index and progression of hepatitis b: a population-based cohort study in men type 2 diabetes and hepatocellular carcinoma: a cohort study in high prevalence area of hepatitis virus infection obesity and hepatocellular carcinoma in patients receiving entecavir for chronic hepatitis b thiazolidinediones reduce the risk of hepatocellular carcinoma and hepatic events in diabetic patients with chronic hepatitis b influence of metabolic risk factors on risk of hepatocellular carcinoma and liver-related death in men with chronic hepatitis b: a large cohort study adapting a clinical comorbidity index for use with icd-9-cm administrative databases all rights reserved. no reuse allowed without permission. 
key: cord-021959-1y67126b authors: madanoglu, melih title: state-of-the-art cost of capital in hospitality strategic management date: 2009-11-16 journal: handbook of hospitality strategic management doi: 10.1016/b978-0-08-045079-7.00006-5 sha: doc_id: 21959 cord_uid: 1y67126b making well-informed and effective capital investment decisions lies at the heart of any successful business organization. however, prior to investing in a project, an executive/manager should make three key estimates to ensure the viability of a business project: the economic useful life of the asset, the future cash flows that the project will generate, and the discount rate that properly accounts for the time value of the capital invested and compensates the investors for the risk they bear by investing in that project (olsen et al., 1998). although the first two items are fairly challenging to estimate, the last one is even more challenging. in their book on cost of capital, ogier et al. (2004) provided an excellent example which i would like to use to provide a practical introduction to this chapter. i take the liberty to modify the story in accordance with the needs of this chapter. imagine yourself at the edge of a river where your goal is to cross the river while getting minimally wet in the least possible time. before making your move you need to turn to a local inhabitant who knows which stepping stones are safe, what the velocity and the viscosity of the water are, what the turning moments are, and what the probability of loose stones on the stream bed is. this situation is similar to the world of today's business investments. that is, executives need to make informed decisions about their investments and find out the minimum acceptable rate of return their shareholders expect as compensation for the risks investors undertake.
in addition, when an investment consists of both debt and equity, the executives need to estimate the total cost of capital employed in this project to be able to pay their debt holders. this chapter intends to serve as a field guide or handbook of cost of capital estimation for hospitality executives and practitioners. however, before getting into the practical aspects of cost of capital, some relevant concepts will be discussed from a theoretical perspective to better understand the background of this important topic. prior to getting into the core of the subject of estimating cost of capital, it is useful to define what risk is and describe the role it plays in investment decisions. in the hospitality field, risk is often defined as the variation in returns (probable outcomes) over the life of an investment project (choi, 1999; olsen et al., 1998). the concept of risk is at the foundation of every firm as it seeks to compete in its business environment. financial theory states that shareholders face two types of risk: systematic and unsystematic. examples of systematic risk include changes in monetary and fiscal policies, the cost of energy, tax laws, and the demographics of the marketplace. finance scholars refer to the variability of a firm's stock returns that moves in unison with these macroeconomic influences as systematic, or stockholder, risk (lubatkin and chatterjee, 1994). stated differently, the level of a firm's systematic risk is determined by the degree of uncertainty associated with general economic forces and the responsiveness, or sensitivity, of a firm's returns to those forces (helfat and teece, 1987). in other words, these types of risk are external to the company and outside of its control. however, the loss of a major customer as a result of its bankruptcy represents one source of unsystematic, or firm-specific, risk (idiosyncratic or stakeholder risk).
other sources of unsystematic risk include the death of a high-ranking executive, a fire at a production facility, and the sudden obsolescence of a critical product technology (lubatkin and chatterjee, 1994). unsystematic risk is a type of risk that can be eliminated by an individual investor by investing his/her funds in multiple companies' stocks. the same rule may not apply to company executives, since the success of a single project determines their tenure within their firms. traditional financial theory looks at investment in securities from a portfolio perspective by assuming that investors are risk-averse and can eliminate the unsystematic risks (variance) associated with investing in any particular firm by holding a diversified portfolio of stocks (markowitz, 1952, 1959). markowitz pioneered the application of decision theory to investments by contending that portfolio optimization is characterized by a trade-off of the reward (expected return) of an individual security against portfolio risk. since the key aspect of that theory is the notion that a security's risk is its contribution to portfolio risk, rather than its own risk, it presumes that the only risks that matter to investors are those that are systematically associated with market-wide variance in returns (lubatkin and schulze, 2003; rosenberg, 1981). investors, it argues, should only be concerned about the impact that an alternative investment might have on the risk-return properties of their portfolio. however, the capital asset pricing model (capm) (lintner, 1965; sharpe, 1964) (to be discussed in detail later) does not explicitly explain what criteria investors should use to select alternative investments and how they should assess the risk features of these investments.
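the claim above, that an individual investor can diversify away unsystematic risk by holding many stocks, can be illustrated with a small monte carlo sketch. the return figures and the assumption of purely independent firm-specific shocks are hypothetical, chosen only for illustration:

```python
import random

def portfolio_std(n_stocks, n_trials=20000, idio_sigma=0.30, seed=42):
    """standard deviation of an equally weighted portfolio of n_stocks
    whose returns carry only independent (unsystematic) shocks."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_trials):
        # each stock: 8% mean return, 30% idiosyncratic volatility (assumed)
        r = sum(rng.gauss(0.08, idio_sigma) for _ in range(n_stocks)) / n_stocks
        sims.append(r)
    mean = sum(sims) / n_trials
    return (sum((r - mean) ** 2 for r in sims) / n_trials) ** 0.5

single = portfolio_std(1)        # roughly the 30% firm-level volatility
diversified = portfolio_std(30)  # shrinks roughly as 1/sqrt(n)
```

with only independent shocks, volatility falls toward zero as the portfolio grows; in reality a common market factor would leave a systematic floor that no amount of diversification removes.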
moreover, the capm assumes that because investors can eliminate the risks they do not wish to bear, at relatively low cost to them, through diversification and other financial strategies, there is little need for managers to engage in risk-management activities (lubatkin and schulze, 2003). in contrast, the field of strategic management is based on the premise that to gain competitive advantage, firms must make strategic, or hard-to-reverse, investments in competitive methods (portfolios of products and services) that create value for their shareholders, employees, and customers in ways that rivals will have difficulty imitating (olsen et al., 1998). these investments enable firms to protect their earnings from competitive pressure and allow them to increase the level of their future cash flow while simultaneously reducing the uncertainty associated with it. the management of firm-specific risk lies at the heart of strategic management theories (bettis, 1983; lubatkin and schulze, 2003), and, from this perspective, management must work hard at avoiding investments that create additional levels of risk for the firm. bettis (1983) further affirms that the capm's emphasis on the equilibration of returns across firms (i.e., systematic risk) relegates to a secondary role strategy's central concern with managerial actions that seek to delay that equilibration of returns (i.e., unsystematic risks). thus, the claim that systematic risk is paramount to the firm is undermined by two arguable assumptions from portfolio theory: that stockholders are fully diversified, and that the capital markets operate without such imperfections as transaction costs and taxes. some stockholders, however, are not fully diversified, particularly the corporate managers, who have heavily invested, both financially and personally, in a single company (vancil, 1987).
also, transaction costs, such as brokerage fees, act as a minor impediment, inhibiting other stockholders from completely eliminating unsystematic risk ( constantinides, 1986 ) . finally, taxes make all stockholders somewhat concerned with unsystematic risk (amit and wernerfelt, 1990; hayn, 1989 ) because interest on debt financing is tax deductible, thereby allowing firms to pass a portion of the cost of capital from their stockholders to the government. thus, firms can create value for their stockholders, within limits, by financing investments with debt rather than equity (kaplan, 1989; smith, 1990) . the limits are determined in part by the amount a firm is allowed to borrow and the terms of such debt, both of which are contingent upon the unsystematic variation in the firm's income streams. lubatkin and chatterjee (1994) contend that the debt markets favour firms with low unsystematic risk because they are less likely to default on their loans (this is particularly the case of the hospitality industry firms). in summary, the discussion of partially diversified stockholders, transaction costs, and leverage suggests that some stockholders may be concerned with unsystematic risk and factor it along with market risk to determine the value of a firm's stock (amit and wernerfelt, 1990; aron, 1988 ; lubatkin and schulze, 2003 ; marshall et al. , 1984 ) . cost of capital is defined as the rate of return a firm must earn on its investment projects in order to maintain its market value and continue attracting needed funds for its operations ( fields and kwansa, 1993 ; gitman, 1991 ) . consequently, a firm adds shareholder wealth when it undertakes the projects that generate a return higher than the cost of capital of the project. cost of capital is an anchor in firm valuation, project valuation, and capital investment decisions. 
cost of capital is generally referred to as weighted average cost of capital (wacc): wacc = (e/v) × r_e + (d/v) × r_d × (1 − t_c), where e is the market value of equity, d the market value of debt (and thus v = e + d), t_c the corporate tax rate, r_e the cost of equity, and r_d the cost of debt (copeland et al., 2000). both of these items (r_d and r_e) are difficult to estimate and require some careful deliberation. the cost of debt is relatively simpler to calculate when a hypothetical firm issues bonds that are rated by the major bond-rating agencies such as standard & poor's and moody's. thus, these ratings may be used as a guide in computing the cost of debt. in addition, an investor may use the bond's yield to maturity or the rate of return that is in congruence with the rating of a bond. averaging the interest rates of a firm's long-term obligations is another method to calculate the cost of debt. the cost of debt estimation becomes difficult when a given firm has no bonds and no outstanding long-term debt. the cost of equity is difficult to estimate in its own right. first, cost of equity is generally estimated using historical data, which may be confounded by business cycles and abnormal events affecting firm stock returns (e.g., a fire in a hotel property) and industry returns (e.g., the terrorism events of 11 september 2001). second, although several methods were developed in the last 40 years, there is no single method that produces consistent and reliable estimates. last, a hypothetical executive/entrepreneur will face greater challenges as he/she needs to estimate the required rate of return of a single restaurant/hotel unit. the next section covers some of the common methods that are used by practitioners in the fields of financial and strategic management. cost of equity can be defined as the rate of return a firm must deliver to its shareholders who have foregone other investment opportunities and elected to invest in this particular company.
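the wacc formula above can be expressed directly in code; the firm figures below are hypothetical, purely for illustration:

```python
def wacc(equity, debt, r_e, r_d, tax_rate):
    """weighted average cost of capital:
    wacc = (e/v) * r_e + (d/v) * r_d * (1 - t_c), with v = e + d."""
    v = equity + debt
    return (equity / v) * r_e + (debt / v) * r_d * (1 - tax_rate)

# hypothetical firm: $600m equity costing 12%, $400m debt at 6%, 25% tax rate
rate = wacc(600, 400, r_e=0.12, r_d=0.06, tax_rate=0.25)  # ≈ 0.09, i.e. 9%
```

note how the tax shield enters: only the debt leg is multiplied by (1 − t_c), because interest on debt is tax deductible.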
however, cost of equity is a complex concept because firms do not promise to pay a certain level of dividends or deliver a certain level of stock returns. thus, since there is no contractual agreement between the shareholders and the firm, the expected rate of return on invested equity is extremely challenging to estimate. fortunately, there are some models that can help us in tackling this challenging task. the next section will cover the major cost of equity models that gained prominence among practitioners and researchers in the last four decades. one of the early forward-looking methodologies is the dividend growth model (dgm) originally developed by gordon (1962). it offers a very parsimonious method for estimating the discount rate and thus accounting for risk. the dividend growth approach to cost of equity states that k_e = dps/p + g, where k_e is the cost of common equity, dps the projected dividend per share, p the current market price per share, and g the projected dividend growth rate. the model assumes that over time, successful reinvestment of the value received through retained earnings will lead to growth and growing dividends. the approach suffers from oversimplification because firms vary greatly in their rate of dividend payout (helfert, 2003). this is due to the fact that common stockholders are the residual owners of all earnings not reserved for other obligations, and dividends paid are usually only a portion of the earnings accruing to common shares. the other major difficulty in applying this model lies in determining the specific dividend growth rate, which is based on future performance tempered by past experience. another key issue is that the model becomes unusable when a firm is not a dividend payer.
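as a minimal sketch of the dividend growth model just described (the share figures are hypothetical):

```python
def dgm_cost_of_equity(dps, price, growth):
    """gordon dividend growth model: k_e = dps / p + g."""
    if price <= 0:
        raise ValueError("price must be positive")
    return dps / price + growth

# hypothetical dividend payer: $2 projected dividend, $40 share price, 4% growth
k_e = dgm_cost_of_equity(dps=2.0, price=40.0, growth=0.04)  # ≈ 0.09, i.e. 9%
```

the limitation noted above is visible in the formula itself: with dps = 0 (a non-payer), k_e collapses to the assumed growth rate and the model becomes uninformative.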
the capm (lintner, 1965; sharpe, 1964) is based on the assumption of a positive risk-return trade-off and asserts that the expected return of an asset is determined by three variables: β (a function of the stock's responsiveness to the overall movements in the market), the risk-free rate of return, and the expected market return (fama and french, 1992). the model assumes that investors are risk-averse and, when choosing among portfolios, they are only concerned about the mean and variance of their one-period investment return. this argument is, in essence, the cornerstone of the capm. the model can be stated as e(r_i) = r_f + β(r_m − r_f), where r_m is the market return of stocks and securities, r_f the risk-free rate, β the coefficient that measures the covariance of the risky asset with the market portfolio, and e(r_i) the expected return of stock i. although the capm is touted for its relatively simple application, several other studies (lakonishok and shapiro, 1986; reinganum, 1981) present evidence that the positive relationship between β and returns could not be demonstrated for the periods they examined. particularly over the last two decades, even stronger evidence has been developed against the capm by fama and french (1992, 1997) and roll and ross (1994). these researchers challenged the model by contending that it is difficult to find the right proxy for the market portfolio, that the capm does not appear to accurately reflect firm size in the cost of equity calculation, and that not all systematic risk factors are reflected in returns of the market portfolio. from the strategic management perspective, business executives face the following issues. implicit in the capm is the recommendation that managers should focus on managing their firm's overall market risk by focusing on β, or the firm's systematic risk, and not be concerned with what strategists may focus on: firm-specific (unsystematic) risk. chatterjee et al.
(1999) claim that herein lie two dilemmas: first, decreasing β requires managers to reduce investors' exposure to macroeconomic uncertainties at a cost lower than what investors could achieve on their own by diversifying their own portfolios; and second, downplaying the importance of firm-specific risk not only is contrary to the strategic management field but also tempts corporate bankruptcy (bettis, 1983). therefore, an executive of a given company has to take into account the total risk of the project because, unlike investors holding stocks of multiple companies, the executive may not be able to diversify the risk of his/her company's investment by investing in multiple projects. another prominent cost of equity model is the arbitrage pricing theory (apt) developed by ross (1976). the model states that factors other than β affect the systematic risk. the apt is based on the assumption that there are some major macroeconomic factors that influence security returns. the apt states that no matter how thoroughly investors diversify, they cannot avoid these factors. thus, investors will "price" these factors precisely because they are sources of risk that cannot be diversified away. that is, they will demand compensation in terms of expected return for holding securities exposed to these risks (goetzmann, 1996). although the model does not explicitly specify the risk factors, the apt depicts a world with many possible sources of risk and uncertainty, instead of seeking an equilibrium in which all investors hold the same portfolio. just like the capm, this exposure is measured by a factor β (goetzmann, 1996). chen et al. (1986) managed to identify five macroeconomic factors that, in their view, explain expected asset returns: the industrial production index, which is a measure of the state of the economy based on actual physical output; the short-term interest rate, measured by the difference between the yield on treasury bills (tb) and the consumer price index (cpi); short-term inflation, measured by unexpected changes in the cpi; long-term inflation, measured as the difference between the yield to maturity on long- and short-term u.s. government bonds; and default risk, measured by the difference between the yield to maturity on aaa- and baa-rated long-term corporate bonds (chen et al., 1986; copeland et al., 2000). the apt describes a world in which investors behave intelligently by diversifying, but they may choose their own systematic profile of risk and return by selecting a portfolio with its own peculiar array of βs. the apt allows a world where occasional mispricings occur. investors constantly seek information about these mispricings and exploit them as they find them. in other words, the apt somewhat realistically reflects the world in which we live (goetzmann, 1996). although the apt provides the benefits explained above, these benefits come with some drawbacks. the apt demands that investors perceive the risk sources, and that they reasonably estimate factor sensitivities. in fact, even professionals and academics are yet to agree on the identity of the risk factors, and the more βs they have to estimate, the more statistical noise they have to put up with. last, this model does not offer much guidance to business executives as it focuses primarily on investors.
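the capm discussed above can be sketched numerically; β is estimated here in the standard way, as the covariance of stock and market returns divided by the market variance, and the return series are hypothetical:

```python
def beta(stock_returns, market_returns):
    """β = cov(r_i, r_m) / var(r_m), estimated from historical return series."""
    n = len(stock_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

def capm(r_f, b, r_m):
    """capm: e(r_i) = r_f + β (r_m - r_f)."""
    return r_f + b * (r_m - r_f)

# a stock that moves exactly 1.2x with the market has β = 1.2 by construction
market = [0.05, -0.02, 0.03, 0.01]
stock = [1.2 * r for r in market]
b = beta(stock, market)              # ≈ 1.2
k_e = capm(r_f=0.03, b=b, r_m=0.08)  # ≈ 0.03 + 1.2 * 0.05 = 0.09
```

an apt-style estimate would replace the single market term with a sum of factor premiums, one β per macroeconomic factor.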
although fama was one of the major proponents of the capm, fama and french (1993) found that the relationship between average returns and β was flat and that there was a strong size effect on stock returns. as a result, they developed a model that has gained popularity in recent years among scholars and practitioners in the hospitality industry. the fama-french (ff) model is a multifactor model that argues that factors other than the movement of the market and the risk-free rate impact security prices. the ff model is a multiple regression model that incorporates both size and financial distress in the regression equation. the ff model is typically stated as e(r_i) = r_f + β(r_m − r_f) + s × smb + h × hml, where β is the coefficient that measures the covariance of the risky asset with the market portfolio, r_m the market return, r_f the risk-free rate, s a slope coefficient, small minus big (smb) the difference between the returns on portfolios of small and big company stocks (below or above the nyse median), h a slope coefficient, and high minus low (hml) the difference between the returns on portfolios of high- and low-be/me (book equity/market equity) stocks (above and below the 0.7 and 0.3 fractiles of be/me) (fama and french, 1993). the size factor is denoted as the smb premium, where size is measured by market capitalization. smb is the average return on three small portfolios minus the average return on three big portfolios as described by fama and french (1993). hml is the average return on two value portfolios minus the average return on two growth portfolios (fama and french, 1993). high be/me (value) stocks are associated with distress that produces persistently low earnings on book equity, which results in low stock prices. in practice, the ff model shows that investors holding stocks of small-capitalization companies and firms with high book-to-market value ratios (annin, 1997) need to be compensated for the additional risk they are bearing.
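the three-factor model described above translates directly into code; the loadings and factor premiums below are hypothetical values for a notional small-cap value stock:

```python
def ff3_expected_return(r_f, b, r_m, s, smb, h, hml):
    """fama-french three-factor model:
    e(r_i) = r_f + β (r_m - r_f) + s * smb + h * hml."""
    return r_f + b * (r_m - r_f) + s * smb + h * hml

# hypothetical loadings: β = 1.1, s = 0.6 (small), h = 0.4 (value),
# with assumed annual premiums: market 5%, smb 2%, hml 3%
k_e = ff3_expected_return(r_f=0.03, b=1.1, r_m=0.08,
                          s=0.6, smb=0.02, h=0.4, hml=0.03)
# 0.03 + 1.1*0.05 + 0.6*0.02 + 0.4*0.03 ≈ 0.109, i.e. 10.9%
```

setting s = h = 0 recovers the plain capm estimate, which makes explicit how the size and value premiums raise the cost of equity for small, high-be/me firms.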
the size argument is supported by barad (2001), who reports that small stocks outperformed their larger counterparts by an average of 5.4% over the last 75 years. however, fama and french (1993) find that the book-to-market factor (hml) produces an average premium of 0.40% per month (t = 2.91) for the 1963-1990 period, which, in the authors' view, is large in both practical and statistical terms. the search for the best method for estimating the cost of equity can start by reviewing the relevant studies undertaken in the fields of hospitality and tourism. fields and kwansa (1993) conducted the first study that directly looked into the cost of equity and suggested the use of the pure-play technique for estimating the cost of equity for the divisions of a diversified firm. later, several studies investigated how macroeconomic variables affect security returns in the hospitality industry (hotels and restaurants). the first study was conducted by barrows and naka (1994). their study encompassed the 27-year period between 1965 and 1991 and employed five factors that were slightly different from the five factors of chen et al. (1986). barrows and naka postulated that the return of the stocks is a function of the following five factors: r = f(einf, m1, conn, term, ip), where einf is the expected inflation, m1 the money supply, conn the domestic consumption, term the term structure of interest rates, and ip the industrial production. the results revealed that none of the macroeconomic factors was significant in explaining the variance of u.s. hotel stocks at the 0.05 level, and the factors accounted for 7.8% of the variance in the lodging stocks. however, einf, m1, and conn had a significant effect on the variation of stock returns in the u.s. restaurant industry. in terms of the signs of the β coefficients, einf had a negative relationship with the restaurant stock returns, whereas m1 and conn had a positive relationship.
the postulated model explained 12% of the variance in the restaurant stocks. the authors cautioned that the results should be interpreted with care due to the small sample size of both restaurant and hotel portfolios, which were represented by five and three stocks, respectively. the second study was undertaken by chen et al. (2005), who used hotel stocks listed on the taiwan stock exchange. the macroeconomic variables included in their study were ip, cpi, unemployment rate (uep), money supply (m2), 10-year government bond yield (lgb), and the 3-month tb rate. these variables were used in the following way: cpi was utilized to estimate einf, and lgb and tb were used for the computation of the yield spread (spd). based on these six time series, the authors arrived at the common five macroeconomic variables predominantly used in the literature, namely ip (change in industrial production), einf, uep (change in unemployment rate), m2 (change in money supply), and spd (rate of the yield spread). these five variables explained merely 8% of the variation in hotel stock returns, while only two of them were significant at the 0.05 level (m2 and uep). the regression coefficient of change in money supply had a positive relationship with hotel stock returns, whereas the relationship between change in uep and lodging returns was negative. madanoglu and olsen (2005) proposed a conceptual framework that called for the inclusion of some intangible variables in the cost of equity estimation in the lodging industry. some of these variables were human capital, brand, technology, and safety and security. it is common knowledge that these variables are relevant for the lodging industry; however, there exists no time-series data to include them in cost of equity estimations. publicly traded multinational lodging companies tend to differ on some key points regarding how assets are treated on their balance sheets.
many of these companies do not actually own assets and produce their future cash flows from management contracts or franchise agreements. in many cases, they may also lease hotels or restaurants and the leases do not appear on their balance sheets. instead, these firms hold an equity position in a different company that holds these leases. therefore, it is almost unfeasible to properly assess the book value of the hospitality firms, which confounds the application of the ff model. sheel (1995) was the first researcher in the hospitality industry to point out that capm does not seem to meet the industry needs and called for further research into industry-specific factors. in the mainstream financial economics, downe (2000) argued that in a world of increasing returns, risk cannot be considered a function of only systematic factors, and thus β . he pointed out that the position of the firm in the industry, as well as the nature of the industry itself become a risk factor. thus, firms with a dominant position in the industry that succeed to adapt to the complexities of the business environment, will have a different risk profile than their competitors. this argument is particularly well fitting in the context of the hospitality industry where companies such as mcdonald's and marriott may demonstrate a different risk profile based on their market share in their segments. as for ff factors, professionals in the lodging industry are sceptical about such measures as the book-to-market value ratio (hml). some hospitality industry experts argue that hml is an inappropriate measure for the industry and attribute it to the fact that the difference between the firms whose value is captured by the assets they own and the firms whose value is derived from their intangible assets is not as distinct as in some manufacturing firms. 
while jagannathan and wang's study (1996) added a human capital variable to their cost of equity capital model, it measured human capital effects from the macroeconomic perspective as opposed to a micro level where most hotel firms operate. in other words, the overall labour index may not properly reflect the state of the human capital in the hospitality industry. as fama and french (1993) stated, their work (ff model) leaves many open questions. the most important missing piece of the puzzle is that fama and french (1993) have not shown how the size and book-to-market factors in security returns are driven by the stochastic behaviour of firm earnings. this implies that it is not yet known how firm fundamentals such as profitability or growth produce common variation in returns associated with size and be/me factors and this variation is not captured by the market return itself. these authors further query whether specific fundamentals can be identified as state variables (variables that describe variation in the investment opportunity set) and these variables are independent of the market and carry a different premium than general market risk. this question is of utmost importance for lodging industry executives who are aiming to identify the major drivers of their companies ' stock returns in their effort to create value for their stockholders. in their current state, the cost of equity models are far from satisfying the needs of the hospitality industry. as fama and french (1997) pointed out, the cost of equity estimates yielded by these models are distressingly imprecise. standard errors of more than 3% per year were typical when the capm and ff models were used to estimate industry costs of equity in their study ( fama and french, 1997 ) . they stated that large standard errors are driven primarily by uncertainty about true factor risk premiums. 
Since the hospitality industry is really an aggregate of individual units that all have their own unique business environments and return on equity structures, the standard errors, and thus the cost of equity capital, estimated on a per-company or single-unit (a hotel property or a restaurant) basis, or for a new project, will be even more imprecise. The risk determinants of the cost of equity and the risk factor loadings for individual operating units will therefore be even more difficult to estimate. Thus, it is very important to consider the purpose for which the cost of equity is estimated (e.g., a single project, a business division, or an entire corporation). Particularly in the case of single-project cost of equity estimations, several factors may need to be considered before arriving at the proper discount rate for the project: the location of the project, local/regional competition, political risk, credit risk, and other risks idiosyncratic to the project. Consequently, as Ogier et al. (2004) suggest, when estimating the cost of equity for a given project, the risk of the project will be much more important than the risk level of the corporation making the investment. In other words, when Marriott Corporation makes a capital investment decision in Nairobi, Kenya, its executives will be much more concerned with the risks surrounding that project. Unlike the cost of equity, the cost of debt does not require the use of sophisticated theoretical models. Rather, the cost of debt is simply the rate at which a given company can borrow capital from a lender (e.g., a bank) or the rate at which it can issue bonds. Some experts caution that the promised and the expected yields of debt are two different concepts. In other words, when a firm makes its contracted debt payments on time, it meets "the promised yield" to its lender.
In reality, however, there is always a possibility of default, and the promised yield adjusted for the probability of default equals the expected yield. The expected yield can be regarded as the true cost of debt, since it is more realistic. Although many textbooks calculate the cost of debt as the promised yield, the expected yield is more meaningful because it reflects not only the systematic risk of the market but also the firm-specific risk of a given firm. Another challenge in calculating the cost of debt arises when a firm uses multiple debt instruments (e.g., bank loans, commercial paper, bonds). In this case, it may be fruitful to average the rates of these instruments based on their weights in the debt portfolio. An easier, more simplistic approach, however, is to use the "generic long-term debt" rate, which can be calculated from the current rate on the company's bonds or the current rate at which the company can borrow long term (Ogier et al., 2004). Last, in estimating the cost of debt, the issue of the tax shield should be given close consideration. For instance, although the majority of finance textbooks use 35 or 40% as an average corporate tax rate in the United States, it is a common occurrence to observe companies whose effective corporate tax rate is lower than the statutory rate. Here, an executive should assess the situation and decide whether the below-statutory effective tax rate is expected to continue in the long term. If so, he/she should use the effective tax rate in calculating the cost of debt; if the low effective tax rate is a short-term occurrence, the firm should use the statutory corporate tax rate instead (Ogier et al., 2004). The hospitality industry is part of the overall service sector and depends on human capital to maintain and grow its operations.
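The cost-of-debt ideas above can be sketched in a few lines of code. This is a minimal sketch, not a method from the text: the debt portfolio, default probability, and loss figures are all hypothetical.

```python
# Sketch of the cost-of-debt concepts above; all figures are hypothetical.

def expected_yield(promised_yield: float, default_prob: float, loss_given_default: float) -> float:
    """Expected yield: the promised yield reduced by the expected default loss."""
    return promised_yield - default_prob * loss_given_default

def after_tax_cost_of_debt(instruments: list[tuple[float, float]], tax_rate: float) -> float:
    """Weight each (amount, rate) instrument by its share of the debt
    portfolio, then apply the tax shield."""
    total = sum(amount for amount, _ in instruments)
    pretax = sum((amount / total) * rate for amount, rate in instruments)
    return pretax * (1.0 - tax_rate)

# Hypothetical portfolio: a $40m bank loan at 7% and $60m of bonds at 8%,
# with a 38% statutory tax rate.
kd = after_tax_cost_of_debt([(40.0, 0.07), (60.0, 0.08)], tax_rate=0.38)
ey = expected_yield(promised_yield=0.08, default_prob=0.02, loss_given_default=0.5)
```

The weighted-average route corresponds to the portfolio approach described above; swapping in a single "generic long-term debt" rate would replace the `instruments` list with one entry.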
In an increasingly competitive environment, the human factor becomes one of the keys to creating sustainable competitive advantage. Murphy (2003) therefore stated that the hospitality industry should learn to view its employees from a new paradigm: human capital is a strategic intangible asset (knowledge, experience, skills, etc.). This implies that, like other assets, it is an important determinant of firm value. However, studies have concluded that research on human resources expenditures is in its infancy and is seriously hampered by the absence of publicly disclosed corporate data on human resources (Lev, 2001). Caroll and Sikich (1999) argued that keeping at least a 3-year history of labour costs would serve to identify the dollar value of "premium" labour-related costs, which can be thought of as all labour/benefit costs above the federally mandated minimum wage. Other techniques proposed by the authors were (1) to design a scoring system that illustrates productivity versus both baseline and premium labour/benefit costs by department, and (2) to establish metrics to determine a productivity level for guest experience standards, facilities standards, and targeted revenue improvements on a department-by-department basis. Bloxham (2003) advocated adjusting certain human resource expenditures to capitalize them over the life of the investment. In that approach, one-time human resources costs are capitalized and amortized in the value creation equation, in an effort to demonstrate that human capital investments go beyond being a cost item in a firm's operations. These costs can include recruiting, interviewing, and hiring costs; one-time hiring bonuses and relocation expenses; and training costs. The costs are capitalized and amortized over the average employee tenure with the company.
In this approach, if employee turnover is high, these costs are amortized over a shorter period (and the annual charge is therefore higher), whereas a longer-tenured workforce enables the firm to spread the costs over a longer period. Kalafut and Low (2001) reported that in a study of the airline industry conducted by Cap Gemini Ernst & Young's Center for Business Innovation (CBI), the employee category was the single greatest value driver affecting firms' market value: the employee factor had a positive correlation of 0.68 with firm value. Thus, Kalafut and Low (2001) conclude that, in the aggregate, the quality and talent of the workforce, the quality of labour-management relations, and diversity are critically important in the value creation process of airline companies. These arguments can be justified on the grounds that higher-quality human resources decrease labour turnover and increase employee productivity. This results in better organizational performance, which stabilizes cash flows and in turn decreases the uncertainty of a firm's stock returns. One would therefore expect hospitality firms that have institutionalized quality human resource management practices to achieve more realistic cost of equity estimates that reflect the lower risk associated with these practices. Although definitions of the concept of brand differ across the professional and trade literature, the underlying notion is that of a distinctive name of which the customer has a higher level of awareness, and for which the customer is willing to pay a higher-than-otherwise average price or make a higher-than-otherwise purchase frequency (Barth et al., 1998). A brand is the product or service of a particular supplier that is differentiated by its name and by perceived expectations on the part of the consumer. Brands are important and valuable because they provide a "certainty" as to future cash flows (Murphy, 1990).
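The amortization mechanics described above can be illustrated numerically. A minimal sketch: the per-hire cost figure and the tenures are hypothetical, not taken from the studies cited.

```python
# A minimal sketch of capitalizing one-time HR costs and amortizing them over
# average employee tenure; all dollar figures and tenures are hypothetical.

def annual_hr_charge(one_time_costs: float, avg_tenure_years: float) -> float:
    """Straight-line amortization of capitalized human-capital costs."""
    return one_time_costs / avg_tenure_years

# Assumed $12,000 of recruiting, hiring-bonus, and training costs per hire.
high_turnover_charge = annual_hr_charge(12_000, avg_tenure_years=2)  # short tenure -> higher annual charge
low_turnover_charge = annual_hr_charge(12_000, avg_tenure_years=6)   # long tenure -> lower annual charge
```

The tripled tenure cuts the annual charge to a third, which is the point made in the text: retention directly lowers the amortized cost of human capital.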
However, since reliably estimating brand value remains elusive, that value is not specifically reflected on the company's balance sheet. The lodging industry has made much of the importance of brand value, but it has not been able to unequivocally substantiate the role of the brand in reducing the variance of firm cash flows, and thus in contributing to a lower cost of capital for the firm. Srivastava et al. (1998) provided an analytical example of how successful market-based assets (the term the authors use in lieu of intangibles) lower costs by building superior relationships with customers, enable firms to attain price premiums, and generate competitive barriers (via customer loyalty and switching costs). All these factors lead to the conclusion that a strong brand reduces the uncertainty pertaining to future cash flows, which in turn decreases the return required by investors for the risk they bear by investing in a particular firm. In attempts to value brands in the manufacturing industries, Murphy (1990) cites the use of the following methods:
• Valuation based on the aggregate cost of all marketing, advertising, and research and development expenditures devoted to the brand over a stipulated period.
• Valuation based on the premium pricing of a branded product over a non-branded product.
• Valuation at market value.
• Valuation based on various consumer-related factors such as esteem, recognition, or awareness.
• Valuation based on future earning potential discounted to present-day value.
On further analysis, the investigators rejected these methods because, if brand values were indeed a function of development cost, then failed brands would be attributed high values. In addition, brand valuation based solely on consumer esteem or awareness factors would bear no relationship to commercial reality (Murphy, 1990).
In an effort to link a firm's security returns with brand value, Simon and Sullivan (1993) proposed a technique for estimating the firm's brand equity based on its market value. They estimated the cost of tangible assets and subtracted it from the market capitalization of the firm to obtain the value of intangible assets; as a second step, they attempted to break down the intangible assets into brand and non-brand components. Aaker and Jacobson (1994) utilized the EquiTrend brand quality measure to evaluate the quality of 100 major brands; they examined associations between measures of brand quality and stock returns and reported a positive relationship. According to Murphy (1990), the only logical and consistent way to develop a multiple for brand profit is through the brand strength concept. Brand strength is a composite of six weighted factors: leadership, stability, market, trend, support, and protection. The brand is scored on each of these factors according to different weightings, and the resulting total is known as the "brand strength score." A further addition to the brand strength concept came from Prasad and Dev (2000), who developed a hypothetical brand equity index via customer ratings of the brand using five key brand attributes in two sets of indicators: brand performance and brand awareness. Brand performance was measured by overall satisfaction with the product or service, return intent, price-value perception, and brand preference, while brand awareness was measured as top-of-mind brand recall. Olsen (1996) proposed brand-related value drivers specific to the lodging industry, such as brand dilution and the brand duration ratio. Brand dilution concerns how many new corporate sub-brands must be introduced in order to maintain growth, whereas brand duration deals with what percentage of hotels in the portfolio currently meet the brand standards or promise.
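Murphy's brand strength score is, in effect, a weighted sum over the six factors. The sketch below illustrates the mechanics only: the weights and the 0-100 scores are hypothetical, since Murphy's actual weightings are not given here.

```python
# Sketch of a brand strength score as a weighted composite of Murphy's six
# factors. Weights and scores are hypothetical illustrations.

FACTORS = ("leadership", "stability", "market", "trend", "support", "protection")

def brand_strength(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of factor scores (0-100 scale); weights assumed to sum to 1."""
    return sum(scores[f] * weights[f] for f in FACTORS)

weights = {"leadership": 0.25, "stability": 0.15, "market": 0.10,
           "trend": 0.10, "support": 0.10, "protection": 0.30}
scores = {"leadership": 80, "stability": 70, "market": 60,
          "trend": 65, "support": 75, "protection": 90}
strength = brand_strength(scores, weights)
```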
As a result, it is argued that hospitality companies that possess greater brand strength will be able to achieve a lower cost of equity capital. According to Connolly (1999), one of the greatest issues plaguing the advancement of technology in the hospitality industry is the difficulty of calculating return on investment. Until recently, most technology investment decisions were considered using a support or utility mentality that stems from a manufacturing paradigm. Current policies rely more on faith than on rational business assessment. As a result, the hotel industry is perceived to be lagging behind rival industries in the use of technology (Sangster, 2001). In part this is attributed to the fragmented nature of the hotel business itself; however, it is also believed to be closely related to hoteliers' lack of experience and understanding of technology investments (Sangster, 2001). Connolly further argued that "today's financial models are inadequate for estimating the financial benefits for most of the technology projects under consideration. While the hospitality industry has disciplined models and sufficient history to determine the financial gains or success of opening a new property in a given city, it lacks the same rigorous models and historical data for technology, especially since each technology project is unique. Although this problem is not specific to the hospitality industry, it is particularly problematic since the industry tends to be technologically conservative and unwilling to adopt new technology applications based on the promises of their long-term merits, especially if it cannot quantify the results and calculate a defined payback period. When uncertainty surrounds the investment, when the timing of the cash flows is unpredictable, and when the investment is perceived as risky, owners and investors will most likely channel their investment capital to projects with more certain returns and minimal risk.
Thus, under this thinking, technology will always take a back seat to other organizational priorities and initiatives. Efforts must be made to change this thinking and to develop financial models that can accurately predict and capture the financial benefits derived from technology" (Connolly, 1999, p. iii). Although there are no hard and fast rules to facilitate the valuation of technology investments, it is common knowledge that technology is transforming the way business is conducted in the lodging industry. In particular, the surge in internet usage in the early years of the new millennium brought the issue of capacity control to hotel room inventory holders. Firms that are more adept at utilizing technology to market and sell their perishable product (hotel rooms) may therefore achieve lower variation in their future cash flows, since they are able to retain greater control over pricing. The author acknowledges that the body of literature does not offer a direct causal relationship between the cost of equity capital and technology utilization. However, based on the arguments discussed above, the author contends that firms that invest in technology wisely may achieve a higher average daily rate or RevPAR in their properties, which will in turn decrease the variance in the firm's cash flows. Thus, better utilization of information technology can possibly reduce the uncertainty surrounding the future earnings of the firm. As a result, capital markets will assign a lower risk premium to hospitality firms that successfully utilize and deploy technology in their operations. Guest safety and security topics in the lodging industry range from building safety codes and bacterial contamination of hotel whirlpools to restaurant food safety and hotel crime statistics (Olsen and Merna, 1991).
The need for greater commitment to safety and security in the hospitality industry became evident in 1990, after the San Francisco earthquake and Hurricane Hugo (Olsen and Merna, 1991). The culmination of these and other events sparked an effort by the hotel industry to manage the risk and liability related to guest safety and security. Ray Ellis, then (in 1991) the director of risk management and operations at the American Hotel & Motel Association, contended that after the end of the Gulf War the benefits of increased security for the industry went far beyond intangibles such as peace of mind (Jesitus, 1991). Ellis stressed that improved safety and security would significantly decrease properties' insurance premiums, and thus enable companies to devote more resources to their operations. Although Ellis said that the chances of terrorist attacks on the United States after the Gulf War were fairly remote, he warned that hotels, particularly those serving international markets, should be most wary of arson and bomb threats. The International Hotel and Restaurant Association in 1995 identified safety and security as one of the major forces driving change in the global hospitality industry (Olsen, 1995). With the destruction of the World Trade Center in 2001 and the subsequent terrorist attacks in Bali and Kenya, it is clear that this force has now emerged as a major risk factor for all tourism-related enterprises. In February 2003, the Federal Bureau of Investigation (FBI) alerted its law enforcement partners that "soft targets," such as hotels, can be subject to terrorist attacks (Arena et al., 2003). This report simply reaffirms the argument proposed by Olsen (1995, 2000) that lodging properties situated in areas exposed to terrorist attacks should factor that risk into their cost of capital estimates.
Lodging property executives should therefore apply this risk factor in their future capital investment decisions. In addition, outbreaks of food-borne diseases, occurrences of infectious bacteria on cruise ships, increased crime, and the growing threats of human immunodeficiency virus (HIV) and other viral infections such as severe acute respiratory syndrome (SARS) have created a significant challenge for hospitality managers worldwide. These must be considered important risk variables that will no doubt have an impact on estimates of the cost of capital. Although the factors mentioned above are critical in estimating the cost of capital of a given project, there are no methods that can quantify them and apply them to the cost of equity models; executives are nevertheless advised to consider these industry-specific risk factors before making a capital investment decision. The models covered thus far do not provide any guidance for estimating the cost of equity in a global setting or for multinational projects. To fill this void, academics and practitioners have developed adjustment models to account for differences in the cost of equity across developing and emerging markets. The adjustment models are primarily concerned with whether the emerging markets are segmented from or integrated with the world markets. In a completely segmented market, assets will be priced off the local market return: the local expected return is the product of the local β and the local market risk premium (MRP) (Bekaert and Harvey, 2002). Bekaert and Harvey (2002) developed a modified model after researching 18 emerging markets over the pre-1990 and post-1990 periods and reported that the correlation of the emerging markets with the Morgan Stanley Capital International (MSCI) World Index increased noticeably. For instance, Turkey is one of the countries whose market correlation with the MSCI World Index increased from less than 0.10 to more than 0.35.
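The segmented-versus-integrated distinction can be expressed as two CAPM-style calculations on the same asset. All inputs below are hypothetical; this is a sketch of the pricing logic, not figures from the studies cited.

```python
# Sketch of the segmentation argument: a fully segmented market prices an asset
# off local inputs; an integrated market prices it off world-portfolio inputs
# (the Bekaert-Harvey view). All inputs are hypothetical.

def expected_return(risk_free: float, beta: float, market_risk_premium: float) -> float:
    return risk_free + beta * market_risk_premium

# The same hypothetical asset priced under the two polar assumptions:
r_segmented = expected_return(risk_free=0.12, beta=0.9, market_risk_premium=0.10)   # local beta, local MRP
r_integrated = expected_return(risk_free=0.04, beta=1.3, market_risk_premium=0.05)  # world beta, world MRP
```

The gap between the two results is exactly what the adjustment models in this section try to bridge for partially integrated markets.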
Based on this, Turkey may be considered an integrated capital market, where the expected return is determined by the β with respect to the world market portfolio multiplied by the world risk premium. This is the core argument of the Bekaert-Harvey mixture model (Bekaert and Harvey, 2002). In cases where the integrated-markets assumption does not apply, investment banks and business advisory firms use a method called the "sovereign spread model" (Goldman model). An individual stock is regressed against the Standard & Poor's 500 stock price index returns to obtain the risk premium, and an additional factor, the "sovereign spread" (SS), is then added: the spread between the respective country's long-term government bonds denominated in U.S. dollars and the U.S. Treasury bond yield. The bond spread serves as a tool to increase an "unreasonably low" country risk premium (Harvey, 2005). This section offers a practical example for managers estimating the WACC of their projects, and breaks the WACC down into its components in order to assist executives in capital investment decisions. The major components of the WACC estimation are the firm's stock return, the market return, the risk-free rate, the regression coefficients (β, s, and h), SMB, HML, the equity market risk premium (EMRP, which is r_m − r_f), the capital structure (proportions of debt and equity), the corporate tax rate, and the cost of borrowed debt. If you are an executive of a company that is not publicly traded, you have two options for estimating the cost of equity: use the industry average cost of equity, or locate two or three comparable firms that compete in the same line of business and estimate their cost of equity.
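The sovereign spread (Goldman) model amounts to a CAPM-style estimate plus a bond-spread add-on. A minimal sketch with hypothetical inputs; the regression step that would produce the β against the S&P 500 is omitted here.

```python
# Sketch of the sovereign spread (Goldman) model: a premium from regressing the
# stock against the S&P 500, plus the US-dollar sovereign bond spread.
# All inputs are hypothetical.

def goldman_cost_of_equity(risk_free: float, beta_vs_sp500: float,
                           us_equity_premium: float, sovereign_spread: float) -> float:
    return risk_free + beta_vs_sp500 * us_equity_premium + sovereign_spread

# Hypothetical emerging-market stock: beta 1.1 vs the S&P 500, 5% US equity
# premium, 3% spread of the country's dollar bonds over US Treasuries.
ke = goldman_cost_of_equity(risk_free=0.04, beta_vs_sp500=1.1,
                            us_equity_premium=0.05, sovereign_spread=0.03)
```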
However, even if you are an executive of a large, publicly traded restaurant corporation, it is still recommended that you estimate the cost of equity for the entire restaurant industry, because the standard errors of the regression coefficients for a single firm are fairly high, which decreases the reliability of those coefficients. My past research experience has shown me that using a single firm can at times create a situation in which the cost of equity cannot even be estimated; more often than not, I obtained distressing results when running regressions for small or medium-size hospitality firms. As a result, in the practical example I will estimate the restaurant industry's cost of equity. Since the cost of equity calculation may be fairly complex for someone unfamiliar with data analysis, I offer a step-by-step procedure to clarify the process. Step 1: Obtain 5-year monthly stock returns for your company/industry and the market. Ideally, you need 5 years of monthly stock return data for your firm along with the 5-year market return. Selecting the best index of all traded assets in the world is a challenging and sometimes controversial issue; based on seminal studies in financial management, the market index that yields the most reliable results in the United States is the Center for Research in Security Prices value-weighted (CRSPVW) index housed at the University of Chicago. Both your company's stock return and the market return should be used as excess returns (i.e., the return less the risk-free rate, taken as the 1-month Treasury bill rate) in order to measure the cost of equity in real units (i.e., after accounting for inflation). For the reasons mentioned before, I will estimate the U.S. restaurant industry's cost of equity and leave it to restaurant industry executives to adjust this value to the specific projects at hand.
To observe the accuracy of the cost of equity models, we estimate the restaurant industry's cost of equity using both the CAPM and the FF model. The observation period of this example is 2000 to 2004; a longer period was not selected because the values of β and the other variables become unstable over extended periods. The sample is developed from the Nation's Restaurant News (NRN) index, which comprises 81 restaurant firms. Executives who are not familiar with building stock portfolios can alternatively use the monthly returns of hospitality indices for the lodging and restaurant industries from data providers such as Yahoo! Finance or the Wall Street Journal, or from industry publications such as NRN. Step 2: Estimate β and the Fama-French factor coefficients. The CAPM's β can be computed by regressing the excess stock return of a firm on the excess market return. The monthly returns for the FF factors (SMB and HML) can be retrieved from the Eventus database housed at the Wharton School of the University of Pennsylvania or from Kenneth French's website at Dartmouth College. By regressing the portfolio's excess returns jointly on the market, SMB, and HML returns, you obtain the "s" and "h" coefficients that can then be inserted into the equation to estimate the cost of equity. In our practical example, the results indicate that the FF model explains more than half (51.8%) of the variation in the returns of the NRN index. In addition, the FF model produces a significant R² change over the CAPM: the two FF variables (SMB and HML) explained variance over and above the CAPM, accounting for an extra 19.6% of the variation in restaurant industry stock returns. The analysis at the variable level indicates that the market index variable (β) and HML are significant at the 0.01 level (see Table 6.1).
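Step 2 can be sketched with ordinary least squares. The 60 monthly observations below are simulated stand-ins for CRSP and Kenneth French data, with arbitrary "true" loadings (β = 1.1, s = 0.2, h = 0.5); real inputs would come from the sources named above.

```python
# Sketch of Step 2: OLS regressions of a portfolio's excess returns on the
# market factor (CAPM) and on market + SMB + HML (Fama-French).
# All return series are simulated; loadings 1.1 / 0.2 / 0.5 are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 60                                    # five years of monthly data
mkt = rng.normal(0.005, 0.04, n)          # market excess return
smb = rng.normal(0.002, 0.03, n)          # size factor return
hml = rng.normal(0.003, 0.03, n)          # value factor return
port = 1.1 * mkt + 0.2 * smb + 0.5 * hml + rng.normal(0.0, 0.01, n)

def ols(y, factors):
    """Least-squares coefficients, intercept in position 0."""
    X = np.column_stack([np.ones(len(y))] + factors)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

beta_capm = ols(port, [mkt])[1]                    # CAPM beta
_, beta_ff, s_coef, h_coef = ols(port, [mkt, smb, hml])  # FF beta, s, h
```

With only 60 observations the recovered coefficients wobble around the true loadings, which is the point the text makes about large standard errors for single firms.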
However, SMB was not significant at the 0.05 level, which means that the size factor does not affect restaurant industry stock returns when controlling for β and HML. In practice, this means that the restaurant industry portfolio behaves like a large-company stock, and therefore there is no size premium in the overall cost of equity for the restaurant industry. Bear in mind that if you are an executive of a small restaurant company, there is a high possibility that your stock returns will carry a size premium. Step 3: The risk-free rate and the market, size, and distress premiums. There are certain rules of thumb that executives should be aware of before inserting the regression coefficients into the cost of equity calculation. First, there are two risk-free rates (r_f) in the CAPM and FF models. The first r_f represents the risk-free return that a firm needs to exceed to compensate its investors for the risk they undertake. The second r_f should ideally match the life of the asset: if the asset in the project is expected to last at least 10 years, then the investor/executive should use a 10-year government bond as the risk-free rate in obtaining the MRP (r_m − r_f). Another important issue is calculating the market, size, and distress premiums. Executives/investors may face challenges when the 5-year MRP (which equals r_m − r_f) is negative or extremely low, or when the size premium (SMB) and distress premium (HML) figures are negative. In these cases, I recommend that executives/investors use the long-term equity premium (r_m − r_f) figure of 5% (Siegel, 1998). I examined rolling multi-year windows (1992-2001, 1993-2002, 1994-2003, and so on) up to 2006 and verified that in all instances the SMB and HML premiums were positive.
Step 4: Solve the cost of equity equation. Since the market index (VWCRSP) has a very low return (0.21%) for the 5-year period, I will use the long-term equity premium of 5% (Siegel, 1998). Next, substituting the regression coefficients from Table 6.1 into the two equations yields the industry cost of equity estimates. As the results show, the restaurant industry's cost of equity is considerably higher when estimated using the FF model. In basic terms, this means a hypothetical investor would expect a return of 18% from the U.S. restaurant industry in order to invest his/her funds in a U.S. restaurant portfolio. However, if a restaurant executive believes that 18% is a fairly high rate of return and his/her restaurant company does not have the same risk profile as the overall U.S. restaurant industry, he/she may elect to use the average of the CAPM and FF estimates, which is around 12%. Next, a restaurant executive may adjust the rate for his/her firm's project by considering whether the project will be riskier than the restaurant industry's expected return; here one should weigh factors such as competition, the life of the project, and events that may affect the project's risk through the forces driving change in the firm's external (e.g., economic, political, technological) and internal (e.g., industry, local) environments. The next step in estimating the cost of capital is to estimate the cost of debt. Unlike the cost of equity, the cost of debt does not require consideration of the industry-average cost of debt, because in simple terms the cost of debt denotes the interest rate at which a given company can borrow; a company can therefore calculate the cost of debt for a given project in a relatively simple manner. The situation is a little more complex when a corporation has multiple projects to invest in and has to estimate its corporate cost of debt.
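Step 4 amounts to substituting the coefficients into the two equations. In the sketch below, the β, s, h coefficients and the SMB/HML premiums are illustrative stand-ins for the Table 6.1 estimates (they are not the chapter's actual values); only the 5% long-term equity premium follows the text.

```python
# Step 4 as code: plug coefficients into the CAPM and Fama-French equations.
# Coefficient values and factor premiums are hypothetical stand-ins.

def capm(rf: float, beta: float, emrp: float) -> float:
    return rf + beta * emrp

def fama_french(rf: float, beta: float, emrp: float,
                s: float, smb: float, h: float, hml: float) -> float:
    return rf + beta * emrp + s * smb + h * hml

rf, emrp = 0.02, 0.05   # hypothetical risk-free rate; Siegel's 5% long-term premium
ke_capm = capm(rf, beta=0.9, emrp=emrp)
ke_ff = fama_french(rf, beta=0.9, emrp=emrp, s=0.1, smb=0.03, h=1.0, hml=0.045)
ke_avg = (ke_capm + ke_ff) / 2   # the averaging step described in the text
```

As in the chapter's example, the FF estimate exceeds the CAPM estimate because the size and distress premiums enter additively.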
This is because some of the projects may be expansion projects already financed by loans obtained in the past. Consequently, executives need to average the interest rates on the outstanding debt related to such a project and also consider the rate at which the company can borrow new funds. In this particular example, we will assume that a hypothetical company plans to issue bonds that mature in 10 years and will also secure a 10-year loan to finance a portion of the project, with the bond issuance and the loan contributing equally (50% each). Let us assume that the 10-year bonds have an expected yield-to-maturity of 8% (based on the company's present bond rating), that the rate on the 10-year bank loan is 7%, and that the corporate tax rate is 38%. The cost of debt can then be calculated as follows: (0.5 × 8% + 0.5 × 7%) × (1 − 0.38) = 7.5% × 0.62 = 4.65%. Before entering the values from the previous sections, we assume that the current project will be financed with 60% equity and 40% debt. We use the average cost of equity estimate (12.25%) and the cost of debt (4.65%) obtained above. Consequently, the weighted average cost of capital for this project is: WACC = 12.25% × 0.6 + 4.65% × 0.4 = 7.35% + 1.86% = 9.21%. It should be noted that the executive of this hypothetical firm needs to adjust this figure if the project carries any specific risk, such as political risk, divisional risk (if the firm has multiple divisions), risk of early termination, stiff competition, and so on. This section considers a case in which the cost of equity needs to be estimated for an international project.
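The debt and WACC arithmetic of this example can be checked in a few lines, using the figures stated in the text (8% bonds and 7% loan mixed 50/50, 38% tax rate, 60/40 equity/debt financing, 12.25% average cost of equity):

```python
# The worked WACC example, reproduced from the figures stated in the text.

def wacc(cost_of_equity: float, after_tax_cost_of_debt: float, equity_weight: float) -> float:
    return cost_of_equity * equity_weight + after_tax_cost_of_debt * (1.0 - equity_weight)

kd = (0.5 * 0.08 + 0.5 * 0.07) * (1.0 - 0.38)   # 7.5% pre-tax -> 4.65% after tax
w = wacc(cost_of_equity=0.1225, after_tax_cost_of_debt=kd, equity_weight=0.6)
```

Any project-specific risk adjustment described in the text would be layered on top of this base rate.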
Here I use a hypothetical scenario in which a Thai investor plans to make a hotel investment in Turkey. In line with the suggestions of Annin (1997) and Barad and McDowell (2002), a minimum of 36 months of stock market trading is the criterion for a hospitality firm to be included in the Turkish tourism index. In addition, the CRSPVW index is used as the market portfolio index for the United States, in congruence with previous seminal studies of asset pricing models (Fama and French, 1992, 1997; Jagannathan and Wang, 1996), while the IMKB Ulusal 100 index is utilized as the market portfolio for Turkey. β is computed by regressing the excess returns of Four Seasons and of the Turkish tourism index over the respective excess market returns; both variables are therefore analysed in real units (i.e., after subtracting inflation). The excess market return (MRP) for the United States is computed by subtracting the 1-month Treasury bill rate from the monthly VWCRSP index return; the MRP for Turkey is calculated by subtracting the Turkish government's Treasury bill rate from the monthly ISE Ulusal 100 index return. The data for the five APT variables are obtained from the Global Insight database, and the APT variables are calculated as in Chen et al. (1986). EINF is estimated following the method of Fama and Gibbons (1984). The country risk premium is adapted from Aswath Damodaran at New York University. Damodaran (2006) explains the estimation procedure as follows: "To estimate the long term country risk premium, I start with the country rating (from Moody's: www.moodys.com) and estimate the default spread for that rating (US corporate and country bonds) over the treasury bond rate. This becomes a measure of the added country risk premium for that country. I add this default spread to the historical risk premium for a mature equity market (estimated from US historical data) to estimate the total risk premium." Both indirect and direct approaches are used to estimate the expected return on the investment.
in this method, i first compute the expected rate of return for the u.s. stock (in this case four seasons) by using the average of the estimates from the capm and the apt. then i adjust for the country risks of turkey and thailand based on moody's country risk ratings as reported by damodaran (2006). this method assumes that the turkish stock market is integrated, and thus that using the u.s. market indices to estimate the cost of equity for four seasons is equivalent to using the ulusal 100 market index for the turkish tourism portfolio. first, i run a regression of the monthly returns of four seasons over the crsp value-weighted (vwcrsp) returns for the 2001-2005 period. the results show that the β for four seasons is 1.6. next, the 5-year annualized return for the crsp was calculated in order to estimate the mrp; the 5-year historical return for the crsp was 4.3%. the risk-free rate for the 2001-2005 period was 2.16%. as a result, the cost of equity estimate based on the capm for four seasons is as follows: e(r) = 2.16% + 1.6 × (4.3% − 2.16%) ≈ 5.4%. in an effort to have less biased estimates, i also use the five apt variables (chen et al., 1986) to calculate the expected return for four seasons. the results reveal that, among the five apt variables, only the default risk variable (upr) is significant at the 0.05 level. however, it is not feasible to use this variable to estimate the expected return, because the regression coefficient for upr is a negative number; as a result, four seasons would have a negative expected return based on the apt. as a consequence, i elect not to use the apt results in the final stage of the direct approach, since the results of the apt are in conflict with contemporary financial theories. therefore, i use the capm estimate of 5.4% and adjust it for the country risk of turkey and thailand. according to damodaran (2006), the historical risk premium for the united states is 4.80%.
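the capm estimate above can be reproduced with a small helper. the exact figure depends on rounding conventions; the text reports it as 5.4%. the function name is illustrative.

```python
def capm_expected_return(risk_free, beta, market_return):
    """CAPM: E(R) = rf + beta * (E(Rm) - rf)."""
    return risk_free + beta * (market_return - risk_free)

# Figures from the example: rf = 2.16%, beta = 1.6, 5-year CRSP return = 4.3%.
four_seasons = capm_expected_return(0.0216, 1.6, 0.043)  # ~0.056, reported as 5.4%
```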
turkey's country risk premium is 5.60% above the united states value and that for thailand is 1.65% above the risk premium for the united states. this denotes that turkey's country risk premium is 3.95% over that of thailand. these figures result in an expected return of 9.35% (5.4% + 3.95%) for the thai entrepreneur who is undertaking an equity investment in a hotel in turkey. in the direct approach, i estimate the nominal required rate of return for the portfolio of turkish tourism and hospitality stocks. as a next step, i adjust for the sovereign spreads of turkey and thailand, as it is assumed that the thai investor will repatriate the returns from the investment to his/her home country. in this method, i regress the monthly return of the turkish tourism index over the return of the ise. the β for the tourism index was merely 0.17. the 5-year average for the risk-free rate (the turkish government's tb rate) for the 2001-2005 period was 46.4%. the annualized return of the market index (ise) for the 2001-2005 period was 37.7%. the expected return for the tourism portfolio was calculated by applying the capm, with the following result: e(r) = 37.7% + 0.17 × (46.4% − 37.7%) = 37.7% + 1.5% = 39.2%. the next step entails the addition of the sovereign spread between thailand and turkey to arrive at the estimate of the cost of equity capital for the thai investor. the sovereign spreads are obtained from fuentes and godoy (2005). the spread for turkey was 11.875% and that of thailand 7.750%. based on these figures, the cost of equity from the direct approach was 43.3% (39.2% + 4.1%). as can be seen from both examples of cost of equity estimation (the united states and international), the expected returns (costs of equity) varied widely. in the example of the united states, the use of the capm resulted in a cost of equity that was fairly low (less than 6%). it is worth asking: would a given investor invest in a u.s. restaurant portfolio of stocks for less than 6% a year?
the answer would probably be "no." however, if one elects to use ff as the main cost of equity model, then the possibility of obtaining more relevant results is likely to increase. as can be seen in this example, the ff model yielded a fairly logical cost of equity, one that far exceeds the historical equity premium for the united states. for the international example, one of the main reasons for the stark difference in cost of equity estimates between the two approaches (direct and indirect) is the high historical inflation in turkey. this is demonstrated by the gap in the tb rates for this country (82.3% for 2001 and 16.3% for 2005). hence, if a hypothetical investor elects to use the "going rate" (16.3%) in 2005, then the new expected return for the turkish tourism portfolio would be less than half the original estimate of 43.3%. another challenge in the direct approach for international cost of equity estimations is the low β estimate for the turkish tourism portfolio (0.17). does this mean that the tourism portfolio is only about one-fifth as risky as the overall ise index? what if the real risk of tourism stocks is twice that of the market? (this is quite likely, as the β for four seasons in the united states was 1.6.) if that is the case, then the thai investor needs to require a rate of return of more than 50% in thai currency. how can the investor hedge his investments against the large swings in the cost of equity estimates? as the results have indicated thus far, cost of equity estimations for hospitality investments in emerging and developed markets are beset with uncertainty. the main shortcomings stem from the challenge of applying seminal models such as the capm, ff, and the apt. a second set of challenges arises when countries such as turkey have high historical rates of inflation but are now entering a more stabilized period of fiscal reforms.
thus, should an investor use the historical data or try to forecast the future interest rates in turkey? although the practical examples provided some answers to these questions, a few more questions are left for future research. hence, i suggest two interim solutions for this cost of equity conundrum in the emerging markets: (1) investors and academics should focus solely on the future cash flows of the project, or (2) use simulations such as monte carlo to create multiple scenarios that approximate the investment realities of the emerging markets. otherwise, the expected return remains a "gut feeling" estimate for foreign investors in emerging markets.
the financial information content of perceived quality
why do firms reduce business risk?
fama-french and small company cost of equity calculations
preparations for possible attacks gear up: new flight restrictions planned around washington
ability, moral hazard, firm size, and diversification
technical analysis of the size premium. business valuation alert
capturing industry risk in a buildup model
use of macroeconomic variables to evaluate selected hospitality stock returns in the u
brand values and capital market valuation
research in emerging markets finance: looking into the future
modern financial theory, corporate strategy and public policy: three conundrums
economic value management: applications and techniques
what is your irr on human capital?
towards a strategic theory of risk premium: moving beyond capm
the impact of macroeconomic and non-macroeconomic forces on hotel stock returns
economic forces and the stock market
the restaurant industry, business cycles, strategies, financial practices, economic indicators, and forecasting. unpublished dissertation
understanding information technology investment decision making in the context of hotel global distribution systems: a multiple-case study.
unpublished dissertation
capital market equilibrium with transaction costs
valuation: measuring and managing the value of companies
country default spreads and risk premiums
increasing returns: a theoretical explanation for the demise of beta
the cross section of expected stock returns
common risk factors in the returns on stocks and bonds
size and book-to-market factors in earnings and returns
industry costs of equity
risk, return and equilibrium: empirical tests
a comparison of inflation forecasts
analysis of pure play technique in the hospitality industry
sovereign spreads in emerging markets: a principal components analysis
principles of managerial finance
introduction to investment theory (hyper textbook). retrieved
the investment, financing, and valuation of the corporation
the theory and practice of corporate finance: evidence from the field
twelve ways to calculate the international cost of capital
tax attributes as determinants of shareholder gains in corporate acquisitions
vertical integration and risk reduction
techniques of financial analysis: a guide to value creation
conditional capm and cross section of expected returns
valuation in emerging markets
march 11. safety and security: risk management, threat of terrorism
top hoteliers' concerns in 1991
the value creation index: quantifying intangible value
the effects of management buyouts on operations and value
systematic risk, total risk and size as determinants of stock market returns
intangibles: management, measurement, and reporting
the valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets
extending modern portfolio theory into the domain of corporate diversification: does it apply?
risk, strategy, and finance: unifying two world views (editorial).
long range planning
cost of equity conundrum in the lodging industry: a conceptual framework
portfolio selection
portfolio selection: efficient diversification of investments
incentives for diversification and the structure of the conglomerate firm
assessing the value of brands
a proposed structure for obtaining human resource intangible value in restaurant organizations using economic value added
the real cost of capital: a business field guide to better financial decisions
into the new millennium: the iha white paper on the global hospitality industry: events shaping the future of the industry
global hotel finance--the future. one-day program co-sponsored by hong kong shanghai bank corporation, deloitte & touche consulting group and richard ellis international property consultants
leading hospitality into the age of excellence: competition and vision in the multinational hotel industry
march 11. trends in safety & security. hotel and motel management
strategic management in the hospitality industry
managing brand equity: a customer-centric framework for assessing performance
model estimates financial impact of guest satisfaction efforts
cost of capital: estimation and applications
a new empirical perspective on the capm
on the cross-sectional relation between expected returns and betas
the capital asset pricing model and the market model
persuasive evidence of market inefficiency
the arbitrage theory of capital asset pricing
technology: the importance of technology in the hotel industry
capital asset prices: a theory of market equilibrium under conditions of risk
an empirical analysis of anomalies in the relationship between earnings' yield and returns of common stocks: the case of lodging and hotel firms
stocks for the long run
the measurement and determinants of brand equity: a financial approach
corporate ownership structure and performance: the case of management buyouts
market-based assets and shareholder value: a framework for analysis
passing the baton:
managing the process of ceo succession

key: cord-253135-0tun7fjk authors: robin, charlotte; bettridge, judy; mcmaster, fiona title: zoonotic disease risk perceptions in the british veterinary profession date: 2017-01-01 journal: prev vet med doi: 10.1016/j.prevetmed.2016.11.015 sha: doc_id: 253135 cord_uid: 0tun7fjk in human and veterinary medicine, reducing the risk of occupationally-acquired infections relies on effective infection prevention and control practices (ipcs). in veterinary medicine, zoonoses present a risk to practitioners, yet little is known about how these risks are understood and how this translates into health protective behaviour. this study aimed to explore risk perceptions within the british veterinary profession and identify motivators and barriers to compliance with ipcs. a cross-sectional study was conducted using veterinary practices registered with the royal college of veterinary surgeons. here we demonstrate that compliance with ipcs is influenced by more than just knowledge and experience, and understanding of risk is complex and multifactorial. out of 252 respondents, the majority were not concerned about the risk of zoonoses (57.5%); however, a considerable proportion (34.9%) was. overall, 44.0% of respondents reported contracting a confirmed or suspected zoonosis, most frequently dermatophytosis (58.6%). in veterinary professionals who had previous experience of managing zoonotic cases, time or financial constraints and a concern for adverse animal reactions were not perceived as barriers to use of personal protective equipment (ppe). for those working in large animal practice, the most significant motivator for using ppe was concerns over liability. when assessing responses to a range of different "infection control attitudes", veterinary nurses tended to have a more positive perspective, compared with veterinary surgeons.
our results demonstrate that ipcs are not always adhered to, and factors influencing motivators and barriers to compliance are not simply based on knowledge and experience. educating veterinary professionals may help improve compliance to a certain extent; however, increased knowledge does not necessarily equate to an increase in risk-mitigating behaviour. this highlights that the construction of risk is complex and circumstance-specific, and to get a real grasp on compliance with ipcs, this construction needs to be explored in more depth. veterinary professionals can encounter a variety of occupational health risks. a high prevalence of injury has been reported, predominantly in relation to large animal work (beva, 2014; fritschi et al., 2006; lucas et al., 2009), dog and cat bites and/or scratches and scalpel or needle stick injuries (nienhaus et al., 2005; phillips et al., 2000; van soest and fritschi, 2004). in addition to the risk of injury, the profession is also at risk of other occupational hazards including exposure to chemicals, car accidents (phillips et al., 2000) and infectious diseases from zoonotic pathogens (constable and harrington, 1982; dowd et al., 2013; epp and waldner, 2012; gummow, 2003; jackson and villarroel, 2012; lipton et al., 2008; weese et al., 2002). work days lost because of zoonotic infections are less frequent than days lost to injury (phillips et al., 2000); however, because of the potential seriousness of some zoonotic infections and increasing reports of occupationally-acquired antimicrobial resistant bacteria in veterinary professionals (cuny and witte, 2016; groves et al., 2016; hanselman et al., 2006; jordan et al., 2011; weese et al., 2006), zoonotic risk in the veterinary profession deserves attention. there are no recent data on the risk of zoonotic infections in the british veterinary profession.
one study published over 30 years ago estimated that 64.1% of veterinary surgeons working for government agencies reported one or more zoonotic infections during their career (constable and harrington, 1982). research from veterinary populations overseas indicates a substantial risk of infection within the profession, with the incidence of reported infections during a career ranging from 28% in the united states (lipton et al., 2008), 45% in australia (dowd et al., 2013) and 47.2% in canada (jackson and villarroel, 2012) to 64% in south africa (gummow, 2003). in both medical and veterinary professions, infection prevention and control (ipc) practices are fundamental to reduce the risk of healthcare-associated infections in patients, as well as occupationally-acquired infections in practitioners. in the united kingdom (uk), universal and standard precautions are recommended by the department of health. in human medicine, research has highlighted sub-optimal compliance with ipc practices. in one uk study, observed hand hygiene adherence in nurses was 20.4% and 60.1% before and after contact with patients, respectively. in doctors in the same study, compliance was much lower, at 8.1% and 51.4% before and after patient contact (jenner et al., 2006). non-adherence to guidelines is a global issue, with reported hand hygiene compliance rates of 58% in hospitals in finland (laurikainen et al., 2015), 41.2% in an infectious diseases care unit in france (boudjema et al., 2016) and 40% in paediatric hospitals in new york (løyland et al., 2016). in veterinary medicine in the uk, there are no enforceable national policies for ipc practices.
for veterinary practices in the royal college of veterinary surgeons (rcvs) accreditation scheme, guidelines are available and specific standards have to be met to retain accreditation status. only 51% of practices are members of the accreditation scheme (rcvs, 2014) and, although guidelines and recommendations are available for non-members, they tend to be practice-specific. additionally, the emphasis is on patient, rather than practitioner, health. other countries have developed national standards for ipc in veterinary medicine, specifically related to occupationally-acquired zoonotic infections. these include the australian veterinary association guidelines for veterinary personal biosecurity and the compendium of veterinary standard precautions for zoonotic disease prevention in veterinary personnel, developed by the national association of state public health veterinarians (nasphv) in the united states. even when national guidelines exist, not all practices have ipc programmes (lipton et al., 2008; murphy et al., 2010). where effective procedures and resources are available, their effectiveness is dependent on uptake (dowd et al., 2013). decision-making surrounding ipc practices will depend on a number of different factors. there are few data available focussing on awareness and perceptions of zoonotic diseases within the veterinary profession in the uk; however, from studies that have been conducted overseas it appears that awareness is poor and compliance with ipc guidelines is low (dowd et al., 2013; lipton et al., 2008; nakamura et al., 2012; wright et al., 2008). in a survey of american veterinary medical association-registered veterinary surgeons, under half (48.4%) of small animal vets washed or sanitised their hands between patients, and this proportion was even lower in large animal and equine vets (18.2% for both).
in addition, only a small proportion of large animal and equine vets washed their hands before eating, drinking or smoking at work (31.1% and 28.1%, respectively), compared with 55.2% of small animal vets. veterinary surgeons who worked in a practice that had no formal infection control policy had lower awareness, as did male veterinary surgeons (wright et al., 2008). in a smaller survey of american veterinary professionals, although 77% of respondents agreed it was important for veterinary surgeons to inform clients about the risk of zoonotic disease transmission, only 43% reported that they initiated these discussions with clients (lipton et al., 2008). in a study of veterinary technicians and support staff, only 41.7% reported washing their hands regularly between patients (nakamura et al., 2012). in a sample of australian veterinary surgeons, 43.4% wore no personal protective equipment (ppe) when handling clinically sick animals, and the majority (67.4%) wore inadequate ppe when handling animal faeces and urine (dowd et al., 2013). in the veterinary profession, the dichotomy between professional status and an increased risk of infection has been viewed as counterintuitive (baker and gray, 2009), as it might be expected that a comprehensive understanding of zoonotic disease risks would manifest in more risk-averse behaviour. in both human and veterinary medicine, education has been identified as a key intervention to increase compliance (dowd et al., 2013; ward, 2011); however, good knowledge does not necessarily lead to good practice (jackson et al., 2014). compliance is influenced by many factors, including motivation, intention, social pressure and how individuals understand or 'construct' risk (jackson et al., 2014). understanding of risk, and why people engage in risk-mitigating behaviour (or not), is complex, and perceived knowledge of the disease is only one factor that should be considered.
a better understanding of how veterinary professionals in britain understand the risks surrounding zoonotic diseases will aid the development of effective and sustainable ipc practices, reducing the risk of zoonotic infections within the profession. this paper examines how the veterinary profession in britain understands zoonotic risk, and the motivators and barriers for using ppe. a cross-sectional study was conducted from october to december 2014; the sampling frame was all 3416 veterinary practices in great britain registered in the rcvs database. the rcvs database holds information on registered veterinary businesses, including private practices, referral hospitals, veterinary teaching hospitals and veterinary individuals. sample size calculations indicated that information from 348 veterinary practices was required for an expected prevalence of 50%, with a precision of 5%. assuming a 30% response rate, 1000 practices were selected from the rcvs database by systematically selecting every third practice. the principal veterinary surgeon and head nurse were identified at each practice using the rcvs register and sent a postal questionnaire. a total of 2000 questionnaires were posted to 1000 veterinary practices. for non-responders, reminder emails were sent out from four weeks after the initial posting, and a second reminder, including an electronic copy of the questionnaire, was sent out a further four weeks after the first reminder to any remaining non-responders. the questionnaire was developed based on a similar study of australian veterinary professionals (dowd et al., 2013) and a larger, multi-country risk perception study on severe acute respiratory syndrome (de zwart et al., 2009). the questionnaire was an a4 8-page booklet (available in supplementary information), containing four sections covering veterinary qualifications and experience, disease risk perceptions, infection control practices and management of zoonotic diseases.
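the sample size calculation described above is consistent with cochran's formula for a proportion, adjusted for a finite population of 3416 practices. a sketch, assuming that approach (the study's exact rounding may differ slightly from the 348 reported):

```python
import math

def required_sample_size(prevalence, precision, population=None, z=1.96):
    """Cochran's formula for a proportion, with an optional finite-population correction."""
    n0 = z ** 2 * prevalence * (1 - prevalence) / precision ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

n = required_sample_size(0.5, 0.05, population=3416)  # close to the 348 practices cited
```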
the questionnaire included both closed and open-ended questions and was piloted on a small convenience sample of veterinary surgeons, but not veterinary nurses, prior to being finalised. questionnaires were designed in automatic data capture software (cardiff teleform v 9.0), which allowed completed questionnaires to be scanned and verified and the data imported directly into a custom-designed spreadsheet (microsoft excel, redmond, wa, usa). the clinical scenarios for which respondents were asked to assess the risk included contact with animal faeces/urine; contact with animal blood; contact with animal saliva or other bodily fluids; performing post mortem examinations; assisting conception and parturition in animals; contact with healthy animals; contact with clinically sick animals; and accidental injury. descriptive statistics were performed using commercial software (ibm spss version 22, armonk, ny, usa). proportions were calculated for categorical data; medians and interquartile ranges (iqr) for continuous data. a "risk perception score" was calculated as the mean value of the scores (high risk = 3; medium risk = 2; low risk = 1), based on the participant's opinion of the risk (high, medium or low) of contracting a zoonosis in the eight different clinical scenarios detailed in fig. 2. scores for ppe use in five clinical scenarios were calculated by comparing reported use of gloves, masks and gowns/overalls to the recommendations in the nasphv guidelines, and pearson's correlation coefficient was used to compare the scores between scenarios. these guidelines were chosen because no uk equivalent that applies across all veterinary species could be found, but the nasphv standards are likely to be considered reasonable levels of protection in the uk situation.
the clinical scenarios included handling healthy animals (no specific protection advised: possible scores 0-3); handling excreta and managing dermatology cases (gloves and protective outerwear advised: possible scores −2 to 1); and performing post mortems and performing dental procedures (gloves, coveralls and masks advised: possible scores −3 to 0). a score of 0 indicated compliance, <0 indicated that less ppe than recommended was used and >0 that more ppe than recommended was used. redundancy analysis (rda) was used to determine if demographic or other factors accounted for any observed clustering of the motivators or barriers to use of ppe, or for the reported ppe use in different scenarios. redundancy analysis is a form of multivariate analysis that combines principal component analysis with regression, to identify significant explanatory variables. this was performed using the r package "vegan" (oksanen et al., 2016), based on the methods described by borcard et al. (2011). the adjusted r-squared value was used to test whether the inclusion of explanatory variables was a significantly better fit than the null model, and a forward selection process was used to select the significant variables that explained the greatest proportion of the variance in the response data (borcard et al., 2011). permutation tests were used to test how many rda axes explained a significant proportion of the variation. barriers and motivators to use of ppe were assessed by asking respondents to grade the influence of certain factors on their use of ppe (see fig. 4 for a full description of the barriers and motivators). the response options "not at all", "a little" and "extremely" were ranked as 0, 1 and 2, respectively. redundancy analyses, as described above, were used to determine if demographic or other factors accounted for any observed clustering of a) barriers or b) motivators to use of ppe.
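the two scoring schemes described above can be sketched as follows. this is an illustration of the scoring logic only, not the authors' code, and the example inputs are made up rather than taken from the survey data:

```python
RISK_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def risk_perception_score(ratings):
    """Mean of the per-scenario ratings (high = 3, medium = 2, low = 1)."""
    return sum(RISK_WEIGHTS[r] for r in ratings) / len(ratings)

def ppe_compliance_score(items_worn, items_recommended):
    """0 = compliant; negative = under-protected; positive = over-protected."""
    return len(items_worn) - len(items_recommended)

# Illustrative inputs: four 'low' and four 'medium' ratings give 1.5 (the
# median reported); gloves alone for a dental procedure scores -2.
score = risk_perception_score(["low"] * 4 + ["medium"] * 4)
dental = ppe_compliance_score({"gloves"}, {"gloves", "mask", "coverall"})
```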
explanatory variables investigated were gender, age, length of time in practice, position (veterinary surgeon or nurse; owner or employee); type(s) of veterinary work undertaken (small, large/equine or exotics/wildlife); previous experience of treating a zoonotic case; level of concern over risk (for themselves or clients). additional explanatory variables investigated in the redundancy analysis for reported ppe use were the barrier and motivator scores and the attitude and belief scores (described below). participants were also asked about their level of agreement with certain statements describing their attitudes and beliefs around zoonotic disease risk and ppe use (see fig. 5 for a full description of the statements); the responses "disagree", "agree" and "strongly agree" were scored as −1, 1 and 2, respectively. principal component analysis was used to investigate clustering of these "attitude" statements. as only two axes contributed variation of interest (according to the kaiser-guttman criterion, which compares each axis to the mean of all eigenvalues), the attitude statements were grouped into two subsets; those that contributed principally to pca1 (seven statements) and those that contributed to pca2 (three statements). cronbach's alpha was calculated on these subsets of the attitude statements, using the "psy" package in r (falissard, 2011) , to test whether any of these variables may indicate an underlying latent construct. where correlation was judged to be acceptable or better (cronbach's alpha coefficient >0.7), the principal component scores were used as a proxy measure for this latent construct. potential explanatory variables, including the same demographic variables used for the redundancy analyses, and responses to motivators and barriers, were tested using linear regression modelling. multivariable regression models were fitted using the base and stats packages in r software (r core team, 2015). 
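cronbach's alpha, which the study computed with the r "psy" package, can be sketched from scratch for readers unfamiliar with the statistic; this is a generic implementation under the usual definition, not the authors' code:

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per statement (equal lengths)."""
    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                      # number of statements
    n = len(items[0])                   # number of respondents
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))
```

values above 0.7, as used in the study, are conventionally taken as acceptable internal consistency.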
a manual stepwise selection of variables was performed based on knowledge of expected potential associations and confounders that made biological sense. variables were added one by one to the null model. two-way interactions were tested, and variables or interactions were retained if likelihood ratio tests showed a significant improvement in model fit (p < 0.05). non-significant variables were removed, including variables that later became non-significant when additional variables were added. over the 12-week study period, a total of 252 usable questionnaires were returned from the invited individuals, giving an overall response rate of 12.6%. for a number of questions, there were some missing data; therefore the denominator for all results was 252 unless otherwise stated. a summary of demographic characteristics of the respondents is presented in table 1. the majority of respondents had managed a zoonotic case within the 12 months prior to completing the questionnaire (93.1%; n = 230/247). the most commonly reported infections treated were campylobacter (n = 111), dermatophytosis (n = 99) and sarcoptes scabiei (n = 86). overall, 24.6% (n = 62/248) of respondents reported they had previously contracted at least one confirmed occupationally-acquired episode of zoonotic disease. when including suspected zoonotic diseases, this increased to 44.7% (n = 111/248). the most common zoonotic disease experienced by respondents who reported confirmed or suspected zoonotic infection was dermatophytosis (58.6%; n = 65/111). the relative frequency of reported zoonotic infections (confirmed and suspected) is reported in fig. 1, showing the reported frequency in respondents who had qualified or practised outside of britain, compared with veterinary professionals with exclusively british experience.
overall, the majority (57.5%; n = 145/251) of respondents were not concerned that they or their colleagues would contract an occupationally-acquired zoonotic disease; however, a considerable proportion were (34.9%; n = 88/251). only a small proportion (7.1%; n = 18/251; 95% ci 4.0-10.4) stated they had not thought about the risk of infection. in total, 84.6% (n = 209/247) of respondents agreed or strongly agreed they had a high level of knowledge regarding zoonotic diseases. based on the eight different clinical scenarios respondents were asked to assess, the highest risk situation for zoonotic disease transmission was considered to be accidental injury, such as a needle stick injury, bite or scratch. coming into contact with animal faeces/urine was also considered high risk for zoonotic disease transmission. these scenarios were classified as high risk by 18.3% (n = 46/245) and 17.1% (n = 43/246) of respondents, respectively. the aspect of the job considered to represent the lowest risk of exposure to zoonoses was contact with healthy animals, with 83.3% (n = 210/250) of respondents considering this to involve low risk of exposure to disease (fig. 2). the amalgamated risk perception scores ranged from 1 (all scenarios considered low risk) to 3 (all scenarios considered high risk), with a median of 1.5 (iqr 1.25-1.75). the majority of respondents reported they were aware of their practice having standard operating procedures (sops) related to infection control practices (75.0%; n = 189/236). all workplaces provided ppe for members of staff, although 12.3% did not provide training on how to use it. the majority provided separate eating areas (92.9%; n = 234/247) and restricted access from staff and visitors to patients in isolation (92.5%; n = 225/233). when asked about what level of ppe was used in five different clinical settings, 68.3% (n = 168/246) reported they would not use any specific ppe for handling healthy animals, in line with the nasphv guidelines.
when handling dermatology cases, 23% (n = 56/243) reported using no ppe. only 2.4% (n = 8/331) reported not using any ppe for handling urine or faeces; one respondent did not use any ppe for post mortem examination (0.4%; n = 1/230), and 2% (n = 5/244) did not use any for performing dentistry work. correlation between the ppe scores for the different scenarios was low; the greatest correlation (r = 0.39) was between the scores for handling excreta and for handling dermatology cases. there was no evidence that respondents who wore more ppe than required by the guidelines (i.e. gloves and/or masks) for handling healthy animals would correctly select the appropriate level of ppe (i.e. gloves, masks and a protective coverall) for post mortem or dentistry work. a redundancy analysis indicated that greater ppe use (a higher ppe score) was negatively correlated with a fatalistic attitude for the two higher-risk scenarios. belief that sops acted as a motivating factor to use ppe and agreement that "i consciously consider using ppe in every case i deal with" were positively correlated with greater ppe use in dermatological cases, handling healthy animals and excreta (fig. 3). all respondents indicated that perceived risk would have some effect on their motivation to use ppe, either a little (25.4%; n = 63/248) or extremely (74.6%; n = 186/248). respondents were also strongly motivated by previous experience with similar cases (54.5%; n = 135/248) and a high-profile or recent disease outbreak (53.9%; n = 132/245). few respondents indicated that any of the suggested barriers would have a strong influence as a deterrent to using ppe; safety concerns were most frequently cited, with 7.1% (n = 18) of respondents stating this would be an extreme deterrent to using ppe.
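the scenario-to-scenario correlations reported above (e.g. r = 0.39 between the excreta and dermatology scores) are plain pearson coefficients. a stdlib-only sketch with hypothetical per-respondent scores:

```python
import math

def pearson_r(x, y):
    """pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# hypothetical per-respondent ppe scores for two of the clinical scenarios
excreta = [2, 3, 1, 2, 3, 2, 1, 3]
dermatology = [1, 3, 1, 1, 2, 2, 2, 3]
print(f"r = {pearson_r(excreta, dermatology):.2f}")
```

a low r across scenario pairs, as found here, indicates that respondents' ppe choices were context-specific rather than a fixed personal habit.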
when combining both positive responses (extreme and a little influence), time constraints and safety concerns were the most frequently cited barriers, with 56.0% (n = 139/248) and 56.9% (n = 141/248) of respondents, respectively, indicating these barriers would affect their decision not to use ppe. the potential barriers that most respondents considered to have no influence on their decision to use ppe were negative client perceptions and ppe availability, cited by 78.2% (n = 194/248) and 76.9% (n = 190/247) of respondents, respectively. demographic variables that had significant associations with responses regarding motivators of and barriers towards the use of ppe are illustrated in fig. 4. the explanatory variables in the models were statistically significant; however, they explained only a small amount of the variation in the respondents' perceptions of barriers (adjusted r-square 3.2%) and motivators (adjusted r-square 3.4%). respondents with previous experience of treating a case of zoonotic disease were less likely to regard time or financial constraints, or concern about adverse animal reactions, as a deterrent to using ppe (fig. 4a). veterinary surgeons were more likely than nurses to be deterred from using ppe because of concerns about negative client perceptions (fig. 4a), although positive client perceptions were marginally more likely to act as encouragement for both vets and nurses who reported themselves concerned about zoonotic risk in relation to clients (fig. 4b). those working in large animal practice were more likely to be motivated to use ppe by concerns over liability, and nurses tended to be more motivated than veterinary surgeons by sops and concern over the perceived risk to themselves. respondents were asked to state their level of agreement with 10 "attitude" statements (see fig. 5 for a description of the statements) reflecting different aspects of zoonotic disease risk control in the workplace.
all respondents agreed that using ppe and practising good equipment hygiene was an effective way of reducing the risk of zoonotic disease transmission. the majority thought they had a high level of knowledge regarding zoonoses (84.6%; n = 209/247) and that they were expected to demonstrate rigorous infection control practices (92.7%; n = 229/247). however, 45 respondents (18.2%) stated they just hoped for the best when trying to avoid contracting a zoonotic disease, and 37 (14.9%) were concerned their colleagues would think they were unnecessarily cautious if they used ppe in their workplace. responses to seven of these "attitude" statements tended to cluster together along the first pca axis (fig. 5, statements a to g). cronbach's alpha coefficient for these statements was 0.76, suggesting an acceptable level of internal consistency and a potential underlying latent construct (interpreted here as a "positive attitude" towards ipcs) for these responses. statements h to k, whilst all contributing greater weight to pca axis 2, had an alpha coefficient below 0.5 and were therefore evaluated individually. respondents' scores on the first principal component axis (fig. 5) were used as a proxy for this potential underlying "positive attitude" towards zoonotic disease risk reduction, and a multivariable linear regression model was used to investigate potential explanatory factors. the only demographic variable that significantly altered model fit was profession, with veterinary surgeons tending to score lower than nurses on this "positive attitude". some of the factors identified as motivators and barriers also had a statistically significant association with the outcome. those who agreed that sops, positive client perceptions and risk to themselves motivated them to use ppe scored more highly, whereas those who regarded time constraints as a barrier to ppe use tended to have lower positive attitude scores (table 2).
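cronbach's alpha, used above to assess the internal consistency of the attitude statements, has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). a sketch with invented likert responses (the 0.76 reported in the text comes from the real data, not this toy example):

```python
import statistics

def cronbach_alpha(items):
    """cronbach's alpha for a scale given one list of scores per item:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum_item_vars / statistics.variance(totals))

# hypothetical 5-point likert responses (one row per statement, one column per respondent)
items = [
    [4, 5, 3, 4, 2, 4],
    [4, 4, 3, 5, 2, 3],
    [5, 4, 2, 4, 3, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

values above roughly 0.7 are conventionally read as acceptable internal consistency, which is why the seven clustered statements (alpha = 0.76) were combined while statements h to k (alpha < 0.5) were analysed individually.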
overall, 18.2% (n = 45/247) of respondents agreed or strongly agreed with the statement, "i just hope for the best when it comes to trying to avoid contracting a zoonotic disease". a multivariable model suggested that respondents who had spent less time in practice tended to agree more with this "fatalistic" attitude, as did those who held the opinion that negative client perceptions deterred them from using ppe. furthermore, individuals with higher risk perception scores (i.e. who believed they tended to have a medium to high risk of exposure to zoonoses from clinical work) were more likely to agree that they "just hope for the best" (table 2). a regression model was also constructed for the statement, "if i use ppe, others in my workplace think that i am being unnecessarily cautious". explanatory variables included an interaction between gender and profession; nurses, particularly male nurses, were more likely to agree, whereas there was no significant gender difference among veterinary surgeons. the aim of this research was to explore zoonotic disease risk perceptions within a cross-section of the veterinary profession in britain, and to identify barriers and motivators towards infection control practices and the use of ppe to minimise the risk of disease transmission. the large proportion of respondents (44.7%) who had contracted either a confirmed or suspected occupationally-acquired zoonotic infection highlights the level of occupational risk encountered by veterinary surgeons and veterinary nurses. a substantial proportion of respondents (34.9%) stated they were concerned about the risk of zoonoses, and the majority thought the highest risk of transmission was through accidental injury, despite few reported zoonoses in the study being transmitted this way. this dissonance may reflect other occupational risks encountered by veterinary professionals, of which zoonotic diseases represent only a small proportion.
data from studies conducted overseas suggest veterinary medicine is a high-risk profession. in one survey of australian veterinary professionals, 71% reported at least one physical injury over a 10-year period (phillips et al., 2000). in addition to practice-acquired injuries, such as dog and cat bites, scalpel blade cuts and lifting of heavy dogs, the risk of car accidents was also noted (phillips et al., 2000). further research in the german veterinary profession highlighted workplace accidents as the most prevalent occupational hazard (87.7%), followed by commuting accidents (8.2%); occupationally-acquired zoonoses represented only 4.1% of the total hazards in the study (nienhaus et al., 2005). practitioners are clearly working in a risky environment, particularly large animal vets, where farm environments are known to be inherently dangerous. a total of 7 fatal injuries and 292 major injuries were reported in british farmers or farmworkers in 2013-2014 (hse, 2014), and a recent survey by the british equine veterinary association revealed that, on average, equine vets sustain seven to eight work-related injuries over a 30-year period (beva, 2014), highlighting just how hazardous these environments can be. few data are available on occupational injuries in the british veterinary profession; however, for those living or working in what could be interpreted as a high-risk environment, constant exposure to risk may lead to habituation to, or normalisation of, risk (clouser et al., 2015). individuals in this study who tended to grade common clinical scenarios as posing a moderate to high risk of zoonosis exposure were also more likely to "just hope for the best", perhaps suggesting they have normalised these situations and do not perceive them as requiring additional precautions.
within the veterinary environment, it is also possible that risks are rationalised; when faced with a very tangible risk of accident or injury, the more imperceptible risk of zoonotic infection becomes less important. this rationalisation of risk is also noted in the healthcare profession, where healthcare workers are more careful when handling sharps than they are in complying with ipc practices for infectious diseases (nicol et al., 2009). the invisibility of the disease also plays a role here; the pathogens are not visible, therefore the perception of the risk they pose is more abstract. in addition, there is often a time lapse between exposure to the pathogen and onset of clinical signs, making an association between suboptimal ipc behaviour and outcome difficult to draw (cioffi and cioffi, 2015). in the uk, personal risk receives little attention in the veterinary profession's media, especially when compared with issues such as mental health, with reports of high levels of psychological distress and suicide in the profession (bartram et al., 2010) and inclusion of issues around stress and mental wellbeing in surveys (vet futures, 2015) and veterinary curricula. this makes zoonotic disease risk less visible and may subject it to an availability heuristic, where the likelihood of an event is judged by how easily an instance comes to mind (tversky and kahneman, 1974). the absence of diseases such as rabies from the uk may also mean that veterinary professionals underestimate the risk of zoonoses because they consider the impacts to be relatively minor, short-term and treatable. this affect heuristic may be especially pronounced when decisions are made under time pressure (finucane et al., 2000), perhaps reflected in this study's finding that those who viewed time constraints as a barrier to their use of ppe had less positive attitudes towards it.
the disconnect between risk perception and health-protective behaviour in the present study could be explained by perceived vulnerability. a risk might be acknowledged, yet if an individual does not feel vulnerable to this risk, there is no motivation or intention to change their behaviour. this perceived vulnerability is one of the factors considered in protection motivation theory, where concern about a potential threat influences perception of the risk, i.e. the more concerned an individual is about a disease, the higher the risk they perceive it to pose. if an individual feels vulnerable, this acts as a motivator for behaviour change (schemann et al., 2013). this behavioural model has been applied to horse owners following the equine influenza outbreak in australia, where different levels of perceived vulnerability were identified in a cross-section of the equine sector (schemann et al., 2011, 2013). perceived vulnerability may be influencing health-protective behaviour in the present study. it is possible that veterinary professionals, because they feel knowledgeable about zoonotic diseases, feel less vulnerable to the risks they pose. this lack of perceived vulnerability may account for the substantial proportion of respondents who stated they would not use ppe when handling clinically sick animals, perhaps because they are confident in their ability to identify those cases with potentially zoonotic or infectious aetiologies. identification of risk to self as a motivating factor was associated with a more "positive attitude" towards ppe use, but being a nurse was independently correlated with both of these variables. possibly because nurses often have less influence in decisions over diagnostics or handling of cases, they may feel more vulnerable. protection motivation theory is only one of numerous health behaviour models that have been applied to both medical and veterinary research.
these models are useful for explaining behaviour change in relation to infection control or biosecurity; however, they have had limited success in practice (pittet, 2004). the main criticism of these models is that they assume behaviour is rational, controllable and therefore modifiable (cioffi and cioffi, 2015). in reality, behaviour is affected by many external influences, such as culture and society. society and culture are fluid, constantly changing concepts, which makes incorporating them into behavioural models problematic. so while these models are useful in explaining behaviour change to a certain extent, to gain a full understanding of what drives or inhibits behaviour change, social psychology and qualitative research are essential for making real impacts on practice. in the current study, individuals motivated by sops were found to have more positive attitudes towards ppe and also to report better compliance with ppe guidelines for medium-risk scenarios, such as dermatology cases and handling excreta. the "positive attitude" construct, related to self-efficacy, knowledge and confidence in equipment and practices, also clustered with a feeling that there is an expectation to demonstrate good practice. this could be a reflection of the influence of practice culture on behaviour. in human healthcare, organisational factors have been identified as one of the main drivers behind poor compliance with ipc practices (cumbler et al., 2013; de bono et al., 2014). as compliance with infection control intersects individual behaviour and the cultural norms of the practice, the culture of veterinary practice will also be influencing behaviour surrounding infection control. it appears from the present study that when veterinary practices promote a culture of positive health behaviour and have high expectations of employees, this acts as a motivator for compliance with ipc practices.
this highlights that behaviour change should also be implemented at an organisational level, rather than just focussing on individual behaviour. veterinary surgeons were more concerned than nurses that using ppe would be perceived negatively by clients. this attitude could reflect the importance of the vet-client relationship in veterinary practice. this is particularly relevant in farm animal practice, where vet-farmer relationships are often cultivated over extended time periods and each individual agricultural client represents a significant proportion of a practice's income. respondents working in large animal practice were more likely to be motivated to use ppe by liability concerns, again potentially a reflection of the pressure felt by veterinary professionals from their clients. this is an interesting dichotomy, as the use of ppe protects not only the practitioner but also the animal from zoonotic disease transmission. educating farm clients as to what infection control practices they should expect during clinical work on the farm may help mitigate concerns about negative client perceptions. choices around ppe use appear to be specific both to individuals and to contexts, as demonstrated by the low correlation between ppe scores in different clinical scenarios. this finding that protocols are often adapted to a specific situation has been observed previously in veterinary professionals (enticott, 2012). the models that people construct to inform their behavioural decision making are highly individual and influenced by their biology and environment, but also by their past experiences (kinderman, 2014). in the present study, previous experience of treating zoonotic cases was correlated with lower concern about potential barriers to ppe use. this may suggest that practical experience of dealing with zoonoses is more influential than theoretical knowledge in negating negative attitudes to ppe use.
a limitation of this study, as with any questionnaire-based study, is that self-reported behaviours may not necessarily reflect actual practice. this discrepancy between reporting behaviours and actually performing them has been observed previously, particularly in relation to infection control practices and hand hygiene. one uk-based study highlighted no association between self-reported and observed hand-hygiene practices in a sample of healthcare professionals (jenner et al., 2006), reflecting how self-reported behaviour should be interpreted with caution in any context. observation is considered the gold-standard method of assessing behavioural practices; however, it is still subject to observer bias (racicot et al., 2012), and video recording has recently been used to monitor hand hygiene practices (boudjema et al., 2016). these methods could also be effectively applied in a veterinary context, and qualitative research methods, such as ethnography, would also provide valuable insights into the culture and practices of infection control and health-protective behaviours in veterinary practice. the veterinary practices invited to take part in this study were randomly selected, using systematic random sampling, from the rcvs database. using the rcvs database to sample the veterinary profession is an established method of sampling this target population and has been used previously for other research studies (nielsen et al., 2014). the selection of practices was random; however, the selection of participants at each practice may have been subject to selection bias. to facilitate a greater response rate, where data were available, individual respondents at each practice were selected from the rcvs register. to ensure this was consistent, the principal veterinary surgeon and head nurse were selected for each practice.
using individual names may have increased the likelihood of the participant responding; however, this may have introduced some selection bias, as the selected participants are likely to be more experienced professionals. our results suggested that some workplace factors, such as sops and the expectations of colleagues, influenced respondents' perceptions of and attitudes to ppe use. these might be expected to cluster within a practice; the responses from a veterinary surgeon and a nurse at the same practice might not be completely independent. however, it was not feasible to introduce practice as a random effect, as not enough practices returned two responses (22.2% returned responses from both a veterinary nurse and a veterinary surgeon at the same practice). as with any questionnaire-based research, this study will be subject to an element of responder bias, and the relatively low response rate of this study may accentuate this bias. this is particularly evident with male nurses, who are few in number, making them difficult to target using random selection methods. according to the latest rcvs annual report, male nurses represented just 2.1% of the total veterinary nurse population in the uk (rcvs, 2014); in the present study, 6% (95% ci 1.7-10.4) of respondents were male nurses. the rcvs database used to sample the veterinary population for this study does not contain information on specialism or type of practice; therefore, it is not possible to assess whether this sample is representative of the wider veterinary profession. however, the demographic data on respondents are similar to data from the rcvs annual report: the mean age in our study was 42 years, compared with 41 years in the annual report. in addition, the gender split was similar; in our study, 61.1% (95% ci 55.1-67.1) of respondents were female, and the rcvs reported that 57.1% were female (rcvs, 2014).
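the confidence intervals quoted above are consistent with a normal-approximation (wald) interval for a proportion; for example, 61.1% female respondents with an assumed denominator of n = 252 reproduces the quoted 55.1-67.1%:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """normal-approximation (wald) 95% confidence interval for a proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# 61.1% female respondents, assuming the full n = 252 as the denominator
ci_low, ci_high = wald_ci(0.611, 252)
print(f"61.1% (95% ci {ci_low:.1%}-{ci_high:.1%})")  # prints 55.1%-67.1%
```

note that the wald interval is unreliable for rare categories such as male nurses (6% of respondents), where an exact or wilson interval behaves better near the boundary.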
despite similarities between the respondents and the veterinary population in the uk, the low response rate means the results from this sample may not necessarily be generalisable to the wider veterinary population. however, this study is the first to provide these baseline data on attitudes and beliefs regarding zoonoses in the british veterinary population, which can be built on in future studies. the majority of respondents worked in small animal practice, which partly reflects the distribution of british practice types; but, as the questionnaire was posted to the practice, it may have been easier for small animal practitioners to respond, as the majority of their time is spent on the practice premises. this means the study may be more representative of small animal veterinary professionals than of large animal and equine practice. to address this in future studies, stratified sampling would be a useful method to ensure representative samples from each sector of the veterinary profession. this study aimed to investigate risk perceptions of zoonotic disease transmission in the veterinary profession in britain. the high infection rate within the profession suggests transmission of zoonotic infections from patient to clinician should be of concern. this study identified several concepts that were reported to influence the use of ppe, including a fatalistic attitude, the social environment and an individual's position within the practice. improving the education provided to veterinary professionals may help improve compliance with sops and infection control practices to a certain extent; however, this study has highlighted that increased knowledge does not necessarily equate to exhibiting risk-mitigating behaviour. this suggests the construction of risk is complex, circumstance-specific and can be influenced by a number of different internal and external factors.
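the stratified sampling suggested above could be sketched as follows; the register structure and sector labels are hypothetical, standing in for a practice database annotated by sector:

```python
import random

def stratified_sample(register, stratum_key, per_stratum, seed=1):
    """draw a simple random sample of fixed size from each stratum of a register."""
    rng = random.Random(seed)
    strata = {}
    for record in register:
        strata.setdefault(record[stratum_key], []).append(record)
    return {name: rng.sample(members, min(per_stratum, len(members)))
            for name, members in strata.items()}

# hypothetical register with a per-practice sector label
register = (
    [{"id": i, "sector": "small animal"} for i in range(60)]
    + [{"id": 100 + i, "sector": "large animal"} for i in range(25)]
    + [{"id": 200 + i, "sector": "equine"} for i in range(15)]
)
sample = stratified_sample(register, "sector", per_stratum=10)
print({sector: len(members) for sector, members in sample.items()})
```

drawing a fixed number per sector guarantees the minority sectors (large animal, equine) are represented, at the cost of requiring sector labels in the sampling frame, which the rcvs database used here did not provide.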
a qualitative study using mixed methods, including in-depth interviews and focus group discussions, to explore the construction of risk in the veterinary profession is currently being developed to understand these concepts in more depth.

references:
survey reveals high risk of injury to equine vets
a review of published reports regarding zoonotic pathogen infection in veterinarians
interventions with potential to improve the mental health and wellbeing of uk veterinary surgeons
numerical ecology with r
journal of nursing & care
hand hygiene analyzed by video recording
challenging suboptimal infection control
keeping workers safe: does provision of personal protective equipment match supervisor risk perceptions?
risks of zoonoses in a veterinary service
culture change in infection control
mrsa in equine hospitals and its significance for infections in humans
organizational culture and its implications for infection prevention and control in healthcare institutions
zoonotic disease risk perceptions and infection control practices of australian veterinarians: call for change in work culture
the local universality of veterinary expertise and the geography of animal disease
occupational health hazards in veterinary medicine: zoonoses and other biological hazards
psy: various procedures used in psychometry
the affect heuristic in judgments of risks and benefits
injury in australian veterinarians
molecular epidemiology of methicillin-resistant staphylococcus aureus isolated from australian veterinarians
a survey of zoonotic diseases contracted by south african veterinarians
health and safety in agriculture in great britain
methicillin-resistant staphylococcus aureus colonization in veterinary personnel
a survey of the risk of zoonoses for veterinarians
infection prevention as a show: a qualitative study of nurses' infection prevention behaviours
discrepancy between self-reported and observed hand hygiene behaviour in healthcare professionals
carriage of methicillin-resistant staphylococcus aureus by veterinarians in australia
new laws of psychology: why nature and nurture alone can't explain human behaviour
hand-hygiene practices and observed barriers in pediatric long-term care facilities in the new york metropolitan area
adherence to surgical hand rubbing directives in a
a survey of veterinarian involvement in zoonotic disease prevention practices
significant injuries in australian veterinarians and use of safety precautions
evaluation of specific infection control practices used by companion animal veterinarians in community veterinary practices in southern ontario
hand hygiene practices of veterinary support staff in small animal private practice
the power of vivid experience in hand hygiene compliance
survey of the uk veterinary profession: common species and conditions nominated by veterinarians in practice
work-related accidents and occupational diseases in veterinarians and their staff
disease and injury among veterinarians
the lowbury lecture: behaviour in infection control
rcvs facts
evaluation of the relationship between personality traits, experience, education and biosecurity compliance on poultry farms in québec, can
horse owners' biosecurity practices following the first equine influenza outbreak in australia
perceptions of vulnerability to a future outbreak: a study of horse managers affected by the first australian equine influenza outbreak
occupational health risks in veterinary nursing: an exploratory study
judgment under uncertainty: heuristics and biases
report of the survey of the bva voice of the profession panel
the role of education in the prevention and control of infection: a review of the literature
occupational health and safety in small animal veterinary practice: part i - nonparasitic zoonotic diseases
suspected transmission of methicillin-resistant staphylococcus aureus between domestic pets and humans in veterinary clinics and in the household
infection control practices and zoonotic disease risks among veterinarians in the united states
perceived threat, risk perception, and efficacy beliefs related to sars and other (emerging) infectious diseases: results of an international survey

the authors gratefully acknowledge all participating veterinary nurses and veterinary surgeons, and dr j.l. ireland for her guidance and advice. this work was supported by the national institute for health research health protection research unit (nihr hpru) in emerging and zoonotic infections at the university of liverpool in partnership with public health england (phe), in collaboration with the liverpool school of tropical medicine. charlotte robin is based at the university of liverpool. the views expressed are those of the author(s) and not necessarily those of the nhs, the nihr, the department of health or public health england. no competing interests were declared. approval for this study was agreed by the anglia ruskin university faculty of health, social care and education research ethics panel.

key: cord-016173-ro7nhody authors: louis, mariam; oyiengo, d. onentia; bourjeily, ghada title: pulmonary disorders in pregnancy date: 2014-08-13 journal: medical management of the pregnant patient doi: 10.1007/978-1-4614-1244-1_11 sha: doc_id: 16173 cord_uid: ro7nhody

pregnancy is associated with some profound changes in the cardiovascular, respiratory, immune, and hematologic systems that impact the clinical presentation of respiratory disorders, their implications in pregnancy, and the decisions to treat. in addition, concerns for fetal well-being and the safety of various interventions complicate the management of these disorders. in many circumstances, especially life-threatening ones, decisions are based upon a careful assessment of the risk-benefit ratio rather than the absolute safety of drugs and interventions. in this chapter, we review some of the common respiratory disorders that internists or obstetricians may be called upon to manage. asthma is the most common respiratory disease during pregnancy. asthma affects 4-8% of pregnancies in the united states and up to 12% in the united kingdom and australia. differences in prevalence around the world may be related to reporting methods, diagnostic methods, or possibly some environmental or genetic influences. pregnancy is a state of important physiological changes in the respiratory system. these physiological changes vary across the course of the pregnancy and are summarized in table 11.1, which gives normal values in pregnancy: pao2 (mmhg) 105-106 in the first trimester and 101-106 by the third trimester; paco2 (mmhg) 28-29 in the first trimester and 26-30 by the third trimester; ph 7.43; hco3 (meq/l) 17-18. abbreviations: tlc total lung capacity, erv expiratory reserve volume, rv residual volume, frc functional residual capacity, vc vital capacity, ic inspiratory capacity, irv inspiratory reserve volume, fev1 forced expiratory volume in 1 s, fvc forced vital capacity, pao2 partial arterial pressure of oxygen, paco2 partial arterial pressure of carbon dioxide. the course of asthma during pregnancy is variable.
the majority of patients who improve in pregnancy tend to worsen in the postpartum period and vice versa [1]. in general, asthma improves toward the end of the pregnancy, including labor and delivery. however, the rate of asthma exacerbations is increased between gestational weeks 17 and 32 [1, 2]. this may in part be due to medication noncompliance during the earlier part of the pregnancy upon discovery of the pregnancy, but may also have to do with other pregnancy-related factors such as esophageal reflux, nasal congestion, hormonal factors, and alterations in immunity that may result in increased susceptibility to infections. the major predictor of disease course is the severity of asthma prior to the pregnancy, but race and obesity may also play a role. african american and hispanic women are more likely to have asthma exacerbations. poor compliance with medications and difficulties with access to medical services may be important confounders. additionally, obese women tend to have more severe asthma, as both asthma and obesity share a common inflammatory pathway at the cellular level. asthma also tends to behave in a similar fashion in subsequent pregnancies. while well-controlled asthma does not appear to have adverse consequences during pregnancy, poorly controlled asthma may negatively impact some maternal and fetal outcomes. in the largest study performed to date, on over 37,000 women with asthma and over 280,000 controls, asthmatic women were more likely to have pregnancies complicated by miscarriage, antepartum and postpartum hemorrhage, anemia, and depression [3]. however, the risk of other negative outcomes such as gestational hypertensive disorders and stillbirths was not significant in this study. in other large studies, a small but statistically significant risk of perinatal mortality, preeclampsia, and preterm deliveries has been reported [4, 5].
a more recent retrospective cohort study performed in 12 clinical centers in the united states has shown an increased risk of preeclampsia, gestational diabetes, and all preterm births [6]. secondary analysis of a recent randomized controlled trial showed that women with a perception of good asthma control had a reduced risk of planned cesarean deliveries, asthma exacerbations, and preterm birth [7]. in the same study, women with increased anxiety had a higher risk of exacerbations. there is some evidence suggesting that poorly controlled asthma also confers an increased risk of small-for-gestational-age infants and low birth weight [8]. growth restriction may, however, be confounded by smoking. babies born to severe asthmatics are possibly more likely to have congenital anomalies [5]. the treatment of asthma involves assessment and management from preconception to the postpartum period. please refer to table 11.3 and figure 11.1 for a general overview of the classification and management of chronic asthma. there are four general components of asthma care, irrespective of gestational age: (1) monitoring of respiratory status, (2) avoidance of possible triggers, (3) patient education, and (4) pharmacological treatment. patients should get a baseline spirometry and be instructed in how to follow their peak expiratory flow rate (pefr) at home. ideally, this should be done twice a day in patients with persistent disease. since pregnancy does not affect flow rates, reductions in these numbers usually indicate a worsening degree of airflow obstruction and should prompt quick medical evaluation. second, it is critical that patients avoid their known asthma triggers, including tobacco, dust, extreme temperatures, and allergens such as pollen and pet dander. third, patients need to be educated about their disease.
Pregnancy constitutes a perfect window to educate women, given the multiple contacts with providers and increased motivation due to concerns for fetal well-being. Trigger-control measures, from washing bed sheets to vacuuming to rodent control, are important strategies to review, especially since in most circumstances women are more likely to be exposed to these triggers. Important topics that need to be reviewed also include inhaler technique, early recognition of symptoms of worsening asthma, an action plan for acute asthma exacerbations, as well as an overview of how poorly controlled asthma can affect the pregnancy. Patients should also be provided with the opportunity to express their concerns and ask questions. In a multi-institutional prospective study, lower forced expiratory volume in 1 s (FEV1), but not asthma symptom frequency, was shown to be associated with adverse perinatal outcomes [9]. These data may be a reflection of the effect of asthma severity or poor asthma control on perinatal outcomes and emphasize the possibility of discrepancies between symptom-based assessment and more objective measurement of lung function in pregnant women with asthma. Finally, women with asthma need to receive the appropriate pharmacological treatment to achieve disease control. Population-based data do show that well-controlled asthmatics without exacerbations have better outcomes than women with exacerbations, but for obvious reasons, there are no randomized controlled trials evaluating this particular question. Although most clinical practices use symptom-based, guideline-directed assessments to decide on medication use, recent data from a randomized controlled trial suggest lower rates of exacerbation, improved quality of life, and reduced neonatal hospitalization when management decisions were based on measurements of exhaled nitric oxide in pregnancy [10].
It is likely that this improvement in outcomes is due to improved control rather than the method of assessment itself. Table 11.2 provides an overview of the asthma medications that are used in pregnancy. As in the nonpregnant population, the choice of pharmacological agent depends on disease severity. A frank discussion with the expectant mother and her partner should occur to encourage them to voice their concerns regarding asthma treatment in pregnancy. Most women are told to stop their inhalers at the time of pregnancy diagnosis because of FDA category listing. For that reason, a good amount of time should be spent on counseling about the use of asthma drugs in pregnancy. Explaining to women that asthma control is key to the health of the pregnancy and their baby is an important part of counseling and may have to be done repeatedly during the course of pregnancy. In general, most asthma medications are justifiable in pregnancy, and some have adequate safety data. As noted in Table 11.2, many of the drug choices are Category C according to the FDA classification; however, these drugs are used routinely in the care of pregnant women with asthma. In addition, although leukotriene inhibitors are listed as Category B, their safety data are less reassuring than those of other drugs classified as Category C. Omalizumab is classified as Category B by the FDA despite the fact that all of the initial trials excluded pregnant women. These safety data are based on animal studies, which are limited by the fact that teratogenicity may be species specific. In addition, although prednisone may be associated with a small risk of cleft palate when administered in early pregnancy, the benefit of this drug in an acute exacerbation of asthma by far outweighs the small risk of malformation. Table 11.3 reviews the classification of asthma severity, which includes not only symptoms but also peak flow meter measurements.
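The home peak-flow monitoring described above is commonly operationalized with standard action-plan zones based on a patient's personal best (green at or above 80%, yellow at 50-80%, red below 50%). A minimal sketch, assuming those conventional cutoffs; the function name and thresholds are illustrative, not taken from this chapter:

```python
def pefr_zone(measured_l_per_min, personal_best_l_per_min):
    """Classify a home peak expiratory flow (PEFR) reading into the
    conventional asthma action-plan zones. Reductions in these numbers
    usually indicate worsening airflow obstruction and should prompt
    quick medical evaluation."""
    pct = 100.0 * measured_l_per_min / personal_best_l_per_min
    if pct >= 80:
        return "green"   # good control
    if pct >= 50:
        return "yellow"  # caution: follow the written action plan
    return "red"         # medical alert: seek care promptly
```

For example, a patient whose personal best is 450 L/min and who measures 300 L/min (about 67%) falls in the yellow zone.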
Other coexisting diseases may worsen asthma and may have to be treated in order to achieve optimal control. The most common of these disorders are allergic rhinitis, gastroesophageal reflux disease (GERD), sleep apnea, and psychiatric illnesses. Allergic rhinitis occurs in 80-90% of nonpregnant asthmatics and worsens asthma symptoms. Management of the allergic rhinitis with drugs such as steroidal nasal sprays often improves asthma symptoms. Women who are pregnant can also develop a different form of rhinitis, called rhinitis of pregnancy. This typically occurs in the latter part of pregnancy and resolves completely within 2 weeks after delivery. The prevalence of GERD among nonpregnant asthmatics varies between 30 and 90%. In pregnant women with asthma, this number is likely higher given that GERD has been reported to be present in nearly 75% of all pregnant women [11]. GERD can worsen bronchoconstriction via increased vagal tone, heightened bronchial reactivity, and microaspiration of gastric contents into the upper airway. Patients who have symptoms of GERD benefit from treatment. Although proton pump inhibitors are not expected to increase the risk of congenital malformation based on experimental animal studies and limited human pregnancy exposures, ranitidine constitutes a safer first choice. Finally, asthma and psychiatric comorbidities may coexist. Stress and mental illness can worsen asthma in pregnant women and may also complicate compliance. During labor, the general management of asthma is not significantly different from the above. Most patients with asthma do not require a labor and delivery plan. However, patients with more severe disease or those who suffered an exacerbation close to term would require a detailed plan. Stress dosing with steroids during labor can be considered in patients who have been on prolonged periods of systemic steroids during the pregnancy.
Patients with active symptoms or more severe asthma may benefit from regional anesthesia. Epidural anesthesia reduces minute volume and oxygen consumption and may help prevent hyperinflation in patients with active symptoms. If general anesthesia is to be considered, then ketamine and halogenated anesthetics are preferred. It is safe to use oxytocin and prostaglandin E2. However, ergotamine and ergot derivatives, 15-methyl prostaglandin F2 alpha, morphine, and meperidine should be avoided in pregnant women with asthma, as they may be associated with an increased risk of bronchospasm. An overview of the management of acute asthma exacerbations in the pregnant woman is detailed in Fig. 11.2. More detailed information can be found in the National Heart, Lung, and Blood Institute guidelines on asthma and pregnancy published in 2004. The treatment is similar to that of nonpregnant women, with a few key differences that need to be highlighted. The first is to remember that during pregnancy, the normal PaCO2 is lower than in the nonpregnant state. Therefore, a normal or high PaCO2 heralds worsening respiratory failure and should be acted upon quickly. Second, hypoxia during asthma exacerbations can lead to fetal distress and decelerations. Therefore, immediate bronchodilators and supplemental oxygen should be administered. Finally, it should be noted that while the indications for airway intubation are the same in the pregnant asthmatic as in the nonpregnant asthmatic, intubation during pregnancy, especially in the third trimester, can be more difficult. This is due to increased airway edema, low FRC and oxygen reserve, and a more profound response to sedatives from decreased venous return. Hence, the most experienced member of the team should perform the intubation and be familiar with difficult airway management procedures. Airway intubation is discussed in more detail in the critical care Chap. 2.
Pneumonia is one of the leading causes of non-obstetric maternal deaths in the United States [12]. There are several categories of pneumonia based on the likely spectrum of pathogens: community-acquired pneumonia (CAP), healthcare-associated pneumonia, hospital-acquired pneumonia, and ventilator-associated pneumonia, as well as pneumonia in the immune-compromised host. As pregnant women are usually young and healthy, CAP predominates. The overall rate of CAP in pregnant women is 0.5-1/1,000 pregnancies depending on the population being studied [13-15]. The risk of pneumonia is notably increased in gravidas with comorbid conditions such as asthma, anemia, and human immunodeficiency virus [16]. Tobacco and substance abuse have also been independently associated with an increased risk for pneumonia. Influenza increases the risk for development of bacterial pneumonia by denuding the respiratory epithelium and predisposing the host to infection. In adults, the causative agents for CAP are identified in 40-60% of cases when advanced testing techniques are utilized [17, 18]. The yield is much lower, in the range of 10-25%, with regular testing. Though specific studies in pregnant women are lacking, the likely pathogens are not considered to be significantly different [19]. Pregnant women may be more likely to contract viral infections and tend to have more severe disease than the nonpregnant population. Therefore, the estimates above may be somewhat different in pregnancy. Gingival hyperplasia in pregnancy may promote changes in oral flora and promote growth of anaerobic bacteria. Aspiration risk and heartburn [11] may be increased in pregnancy, especially when undergoing sedative procedures or general anesthesia. Whether these changes and increased gastroesophageal reflux disorders are associated with an increased risk of pneumonia is not clear.
Immune alterations in pregnancy that promote maternal tolerance to the fetus may impair optimal function of host defense mechanisms and increase the risk of infections. Pregnant women have decreased lung capacity and decreased ERV and RV, resulting in a reduction in functional residual capacity. A state of compensated respiratory alkalosis is established by increasing minute ventilation. This is largely secondary to an increase in tidal volume and, to a lesser extent, an increase in respiratory rate. Healthy gravid subjects have increased cardiac output and decreased oncotic pressure, changes that peak in the third trimester and promote transudation of fluid into the pulmonary interstitium. These changes diminish oxygen reserve, increase the risk of developing pulmonary edema with fluid resuscitation, and predispose women to respiratory failure and more severe disease. Pneumonia may be complicated by hypoxia, respiratory failure, or death, and preterm delivery appears to be the most common obstetric complication associated with maternal pneumonia. While intrauterine infection is known to cause preterm delivery, a causal relationship between pneumonia in pregnancy and preterm delivery is not well established. It is possible that the higher levels of cytokines and other mediators such as TNF-α and prostaglandin F2 reported in bacterial infections may lead to preterm delivery and low birth weight. Other reported complications include placental abruption, preeclampsia and eclampsia, and low Apgar scores [20-22]. It is unclear, however, whether these complications are related to the actual infection or to other host factors. Common causes of respiratory distress in pregnancy include infection such as urinary tract infection, pulmonary edema, asthma, aspiration, and pulmonary embolus. The clinical spectra of pneumonia caused by different pathogens overlap considerably.
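The ventilatory change described above, a rise in minute ventilation driven mainly by tidal volume, is simple arithmetic. This sketch uses illustrative values (not figures from the text) to show how a tidal-volume increase alone raises minute ventilation:

```python
def minute_ventilation_l_min(tidal_volume_ml, respiratory_rate_per_min):
    # Minute ventilation = tidal volume x respiratory rate,
    # converted from mL/min to L/min.
    return tidal_volume_ml * respiratory_rate_per_min / 1000.0

# In pregnancy the increase comes largely from tidal volume,
# with only a small contribution from respiratory rate:
nonpregnant = minute_ventilation_l_min(500, 12)  # 6.0 L/min
pregnant = minute_ventilation_l_min(700, 13)     # 9.1 L/min
```

The resulting increase in alveolar ventilation is what produces the compensated respiratory alkalosis of normal pregnancy.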
A thorough history and examination, along with microscopic examination of respiratory secretions, may narrow the differential diagnosis and identify the offending pathogen. Urine pneumococcal and Legionella antigens may also aid in guiding antibiotic therapy and should be considered for patients requiring admission. During influenza season, a respiratory viral panel should be sent. Though blood cultures are usually negative and of low yield, they may add value for the patient requiring admission to the intensive care unit (ICU). An arterial blood gas should be done for all patients with hypoxia or those requiring admission to the ICU and interpreted in light of the pregnant state. A chest X-ray should be performed in patients suspected of having pneumonia; it helps confirm the diagnosis or show evidence of a complicated pneumonia such as lung abscess or pleural effusion. Computed tomography is unlikely to add value in the management of pneumonia unless empyema is suspected. Ultrasound guidance likely reduces the risk of complications with thoracentesis given the cranial displacement of the diaphragm in pregnancy. Bronchoscopy, though rarely needed, can be performed safely in pregnancy and should not be withheld when indicated. General supportive measures are similar in patients with various types of pneumonia. For patients with a viable fetus who require admission, the obstetric team should be consulted for fetal monitoring as well as timing of delivery in the event of fetal distress. Hypoxia, acidosis, and fever should not be tolerated, as they are independently associated with poor fetal outcomes. Oxygen should be supplemented for goal saturations >95% or PaO2 above 70 mmHg. Fever should be treated aggressively for a goal temperature of less than 38 °C. In cases of severe pneumonia associated with respiratory failure, early intubation should be considered. Intubations in pregnancy have a higher failure rate than in the general surgical population (see Chap.
2 on airway intubation). Attempts to maintain CO2 within an acceptable range may be challenging in the event of acute respiratory distress syndrome (ARDS) and the use of lung-protective strategies. A low-tidal-volume ventilation strategy with a target tidal volume of 6 mL/kg is recommended for ARDS [23]. Though pregnant women were excluded from the Acute Respiratory Distress Syndrome Network studies on lung-protective strategies, low-tidal-volume ventilation should be attempted, initially with a higher respiratory rate to maintain ventilation, given the survival benefit observed in the nonpregnant population. However, higher tidal volumes may be required to correct acidosis that may compromise the fetus; in such instances, attempts should still be made to keep the plateau pressure below 30 cm of water, as barotrauma is thought to contribute significantly to lung injury. PaCO2 levels need to be watched closely, and given the 10 mmHg gradient between fetal and maternal levels, maternal PaCO2 should be kept at 55 mmHg or lower. Use of bicarbonate to correct the pH has been suggested in the nonpregnant population, though clinical studies to support this approach are limited. It is thought that the transfer of bicarbonate across the placenta is slow and may not be adequate to correct fetal acidosis. While the decision to admit patients to the ICU is complex and should be individualized, clinicians should have a lower threshold when evaluating pregnant mothers. Antibiotic therapy should be initiated empirically while awaiting confirmatory tests that may aid in narrowing the antimicrobial coverage. In influenza season, an antiviral (usually oseltamivir) should be started empirically as well. Decisions about antibiotic choice should address the most likely pathogen and adverse effects on the mother, and should also weigh the risk of the specific drug to the fetus against the risk of inappropriately treated disease.
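The ventilator targets described above (a tidal volume of 6 mL/kg of predicted body weight and a maternal PaCO2 at or below 55 mmHg) can be sketched numerically. This uses the ARDSNet predicted-body-weight formula for women; the helper names and the example height are illustrative assumptions, not values from the text:

```python
def predicted_body_weight_female_kg(height_cm):
    # ARDSNet predicted body weight for women:
    # 45.5 kg + 2.3 kg per inch of height over 60 in (152.4 cm).
    return 45.5 + 2.3 * ((height_cm - 152.4) / 2.54)

def target_tidal_volume_ml(height_cm, ml_per_kg=6):
    # Low-tidal-volume target for ARDS: 6 mL/kg of predicted body weight.
    return ml_per_kg * predicted_body_weight_female_kg(height_cm)

def maternal_paco2_acceptable(paco2_mmHg, ceiling_mmHg=55):
    # Keep maternal PaCO2 at or below 55 mmHg, reflecting the
    # ~10 mmHg fetal-maternal gradient described in the text.
    return paco2_mmHg <= ceiling_mmHg
```

For a woman 162 cm tall, predicted body weight is about 54 kg, giving an initial tidal-volume target of roughly 325 mL, with the respiratory rate raised as needed to maintain ventilation.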
An optimal drug would be one with maximal efficacy against the known pathogen and no risk to the fetus. However, such drugs are scarce, and in most circumstances, a drug with more benefit than risk can be selected. Other than concern for fetal safety, the preferred antibiotics are not different from those in nonpregnant women, but dosing should take into account increased hepatic and renal clearance and increased volume of distribution. There is a theoretical concern that aminoglycosides and vancomycin may be associated with hearing and kidney dysfunction in the offspring, but this possibility has not been confirmed clinically. Penicillins, clindamycin, and most macrolides except clarithromycin have a good safety profile. Fluoroquinolones are usually avoided in pregnancy due to a theoretical risk of arthropathy in the offspring. However, some experts argue that this issue is not clinically significant in humans. Tetracyclines should be avoided, as they may cause permanent dental discoloration. Varicella (chicken pox) is caused by varicella zoster virus (VZV). Varicella is predominantly a childhood illness that is usually self-limited and rarely results in severe disease. In adults, however, it is much more likely to be severe. VZV is not only likely to cause increased morbidity and mortality in pregnancy but may also be associated with congenital abnormalities and poor fetal outcomes. Varicella pneumonia is among the most severe maternal complications of VZV infection [24-27]. Viral particles are shed from varicella-associated vesicles and become airborne. Inhalation or contact with the conjunctiva results in contraction of the infection, with entry of the virus through the respiratory mucosa. Crusting over of the last crop of vesicles usually marks the end of the contagious period.
Patients are known to be infectious 2-3 days prior to development of the vesicular rash; for this reason, an alternative viral shedding site such as the respiratory tract is believed to exist [28]. Varicella is highly contagious with seasonal variation in incidence, being most prevalent in the winter and spring. It has a very high clinical attack rate of 65-86% following exposure of susceptible individuals [29]. Following a primary infection with varicella, lifelong immunity is usually established in the majority of subjects; in a few people, however, second attacks of varicella may occur [30]. While varicella follows a benign course in children, adults have up to a 25 times increased risk of severe disease [31]. Pregnant women are at a uniquely increased risk for infection. In the United States, the incidence of primary varicella averages 0.7-3 cases/1,000 pregnancies. Varicella pneumonia complicates 10-20% of all cases, and 40% of mothers with pneumonia require mechanical ventilation [32, 33]. Maternal mortality from varicella pneumonia used to be as high as 20-45% before the introduction of antiviral therapy but is currently estimated at 3-14% [34, 35]. Changes in physiology and immunity associated with pregnancy may increase the risk of infection and severe outcomes in pregnant women. In an effort to promote maternal tolerance to fetal antigens, pregnancy is associated with a shift from Th1 to Th2 lymphocyte responses and associated cytokines at the maternal-fetal interface. Macrophage- and lymphocyte-secreted Th2 cytokines stimulate B lymphocytes, promoting a humoral response while suppressing cytotoxic lymphocytes. While pregnancy may not necessarily be an immune-suppressed state in the real sense, immunity against VZV infection is primarily cell mediated, and a systemic shift away from cell-mediated immunity may increase susceptibility to intracellular viral pathogens, parasites, and bacteria.
Primary varicella (chicken pox) is associated with several adverse effects in pregnancy, such as preterm delivery and low birth weight. In one study involving 106 pregnant women with varicella compared to a similar number of noninfected controls, 14.3% of pregnant women with chicken pox had a preterm delivery as compared to 5.6% of controls [36]. Low birth weight and intrauterine growth restriction have been described. Nearly 1-2% of cases of maternal primary VZV infection result in congenital varicella syndrome (CVS), which is associated with a mortality of up to 30% in the first few months of life and severe disability in survivors. Primary VZV infection prior to the 20th week of pregnancy is associated with the highest risk for CVS [24, 36]. Clinical features of CVS include skin lesions in a dermatomal distribution that may lead to eventual scarring in up to 70% of cases, muscle and limb hypoplasia in up to 72% of cases, chorioretinitis and cataracts in up to 52% of cases, and abnormalities of the gastrointestinal, genitourinary, and cardiovascular systems in 7-24% of cases [37, 38]. Neurological abnormalities such as mental retardation, microcephaly, and hydrocephalus occur in 48-62% of cases, resulting in learning difficulties and developmental delays [39]. The pathobiology of CVS is thought to be in utero reactivation similar to that of herpes zoster, with a shortened latency period that is likely due to immature fetal cell-mediated immunity. While up to 25% of babies born to mothers with primary VZV infection have serologic evidence of infection, there is no serologic evidence of infection in babies born to mothers with herpes zoster. Similarly, infants do not appear to be at risk of infection if maternal zoster occurs near delivery [40]. Unless disseminated, herpes zoster is thus not associated with a significant increase in adverse fetal outcomes [37, 41].
Peripartum varicella infection places the infant at risk for neonatal varicella, which is associated with a mortality rate as high as 20%. Following a 2- to 3-week incubation period, fever, headache, malaise, anorexia, and other constitutional symptoms precede the occurrence of the rash by 2-3 days. The rash is typically vesicular, generalized, and intensely pruritic. Varicella pneumonia can develop anywhere from day 1 to day 6 after the onset of the rash. Late onset of respiratory symptoms with recurrence of fevers is suggestive of bacterial coinfection rather than primary viral pneumonia. Skin superinfection with staphylococcal bacteremia and neurological involvement with encephalitis may occur. A thorough history and skin exam may strongly suggest the diagnosis of varicella. The chest radiograph pattern in varicella pneumonia is nonspecific and may be normal or show unilateral or patchy areas of consolidation or nodular opacities. CT findings include multicentric hemorrhage and necrosis centered around the airways and small nodular opacities surrounded by ground glass, which may coalesce to form consolidations. Healed and calcified pulmonary nodules may persist [42]. Skin lesion (rather than bronchoscopic) sampling offers a high yield and should be attempted first. The base of newly erupted vesicles has the highest yield and should be sampled. Specimens can then be sent for viral culture, polymerase chain reaction (PCR), and direct immunofluorescence (DFA). The direct fluorescent antibody test is rapidly available in most institutions. Though bronchoscopy in most cases is not necessary, varicella may be recovered from bronchial washings by viral PCR and viral culture techniques. Pregnant women suspected of having varicella should be admitted for initiation of antivirals and other supportive treatment. Chest imaging should be performed on admission to evaluate for pulmonary involvement.
Antiviral therapy is associated with a reduction in the duration of symptoms when initiated within the first 24 h of onset of the varicella rash. Due to the high risk of varicella pneumonia in pregnancy, empiric antiviral therapy should be initiated while awaiting confirmatory results. Acyclovir and valacyclovir are the antivirals of choice. Oral acyclovir has low bioavailability, which requires it to be administered in frequent doses to achieve therapeutic levels. Valacyclovir has high oral bioavailability and less frequent dosing intervals and is an alternative oral formulation. There is, however, less experience with valacyclovir than with acyclovir. The presence of pulmonary symptoms should prompt admission to the ICU and initiation of intravenous acyclovir, which has guaranteed and higher bioavailability. Antiviral therapy is associated with significantly less morbidity and mortality when initiated within 72 h. Late presentation with varicella pneumonia should not obviate the initiation of antiviral therapy. A dose of 10-15 mg/kg intravenously every 8 h for 5-10 days is recommended for VZV pneumonia. Pulmonary bacterial superinfection may occur. Studies characterizing the bacterial pathogens likely to cause superinfection are lacking; thus, empiric broad-spectrum antibiotic coverage should be initiated in pregnant women with pneumonia. Despite acyclovir crossing the placenta in significant amounts, there appears to be no reduction in congenital varicella syndrome with treatment. The neonate should be isolated from the mother in the peripartum period until the mother is deemed noncontagious. Consultation with high-risk obstetrics and neonatology would be useful given the risk of preterm labor and growth restriction. Immunity to varicella consists of both VZV-specific neutralizing antibodies and cell-mediated immunity. Immunity against VZV can be assessed by the use of antibody serologic assays.
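The weight-based intravenous regimen described above (10-15 mg/kg every 8 h for 5-10 days for VZV pneumonia) is a simple per-dose calculation. This sketch is illustrative only, encodes just the range quoted in the text, and is no substitute for a dosing reference:

```python
def acyclovir_iv_dose_mg(weight_kg, mg_per_kg=10):
    """Per-administration IV acyclovir dose for VZV pneumonia,
    using the 10-15 mg/kg every-8-h range quoted in the text."""
    if not 10 <= mg_per_kg <= 15:
        raise ValueError("mg/kg outside the 10-15 range quoted in the text")
    return mg_per_kg * weight_kg
```

For example, a 70-kg patient dosed at 10 mg/kg receives 700 mg per administration, three times daily.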
Though there are no adequate controlled trials examining the effectiveness of VZIG prophylaxis, VZIG is associated with a more than 40-50% reduction in the risk of contracting varicella and a significant reduction in the risk of severe disease [40]. VZV can be prevented by vaccination. The VZV vaccine is a live attenuated vaccine and is generally not recommended in pregnancy or in immune-suppressed individuals. Varicella can be contracted from herpes zoster lesions as well. Family members with such lesions should minimize contact and cover their lesions to decrease the risk of transmission. Healthcare workers who deal with pregnant women should be screened and vaccinated, and similarly, pregnant healthcare workers should avoid contact or exposure to patients with varicella. Infection with influenza virus can result in an acute respiratory illness of varying severity. The majority of healthy individuals infected with influenza are asymptomatic or have minimal symptoms. However, adults with comorbidities, elderly subjects, and otherwise healthy pregnant women are at increased risk of severe disease and death. In addition, influenza infection during pregnancy increases the risk of adverse fetal outcomes. In a regular endemic season, influenza is estimated to result in 200,000 hospitalizations and 36,000 deaths in the United States. Pregnant women are at increased risk for morbidity (including cardiorespiratory complications) and mortality from influenza compared with nonpregnant controls [43-46], a risk that is more pronounced in the second and third trimesters of pregnancy [47]. In 2010, the Pandemic H1N1 Influenza in Pregnancy Working Group reported on 788 pregnant women in the United States with 2009 influenza A (H1N1). Among those, 30 died (5% of all reported 2009 influenza A (H1N1) deaths in this period). Most hospitalizations and deaths occurred in the third trimester [47].
Pregnant women with comorbidities or those who smoke have an increased risk for severe disease requiring hospital admission compared to those without comorbidities [48, 49]. As discussed above, the physiological changes of pregnancy make pregnant women more susceptible to acquiring viral infections and to the subsequent development of severe disease. Apart from direct effects on the mother, influenza has been associated with undesirable effects on the fetus. The risks of adverse fetal outcomes vary with the severity of maternal disease. Preterm delivery appears to be the most common and consistent complication associated with influenza pandemics. In the pandemics of 1918 and 1957, higher rates of pregnancy loss and premature delivery, as well as other adverse effects, were reported. In several reports during the 2009 influenza pandemic among pregnant women requiring admission, preterm delivery occurred in close to 30% of cases and was even more frequent among mothers who were admitted to the ICU [48, 50, 51]. Several other adverse fetal outcomes of maternal influenza have been reported, especially during pandemics, including abortion, fetal distress, and placental abruption [50, 52]. Symptoms of influenza in pregnancy are similar to symptoms outside of pregnancy. Influenza virus-mediated leukopenia may make the host more susceptible to bacterial infections. Secondary bacterial pneumonia is characterized by the appearance of a new fever and productive cough during early convalescence. Radiologic findings are generally similar to those of other viral pneumonias, and more extensive findings are associated with more severe complications. Tree-in-bud opacities may also be seen. Laboratory findings may include an elevated or low white count, lymphopenia, and hyponatremia. Myoglobinuria and renal failure can occur rarely.
Cardiac muscle damage with associated electrocardiographic changes, disturbances of rhythm, and high levels of cardiac enzymes has been reported after influenza virus infection. Sputum cultures may be revealing in the event of bacterial superinfection. Streptococcus pneumoniae, Staphylococcus aureus, Haemophilus influenzae, and group A hemolytic streptococci are the bacterial pathogens most commonly isolated in adults with influenza. A definitive diagnosis of influenza requires laboratory confirmation. Diagnostic tests for influenza fall into four broad categories: virus isolation (culture), detection of viral proteins, detection of viral nucleic acid, and serological diagnosis. Detection of viral nucleic acid allows for typing and subtyping of the specific virus strain. Treatment of influenza consists of supportive management and specific antiviral therapy. Optimizing supportive treatment is central to the management of influenza and probably of more benefit than specific antiviral therapy. Supportive therapy is similar to that for other types of pneumonia, as discussed above. As with most drugs, information about the safety and effectiveness of anti-influenza drugs during pregnancy is scarce. In view of the potential for severe maternal disease from influenza and adverse fetal outcomes, the benefits of treatment with antivirals likely outweigh the potential risks to the fetus. There are two classes of antiviral drugs currently in general clinical use: adamantanes (examples of which include amantadine and rimantadine) and neuraminidase inhibitors such as oseltamivir, zanamivir, and peramivir. Adamantanes are active against influenza A only, face increasing resistance among influenza A strains, and are associated with embryotoxicity in animal studies. As such, they are not recommended in pregnancy. Neuraminidase inhibitors are active against influenza A and B viruses. They are preferred in all adults and in pregnancy.
Though studies in pregnancy are inadequate, extensive use of oseltamivir in pregnancy during the 2009 H1N1 pandemic was not associated with adverse effects specific to the drug. Neuraminidase inhibitors reduce the duration and severity of symptoms and the duration of viral shedding when initiated within 48 h of symptom onset [43, 48, 53, 54]. There is also evidence to support a reduction in complication rate, duration of hospitalization, and mortality in adults. Observational studies published during the 2009 pandemic demonstrated that, among pregnant women hospitalized with pandemic H1N1 infection, treatment with oseltamivir was associated with fewer intensive care unit admissions, less use of mechanical ventilation, and decreased mortality [43, 48]. Empiric treatment should always be initiated in the gravid woman when influenza is suspected, while awaiting confirmatory results, as delay in initiation of treatment is associated with an increased risk of severe outcomes, ICU admission, and death [48, 49, 55]. Pregnant mothers presenting after 48 h of symptom onset should still be initiated on therapy, as there is evidence of benefit even when treatment is started after 2 days of symptom onset. Initiation of antiviral therapy within the first 48 h is associated with the most benefit [43, 48, 49, 53, 56]. There is less experience with zanamivir, which is administered by the inhalation route. Zanamivir is also contraindicated in patients with asthma, as it has the potential to worsen respiratory symptoms [57]. For patients requiring admission to the ICU for influenza pneumonia, or in cases of suspected secondary bacterial infection, empiric antibiotic therapy should be initiated. Sputum culture may be helpful in the case of isolation of resistant bacteria that may warrant changes to or broadening of antibiotic coverage. In pregnant women, influenza vaccination induces an antibody response similar to that in nonpregnant women.
the cdc and who recommend that pregnant women, or women who will be pregnant during the winter or peak influenza season, be prioritized for vaccination. in addition to protecting the mother, influenza vaccination may offer protection to the neonate as well as contribute to herd immunity in other family members. pregnant mothers who have not been vaccinated, or those with comorbidities such as asthma who have been exposed to influenza, may benefit from antiviral prophylaxis. oseltamivir is preferred for prophylaxis due to its ease of administration. sleep-disordered breathing (sdb) is a spectrum of disorders that encompasses snoring and upper airway resistance, obstructive sleep apnea (osa), and other disorders. osa is a disorder characterized by periodic and recurrent collapse of the upper airway during sleep. obesity, age, and upper airway and facial abnormalities are the most recognized risk factors for the disorder. osa is prevalent in patients with chronic hypertension, cardiovascular disease, and metabolic disorders such as diabetes mellitus. the pregnant population appears to be at risk for the disorder given the anatomic upper airway changes that occur in pregnancy as well as physiological and hormonal changes. snoring occurs in close to 35% of pregnant women [58]. the prevalence of osa in pregnancy is not well known, but preliminary data suggest that close to 60% of loud snorers in pregnancy have at least mild osa. the natural history of snoring around pregnancy is, however, unclear. there are some data suggesting that osa actually improves in untreated postpartum women around 3 months after delivery. data on osa predating pregnancy are missing, and pregestational and gestational osa may have different clinical consequences. according to a recent study, there is a significant lack of screening for the disorder by obstetric providers, even in obese patients [59].
notably, the berlin questionnaire, a widely used screening tool in the nonpregnant population, appears to have poor positive and negative predictive values in pregnancy [60]. snoring and excessive daytime sleepiness may be important predictors [61]. chronic hypertension, age, obesity, and snoring appear to have good predictive value for osa in high-risk populations [62]. further validation of this potential predictive model in different pregnant populations is needed. snoring and osa have been shown to be associated with a variety of adverse pregnancy outcomes, including gestational hypertension, gestational diabetes, and cesarean deliveries. gestational hypertension is the most studied link, with numerous studies on snoring as well as osa showing a two- to threefold increased risk of gestational hypertension in snorers, even after adjusting for confounders such as body mass index [63]. mechanistic studies are lacking and the directionality of the association is not well clarified, but it is possible that intermittent hypoxia, flow limitation, poor sleep, and arousals may play a role in causing the endothelial dysfunction, inflammation, and hypercoagulability that are common to the two disorders. a few studies to date have also shown worse abnormalities in glucose metabolism and a higher prevalence of gestational diabetes in women complaining of loud snoring and poor sleep [58, 64]. gestational diabetes has been associated with a fivefold increase in the risk of type ii diabetes at 5 years and a ninefold risk at 9 years [65]. snoring, poor sleep, and osa have all been associated with a higher risk of unplanned cesarean deliveries. this association may be harder to explain and may depend on the actual reason leading to unplanned cesarean delivery, such as obstetric, fetal, or medical causes. the impact of sdb on fetal and neonatal outcomes has also been studied, but the results of such studies have been more conflicting.
growth restriction has been reported to be associated with snoring in some studies but not in others. the effect on apgar scores also appears to be controversial. there are some case reports and case series suggesting fetal decelerations secondary to sleep apnea, but a recent study evaluating synchronized limited sleep studies and fetal monitors failed to show a significantly higher prevalence of late decelerations [60]. once diagnosed, treatment of osa is indicated in patients with an apnea-hypopnea index (ahi) >15, or in those with an ahi >5 who have symptoms that are known to respond to therapy, such as daytime sleepiness. there are no specific guidelines on therapy initiation in pregnancy yet, for various reasons. as stated above, the natural history of the disorder around the perinatal period is not well known. thus, it is possible that, with weight loss and reversal of pregnancy physiology, the disorder may resolve or at least improve in the postpartum period. in addition, there have been no trials to date showing that treatment of osa in pregnancy improves pregnancy or fetal outcomes. this likely contributes to the fact that the disorder remains underscreened and underdiagnosed [59]. based on current data, weight loss is unlikely to be an option in pregnancy because of concern that it may affect the nutritional status of the mother and therefore fetal well-being. avoidance of alcohol and cigarette smoking is another therapeutic strategy in pregnancy that carries additional pregnancy-specific benefits. outside of pregnancy, cpap therapy has been shown to improve quality of life and daytime sleepiness, with some data suggesting improvement in cardiovascular outcomes such as hypertension. it is likely that these effects of cpap also hold true in pregnancy. observational studies have shown improvement in daytime fatigue and daytime somnolence in pregnant women with osa treated with cpap and re-titrated around midpregnancy [66].
in women with preeclampsia, small randomized trials have shown that in-laboratory positive airway pressure therapy improves hemodynamics, uric acid, and cardiac output compared to untreated women [67, 68]. until future studies of cpap therapy in pregnancy are available, indications for therapy are likely the same as in the nonpregnant population. trials evaluating the effect of pap therapy on pregnancy-specific outcomes are awaited to determine the "urgency" of starting pap therapy in pregnancy. the type of pap therapy that is most beneficial in pregnancy is unknown; however, auto-titrating pap therapy has the advantage of avoiding repeated re-titration of pressure requirements. in summary, pregnant women with the above disorders need to be managed with pregnancy physiology and the fetal effects of both the disease and the therapy in mind.

references:
- the course of asthma during pregnancy, post partum, and with successive pregnancies: a prospective analysis
- acute asthma during pregnancy
- a comprehensive analysis of adverse obstetric and pediatric complications in women with asthma
- asthma during pregnancy: a population based study
- infant and maternal outcomes in the pregnancies of asthmatic women
- obstetric complications among us women with asthma
- psychosocial variables are related to future exacerbation risk and perinatal outcomes in pregnant women with asthma
- effects of asthma severity, exacerbations and oral corticosteroids on perinatal outcomes
- spirometry is related to perinatal outcomes in pregnant women with asthma
- management of asthma in pregnancy guided by measurement of fraction of exhaled nitric oxide: a double-blind, randomised controlled trial
- predictors of gastroesophageal reflux symptoms in pregnant women screened for sleep disordered breathing: a secondary analysis
- causes of maternal mortality in the united states
- pneumonia during pregnancy
- an appraisal of treatment guidelines for antepartum community-acquired pneumonia
- epidemiology of community-acquired pneumonia in edmonton, alberta: an emergency department-based study
- pneumonia as a complication of pregnancy
- microbial aetiology of community-acquired pneumonia and its relation to severity
- etiology of community-acquired pneumonia: increased microbiological yield with new diagnostic methods
- pneumonia in pregnancy
- pneumonia and pregnancy outcomes: a nationwide population-based study
- pneumonia during pregnancy: radiological characteristics, predisposing factors and pregnancy outcomes
- acute and chronic respiratory diseases in pregnancy: associations with placental abruption
- the acute respiratory distress syndrome network. ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome
- varicella-zoster virus (chickenpox) infection in pregnancy
- varicella pneumonia in adults
- consequences of varicella and herpes zoster in pregnancy: prospective study of 1739 cases
- modification of chicken pox in family contacts by administration of gamma globulin
- second varicella infections: are they more common than previously thought?
- epidemiology of herpes zoster in children and adolescents: a population-based study
- varicella-related deaths among adults: united states
- managing varicella zoster infection in pregnancy
- treatment with acyclovir of varicella pneumonia in pregnancy
- use of acyclovir for varicella pneumonia during pregnancy
- outcome after maternal varicella infection in the first 20 weeks of pregnancy
- varicella and herpes zoster in pregnancy and the newborn
- neurodevelopmental follow-up of children of women infected with varicella during pregnancy: a prospective study
- congenital varicella syndrome: the evidence for secondary prevention with varicella-zoster immune globulin
- intrauterine infection with varicella-zoster virus after maternal varicella
- high-resolution ct findings of varicella-zoster pneumonia
- risk factors for severe illness with 2009 pandemic influenza a (h1n1) virus infection in china
- pandemic influenza a (h1n1) virus infection in postpartum women in california
- maternal morbidity and perinatal outcomes among pregnant women with respiratory hospitalizations during influenza season
- deaths from asian influenza associated with pregnancy
- pandemic 2009 influenza a (h1n1) virus illness among pregnant women in the united states
- influenza virus infection during pregnancy in the usa
- pandemic influenza a (h1n1) in pregnancy: a systematic review of the literature
- pandemic 2009 influenza a (h1n1) in 71 critically ill pregnant women in california
- influenza a/h1n1v in pregnancy: an investigation of the characteristics and management of affected women and the relationship to pregnancy outcomes for mother and infant
- novel influenza a (h1n1) virus among gravid admissions
- california pandemic working group: severe 2009 h1n1 influenza in pregnant and postpartum women in california
- antiviral agents for the treatment and chemoprophylaxis of influenza: recommendations of the advisory committee on immunization practices (acip)
- severe, critical and fatal cases of 2009 h1n1 influenza in china
- severity of 2009 pandemic influenza a (h1n1) virus infection in pregnant women
- product information: relenza(r) oral inhalation powder, zanamivir oral inhalation powder. glaxosmithkline (per fda)
- pregnancy and fetal outcomes of symptoms of sleep-disordered breathing
- patient and provider perceptions of sleep disordered breathing assessment during prenatal care: a survey-based observational study
- prospective trial on obstructive sleep apnea in pregnancy and fetal heart rate monitoring
- excessive daytime sleepiness in late pregnancy may not always be normal: results from a cross sectional study
- development of a pregnancy-specific screening tool for sleep apnea
- sleep-disordered breathing in pregnancy
- glucose intolerance and gestational diabetes risk in relation to sleep duration and snoring during pregnancy: a pilot study
- type 2 diabetes mellitus after gestational diabetes: a systematic review and meta-analysis
- pregnancy, sleep disordered breathing and treatment with nasal continuous positive airway pressure
- reduced nocturnal cardiac output associated with preeclampsia is minimized with the use of nocturnal nasal cpap
- nasal continuous positive airway pressure reduces sleep-induced blood pressure increments in preeclampsia

key: cord-016704-99v4brjf authors: nicholson, felicity title: infectious diseases: the role of the forensic physician date: 2005 journal: clinical forensic medicine doi: 10.1385/1-59259-913-3:235 sha: doc_id: 16704 cord_uid: 99v4brjf

infections have plagued doctors for centuries, in both the diagnosis of the specific diseases and the identification and subsequent management of the causative agents.
there is a constant need for information as new organisms emerge, existing ones develop resistance to current drugs or vaccines, and changes in epidemiology and prevalence occur. in the 21st century, obtaining this information has never been more important. population migration and the relatively low cost of flying means that unfamiliar infectious diseases may be brought into industrialized countries. an example of this was an outbreak of severe acute respiratory syndrome (sars), which was first recognized in 2003. despite modern technology and a huge input of money, it took months for the agent to be identified, a diagnostic test to be produced, and a strategy for disease reporting and isolation to be established. there is no doubt that other new and fascinating diseases will continue to emerge. for the forensic physician, dealing with infections presents two main problems. the first problem is managing detainees or police personnel who have contracted a disease and may be infectious or unwell.
the second problem is handling assault victims, including police officers, who have potentially been exposed to an infectious disease. the latter can be distressing for those involved, compounded, in part, by inconsistent management guidelines, if indeed any exist. with the advent of human rights legislation, increasing pressure is being placed on doctors regarding consent and the confidentiality of the detainee. therefore, it is prudent to preempt such situations before the consultation begins by obtaining either written or verbal consent from the detainee to allow certain pieces of information to be disclosed. if the detainee does not agree, then the doctor must decide whether withholding relevant details will endanger the lives or health of those working within custody or others with whom they may have had close contact (whether or not deliberate). consent and confidentiality issues are discussed in detail in chapter 2. adopting a universal approach with all detainees will decrease the risk to staff of acquiring such diseases and will help to stop unnecessary overreaction and unjustified disclosure of sensitive information. for victims of violent or sexual assault, a more open-minded approach is needed (see also chapter 3). if the assailant is known, then it may be possible to make an informed assessment of the risk of certain diseases by ascertaining his or her lifestyle. however, if the assailant is unknown, then it is wise to assume the worst. this chapter highlights the most common infections encountered by the forensic physician. it dispels "urban myths" and provides a sensible approach for achieving effective management. the risk of exposure to infections, particularly blood-borne viruses (bbvs), can be minimized by adopting measures that are considered good practice in the united kingdom, the united states, and australia (1-3).
forensic physicians and other health care professionals should wash their hands before and after contact with each detainee or victim. police officers should be encouraged to wash their hands after exposure to body fluids or excreta. all staff should wear gloves when exposure to body fluids, mucous membranes, or nonintact skin is likely. gloves should also be worn when cleaning up body fluids or handling clinical waste, including contaminated laundry. only single-use gloves should be used, and they must conform to the requirements of european standard 455 or equivalent (1-3). a synthetic alternative conforming to the same standards should also be available for those who are allergic to latex. all staff should cover any fresh wounds (<24 hours old), open skin lesions, or breaks in exposed skin with a waterproof dressing. gloves cannot prevent percutaneous injury but may reduce the chance of acquiring a blood-borne viral infection by limiting the volume of blood inoculated. gloves should only be worn when taking blood, provided this does not reduce manual dexterity and therefore increase the risk of accidental percutaneous injury. ideally, a designated person should be allocated to ensure that the clinical room is kept clean and that sharps containers and clinical waste bags are removed regularly. clinical waste must be disposed of in hazard bags, which should never be overfilled. after use, the clinical waste should be double-bagged and sealed with hazard tape. the bags should be placed in a designated waste disposal area (preferably outside the building) and removed by a professional company. when cells are contaminated with body fluids, a professional cleaning company should be called to attend as soon as possible. until such time, the cell should be deemed "out of action."
there is a legal requirement in the united kingdom under the environmental protection act (1990) and the control of substances hazardous to health regulations 1994 to dispose of sharps in an approved container. in the united states, the division of health care quality promotion on the centers for disease control and prevention (cdc) web site provides similar guidance. in custody, where sharps containers are transported off site, they must be of an approved type. in the united kingdom, such a requirement is contained within the carriage of dangerous goods (classification, packaging and labelling) and use of transportable pressure receptacles regulations 1996. these measures help to minimize the risk of accidental injury. further precautions include wearing gloves when handling sharps and never bending, breaking, or resheathing needles before disposal. sharps bins should never be overfilled, left on the floor, or placed above the eye level of the smallest member of staff. any bedding that is visibly stained with body fluids should be handled with gloves. there are only three acceptable ways of dealing with contaminated bedding.

the bbvs that present the most cross-infection hazard to staff or victims are those associated with persistent viral replication and viremia. these include hepatitis b virus (hbv), hepatitis c virus (hcv), hepatitis d virus (hdv), and hiv. in general, the risk of transmission of bbvs arises from possible exposure to blood or other body fluids. the degree of risk varies with the virus concerned and is discussed under the relevant sections. figure 1 illustrates the immediate management after a percutaneous injury, mucocutaneous exposure, or exposure through contamination of fresh cuts or breaks in the skin. hbv is endemic throughout the world, with populations showing a varying degree of prevalence. approximately two thousand million people have been infected with hbv, with more than 350 million having chronic infection. worldwide, hbv kills about 1 million people each year.
with the development of a safe and effective vaccine in 1982, the world health organization (who) recommended that hbv vaccine be incorporated into national immunization programs by 1995 in those countries with a chronic infection rate of 8% or higher, and into all countries by 1997. although 135 countries had achieved this goal by the end of 2001, the poorest countries, often the ones with the highest prevalence, have been unable to afford it. in particular, these include china, the indian subcontinent, and sub-saharan africa. people in the early stages of infection or with chronic carrier status (defined by persistence of hepatitis b surface antigen [hbsag] beyond 6 months) can transmit infection. in the united kingdom, the overall prevalence of chronic hbv is approx 0.2-0.3% (6,7). a detailed breakdown is shown in table 1. the incubation period is approx 6 weeks to 6 months. as the name suggests, the virus primarily affects the liver. typical symptoms include malaise, anorexia, nausea, mild fever, and abdominal discomfort, and may last from 2 days to 3 weeks before the insidious onset of jaundice. joint pain and skin rashes may also occur as a result of immune complex formation. infections in the newborn are usually asymptomatic. (* in the united kingdom, written consent from the contact must be sent with the sample, countersigned by the health care practitioner and, preferably, an independent police officer.) the majority of patients with acute hbv make a full recovery and develop immunity. after acute infection, approx 1 in 300 patients develop liver failure, which may result in death. chronic infection develops in approx 90% of neonates, approx 50% of children, and between 5 and 10% of adults. neonates and children are usually asymptomatic. adults may have only mild symptoms or may also be asymptomatic. approximately 15-25% of chronically infected individuals (depending on age of acquisition) will develop cirrhosis over a number of years.
this may also result in liver failure or other serious complications, including hepatocellular carcinoma, though the latter is rare. the overall mortality rate of hbv is estimated at less than 5%. a person is deemed infectious if hbsag is detected in the blood. in the acute phase of the illness, this can be as long as 6 months. by definition, if hbsag persists after this time, then the person is deemed a carrier. carriers are usually infectious for life. the degree of infectivity depends on the stage of disease and the markers present (table 2). the major routes include parenteral (e.g., needlestick injuries, bites, unscreened blood transfusions, tattooing, acupuncture, and dental procedures where equipment is inadequately sterilized), mucous membrane exposure (including mouth, eyes, and genital mucous membranes), and contamination of broken skin (especially when <24 hours old). hbv is an occupational hazard for anyone who may come into contact with blood or bloodstained body fluids through the routes described. saliva alone may transmit hbv. the saliva of some people infected with hbv contains hbv-dna concentrations 1/1000-1/10,000 of that found in their serum (8). this is especially relevant for penetrating bite wounds. infection after exposure to other body fluids (e.g., bile, urine, feces, and cerebrospinal fluid) has never been demonstrated unless the fluids are contaminated with blood. intravenous drug users who share needles or other equipment are also at risk. hbv can also be transmitted through unprotected sexual contact, whether homosexual or heterosexual. the risk is increased if blood is involved. sexual assault victims should be included in this category. evidence has shown that the virus may also be spread among members of a family through close household contact, such as kissing and sharing toothbrushes, razors, bath towels, etc. (9-11).
this route of transmission probably applies to institutionalized patients, but there are no available data. studies of prisoners in western countries have shown a higher prevalence of antibodies to hbv and other bbvs than in the general population (12-14); the most commonly reported risk factor is intravenous drug use. however, the real frequency of transmission of bbvs in british prisons is unknown owing to the difficulty in compiling reliable data. hbv can be transmitted vertically from mother to baby during the perinatal period. approximately 80% of babies born to mothers who have either acute or chronic hbv become infected, and most will develop chronic hbv. this has been limited by the administration of hbv vaccine to the neonate. in industrialized countries, all prenatal mothers are screened for hbv. vaccine is given to the neonate, ideally within the first 12 hours of birth, and at least two more doses are given at designated intervals. the who recommends this as a matter of course for all women in countries where prevalence is high. however, the practicalities of administering a vaccine that has to be stored at the correct temperature in places with limited access to medical care mean that there is a significant failure of vaccine uptake and response. in industrialized countries, hbv vaccination is recommended for those who are deemed at risk of acquiring the disease. they include the following:

1. those at risk through occupational exposure.
2. homosexual/bisexual men.
3. intravenous drug users.
4. sexual partners of people with acute or chronic hbv.
5. family members of people with acute or chronic hbv.
6. newborn babies whose mothers are infected with hbv. if the mother is hbsag positive, then hepatitis b-specific immunoglobulin (hbig) should be given at the same time as the first dose of vaccine.
7. institutionalized patients and prisoners.

ideally, hbv vaccine should be administered before exposure to the virus.
the routine schedule consists of three doses of the vaccine given at 0, 1, and 6 months. antibody levels should be checked 8-12 weeks after the last dose. if titers are greater than 10 miu/ml, then an adequate response has been achieved. in the united kingdom, this is considered to provide protection for 5-10 years. in the united states, if an initial adequate response has been achieved, then no further doses of vaccine are considered necessary. vaccine administration after exposure varies according to the timing of the incident, the degree of risk involved, and whether the individual has already been partly or fully vaccinated. an accelerated schedule, in which the third dose is given 2 months after the first dose with a booster 1 year later, is used to prevent postnatal transmission. where risks are greatest, it may be necessary to use a rapid schedule. the doses are given at 0, 7, and 21-28 days after presentation, again with a booster dose at 6-12 months. this schedule is currently only licensed with engerix b. hbig may also be used, either alone or in conjunction with vaccine. the exact dose given is age dependent but must be administered by deep intramuscular injection at a different site from the vaccine. in an adult, this is usually into the gluteus muscle. hbig is given in conjunction with the first dose of vaccine to individuals who are deemed at high risk of acquiring disease when the incident occurred within 72 hours of presentation. it is also used for neonates born to mothers who are hbeag-positive. between 5 and 10% of adults fail to respond to the routine schedule of vaccine. a further full course of vaccine should be tried before deeming a patient a "nonresponder." such individuals involved in a high-risk exposure should be given two doses of hbig administered 1 month apart. ideally, the first dose should be given within 48 hours of exposure and no later than 2 weeks after exposure.
other measures include minimizing the risk of exposure by adopting the safe working practices outlined in subheading 2. any potential exposures should be dealt with as soon as possible. in industrialized countries, blood, blood products, and organs are routinely screened for hbv. intravenous drug users should be encouraged to be vaccinated and to avoid sharing needles or any other drug paraphernalia (see subheading 6.9.2.). for staff or victims in contact with disease, it is wise to have a procedure in place for immediate management and risk evaluation. an example is shown in fig. 1. although forensic physicians are not expected to administer treatment, it is often helpful to inform the persons concerned what to expect. tables 3 and 4 outline treatment protocols as used in the united kingdom. detainees with disease can usually be managed in custody. if the detainee is bleeding, then the cell should be deemed out of action after the detainee has left, until it can be professionally cleaned. contaminated bedding should be dealt with as described in subheading 2.2. if the detainee has chronic hbv and is on an antiviral agent (e.g., lamivudine), then the treatment course should be continued, if possible. hcv is endemic in most parts of the world. approximately 3% (200 million) of the world's population is infected with hcv (15). for many countries, no reliable prevalence data exist. seroprevalence studies conducted among blood donors have shown that the highest prevalence exists in egypt (17-26%). this has been ascribed to contaminated needles used in the treatment of schistosomiasis between the 1950s and the 1980s (16). intermediate prevalence (1-5%) exists in eastern europe, the mediterranean, the middle east, the indian subcontinent, and parts of africa and asia. in western europe, most of central america, australia, and limited regions in africa, including south africa, the prevalence is low (0.2-0.5%).
previously, america was included in the low-prevalence group, but a report published in 2003 (17) indicated that almost 4 million americans (i.e., 1.8% of the population) have antibody to hcv, representing either ongoing or previous infection. it also states that hcv accounts for approx 15% of acute viral hepatitis in america. the lowest prevalence (0.01-0.1%) has been found in the united kingdom and scandinavia. however, within any country, there are certain groups that have a higher chance of carrying hcv. the united kingdom figures are given in table 5. after an incubation period of 6-8 weeks, the acute phase of the disease lasts approx 2-3 years. unlike hepatitis a (hav) or hbv, the patient is usually asymptomatic; therefore, the disease is often missed unless the individual has reported a specific exposure and is being monitored. other cases are found by chance, when raised liver enzymes are found on a routine blood test. a "silent phase" follows the acute phase, when the virus lies dormant and the liver enzymes are usually normal. this period lasts approx 10-15 years. reactivation may then occur. subsequent viral replication damages the hepatocytes, and liver enzymes rise to moderate or high levels. eighty percent of individuals who are hcv antibody-positive are infectious, regardless of the levels of their liver enzymes. approximately 80% of people develop chronic infection, one-fifth of whom progress to cirrhosis. there is a much stronger association with hepatocellular carcinoma than with hbv. an estimated 1.25-2.5% of patients with hcv-related cirrhosis develop liver cancer (18). less than 2% of chronic cases resolve spontaneously. approximately 75% of cases are parenteral (e.g., needlestick, etc.) (19). transmission through the sexual route is not common and only appears to be significant if there is repeated exposure with one or more people infected with hcv. mother-to-baby transmission is considered to be uncommon but has been reported (20).
theoretically, household spread is also possible through sharing contaminated toothbrushes or razors. because the disease is often silent, there is a need to raise awareness among the general population of how to avoid infection and to encourage high-risk groups to be tested. health care professionals should also be educated to avoid occupationally acquired infection; an example of good practice is outlined in subheading 2. blood or blood-stained body fluids need to be involved for a risk to occur. saliva alone is not deemed to be a risk. the risk from a single needlestick incident is 1.8% (range 0-7%). contact through a contaminated cut is estimated at 1%. for penetrating bite injuries, there are no data, but a bite is only considered a risk if blood is involved. blood or blood-stained body fluids have to be involved in transmission through mucous membrane exposure. this may account for the lower-than-expected prevalence among the gay population. follow the immediate management flow chart, making sure all available information is obtained. inform the designated hospital and/or specialist as soon as possible. if the contact is known and is believed to be immunocompromised, and he or she has consented to provide a blood sample, it is important to tell the specialist, because the antibody tests may be spuriously negative. in this instance, a different test should be used (polymerase chain reaction [pcr], which detects viral rna). the staff member/victim will be asked to provide a baseline sample of blood, with further samples at 4-6 weeks and again at 12 weeks. if tests are negative at 12 weeks but the risk was deemed high, then follow-up may continue for up to 24 weeks. if any of the follow-up samples is positive, then the original baseline sample will be tested to ascertain whether the infection was acquired through the particular exposure. it is important to emphasize the need for prompt initial attendance and continued monitoring, because treatment is now available.
a combination of ribavirin (an antiviral agent) and interferon α-2b (18), or the newer pegylated interferons (15), may be used. this treatment is most effective when it is started early in the course of infection. unless they are severely ill, detainees can be managed in custody. special precautions are only required if they are bleeding. custody staff should wear gloves if contact with blood is likely. contaminated bedding should be handled appropriately, and the cell cleaned professionally after use. this defective transmissible virus was discovered in 1977 and requires hbv for its own replication. it has a worldwide distribution in association with hbv, with approx 15 million people infected. the prevalence of hdv is higher in southern italy, the middle east, and parts of africa and south america, occurring in more than 20% of hbv carriers who are asymptomatic and more than 60% of those with chronic hbv-related liver disease. despite the high prevalence of hbv in china and south east asia, hdv in these countries is rare. hdv is associated with acute (coinfection) and chronic hepatitis (superinfection) and can exacerbate pre-existing liver damage caused by hbv. the routes of transmission and at-risk groups are the same as for hbv. staff/victims in contact with a putative exposure and detainees with disease should be managed as for hbv. interferon-α (e.g., roferon) can be used to treat patients with chronic hbv and hdv (21), although it would not be practical to continue this treatment in the custodial setting. hiv was first identified in 1983, 2 years after the first reports were made to the cdc in atlanta, ga, of an increased incidence of two unusual diseases (kaposi's sarcoma and pneumocystis carinii pneumonia) occurring among the gay population in san francisco. the scale of the virus gradually emerged over the years and by the end of 2002, there were an estimated 42 million people throughout the world living with hiv or acquired immunodeficiency syndrome (aids).
more than 80% of the world's cases occur in africa and india. a report by the joint united nations programme on hiv/aids and the who in 2002 stated that one in five adults in lesotho, malawi, mozambique, swaziland, zambia, and zimbabwe has hiv or aids. there is also expected to be a sharp rise in cases of hiv in china, papua new guinea, and other countries in asia and the pacific during the next few years. in the united kingdom, by the end of 2002, cumulative data recorded 54,261 individuals reported with hiv or aids (including deaths from aids), although this is likely to be an underestimate (22). from these data, the group still considered at greatest risk of acquiring hiv in the united kingdom is homosexual/bisexual men, with 28,835 of the cumulative total falling into this category. among intravenous drug users, the overall estimated prevalence is 1%, but in london the figure is higher at 3.7% (6, 23). in the 1980s, up to 90% of users in edinburgh and dundee were reported to be hiv positive, but the majority have now died. individuals arriving from africa or the indian subcontinent must also be deemed a risk group because 80% of the world's total cases occur in these areas. the predominant mode of transmission is through unprotected heterosexual intercourse. the incidence of mother-to-baby transmission has been estimated at 15% in europe and approx 45% in africa. the transmission rates among african women are believed to be much higher owing to a combination of more women with end-stage disease with a higher viral load and concomitant placental infection, which renders it more permeable to the virus (24, 25). the use of antiretroviral therapy during pregnancy, together with the advice to avoid breastfeeding, has proven efficacious in reducing both vertical and horizontal transmission among hiv-positive women in the western world. for those in third-world countries, the reality is stark.
access to treatment is limited, and there is no realistic substitute for breast milk, which provides a valuable source of antibodies to other life-threatening infections. patients receiving blood transfusions, organs, or blood products where screening is not routinely carried out must also be included. the incubation period is estimated at 2 weeks to 6 months after exposure. this depends, to some extent, on the ability of current laboratory tests to detect hiv antibodies or viral antigen. the development of pcr for viral rna has improved sensitivity. during the acute phase of the infection, approx 50% experience a seroconversion "flu-like" illness. the individual is infectious at this time, because viral antigen (p24) is present in the blood. as antibodies start to form, the viral antigen disappears and the individual enters the latent phase. he or she is noninfectious and remains well for a variable period of time (7-15 years). development of aids marks the terminal phase of disease. viral antigen reemerges, and the individual is once again infectious. the onset of aids has been considerably delayed with the use of antiretroviral treatment. parenteral transmission includes needlestick injuries, bites, unscreened blood transfusions, tattooing, acupuncture, and dental procedures where equipment is inadequately sterilized. risk of transmission is increased with deep penetrating injuries with hollow bore needles that are visibly bloodstained, especially when the device has previously been in the source patient's (contact's) artery or vein. other routes include mucous membrane exposure (eyes, mouth, and genital mucous membranes) and contamination of broken skin. the higher the viral load in the contact, the greater the risk of transmission. this is more likely at the terminal stage of infection. hiv is transmitted mainly through blood or other body fluids that are visibly bloodstained; the exceptions are semen, vaginal fluid, and breast milk, which carry a risk even without visible blood.
saliva alone is most unlikely to transmit infection. therefore, people who have sustained penetrating bite injuries can be reassured that they are not at risk, providing the contact was not bleeding from the mouth at the time. the risk from a single percutaneous exposure from a hollow bore needle is low, and a single mucocutaneous exposure is even less likely to result in infection. the risk from sexual exposure varies, although it appears that there is a greater risk with receptive anal intercourse compared with receptive vaginal intercourse (26). high-risk fluids include blood, semen, vaginal fluid, and breast milk. there is little or no risk from saliva, urine, vomit, or feces unless they are visibly bloodstained. other fluids that constitute a theoretical risk include cerebrospinal, peritoneal, pleural, synovial, or pericardial fluid. management in custody of staff/victims in contact with disease includes following the immediate management flow chart (fig. 1) and contacting the designated hospital/specialist with details of the exposure. where possible, obtain a blood sample from the contact. as with hbv and hcv, blood samples in the united kingdom can only be taken with informed consent. there is no need for the forensic physician to go into details about the meaning of the test, but the contact should be encouraged to attend the genitourinary department (or similar) of the designated hospital to discuss the test results. should the contact refuse to provide a blood sample, then any information about his or her lifestyle, ethnic origin, state of health, etc., may be useful for the specialist to decide whether postexposure prophylaxis (pep) should be given to the victim. where only saliva is involved in a penetrating bite injury, there is every justification to reassure the victim that he or she is not at risk. if in doubt, then always refer.
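the three risk tiers above can be captured in a small lookup; a minimal sketch, where the tier labels and function name are assumptions made for this example, not part of any guideline:

```python
# illustrative classifier for the body-fluid risk tiers described above.
# the tier labels are this example's own; any visibly bloodstained fluid
# is treated as high risk, per the text. not a clinical tool.
HIGH_RISK = {"blood", "semen", "vaginal fluid", "breast milk"}
THEORETICAL_RISK = {"cerebrospinal fluid", "peritoneal fluid", "pleural fluid",
                    "synovial fluid", "pericardial fluid"}

def fluid_risk(fluid: str, visibly_bloodstained: bool = False) -> str:
    """return the risk tier for a body fluid, per the text's scheme."""
    if visibly_bloodstained or fluid in HIGH_RISK:
        return "high"
    if fluid in THEORETICAL_RISK:
        return "theoretical"
    return "little or none"
```

for example, urine alone classifies as "little or none", but visibly bloodstained urine classifies as "high", matching the rule in the text.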
in the united kingdom, the current recommended regimen for pep is combivir (300 mg of zidovudine twice daily plus 150 mg of lamivudine twice daily) and a protease inhibitor (1250 mg of nelfinavir twice daily) given for 4 weeks (27). it is only given after a significant exposure to a high-risk fluid or any that is visibly bloodstained and the contact is known or is highly likely to be hiv positive. ideally, treatment should be started within an hour after exposure, although it will be considered for up to 2 weeks. it is usually given for 4 weeks, unless the contact is subsequently identified as hiv negative or the "victim" develops intolerance or toxicity. weekly examinations of the "victim" should occur during treatment to improve adherence, monitor drug toxicity, and deal with other concerns. other useful information that may influence the decision whether to treat with the standard regimen or use alternative drugs includes interaction with other medications that the "victim" may be taking (e.g., phenytoin or antibiotics) or if the contact has been on antiretroviral therapy or if the "victim" is pregnant. during the second or third trimester, only combivir would be used, because there is limited experience with protease inhibitors. no data exist regarding the efficacy of pep beyond occupational exposure (27). pep is not considered for exposure to low- or no-risk fluids through any route or where the source is unknown (e.g., a discarded needle). despite the appropriate use and timing of pep, there have been reports of failure (28, 29). unless they are severely ill, detainees can be kept in custody. every effort should be made to continue any treatment they may be receiving. apply universal precautions when dealing with the detainee, and ensure that contaminated cells and/or bedding are managed appropriately. cases of this highly infectious disease occur throughout the year but are more frequent in winter and early spring.
this seasonal endemicity is blurring with global warming. in the united kingdom, the highest prevalence occurs in the 4- to 10-year age group. ninety percent of the population over the age of 40 is immune (30). a similar prevalence has been reported in other parts of western europe and the united states. in south east asia, varicella is mainly a disease of adulthood (31). therefore, people born in these countries who have moved to the united kingdom are more likely to be susceptible to chicken pox. there is a strong correlation between a history of chicken pox and serological immunity (97-99%). most adults born and living in industrialized countries with an uncertain or negative history of chicken pox are also seropositive (70-90%). in march 1995, a live-attenuated vaccine was licensed for use in the united states and a policy for vaccinating children and susceptible health care personnel was introduced. in summer 2002, in the united kingdom, glaxosmithkline launched a live-attenuated vaccine called varilrix. in december 2003, the uk department of health, following advice from the joint committee on vaccination and immunisation, recommended that the vaccine be given to nonimmune health care workers who are likely to have direct contact with individuals with chicken pox. any health care worker with no previous history of chicken pox should be screened for immunity, and if no antibodies are found, then they should receive two doses of vaccine 4-8 weeks apart. the vaccine is not currently recommended for children and should not be given during pregnancy. following an incubation period of 10-21 days (this may be shorter in the immunocompromised), there is usually a prodromal "flu-like" illness before the onset of the rash. this coryzal phase is more likely in adults. the lesions typically appear in crops, rapidly progressing from red papules through vesicles to open sores that crust over and separate by 10 days.
the distribution of the rash is centripetal (i.e., more over the trunk and face than on the limbs). this is the converse of smallpox. in adults, the disease is often more severe, with lesions involving the scalp and mucous membranes of the oropharynx. in children, the disease is often mild, and they are unlikely to experience complications unless they are immunocompromised. in adults (defined as 15 yr or older), the picture is rather different (32). secondary bacterial infection is common but rarely serious. there is an increased likelihood of permanent scarring. hemorrhagic chicken pox typically occurs on the second or third day of the rash. usually, this is limited to bleeding into the skin, but life-threatening melena, epistaxis, or hematuria can occur. varicella pneumonia ranges from patchy lung consolidation to overt pneumonitis and occurs in 1 in 400 cases (33). it can occur in previously healthy individuals (particularly adults), but the risk is increased in those who smoke. immunocompromised people are at the greatest risk of developing this complication. it runs a fulminating course and is the most common cause of varicella-associated death. fibrosis and permanent respiratory impairment may occur in those who survive. any suspicion of lung involvement is an indication for immediate treatment, and any detainee or staff member should be sent to hospital. involvement of the central nervous system includes several conditions, including meningitis, guillain-barré syndrome, and encephalitis. the latter is more common in the immunocompromised and can be fatal. the infectious period is taken as from 3 days before the first lesions appear to the end of new vesicle formation, when the last vesicle has crusted over. this typically is 5-7 days after onset but may last up to 14 days. the primary route of transmission is through direct contact with open lesions of chicken pox. however, it is also spread through aerosol or droplets from the respiratory tract.
chicken pox may also be acquired through contact with open lesions of shingles (varicella zoster), but this is less likely because shingles is less infectious than chicken pox. nonimmune individuals are at risk of acquiring disease. approximately 10% of the adult population born in the united kingdom and less than 5% of adults in the united states fall into this category. therefore, it is more likely that if chicken pox is encountered in the custodial setting, it will involve people born outside the united kingdom (particularly south east asia) or individuals who are immunocompromised and have lost immunity. nonimmune pregnant women are at risk of developing complications. pneumonia can occur in up to 10% of pregnant women with chicken pox, and the severity is increased in later gestation (34). they can also transmit infection to the unborn baby (35). if infection is acquired in the first 20 weeks, there is a less than 3% chance of it leading to congenital varicella syndrome. infection in the last trimester can lead to neonatal varicella, unless more than 7 days elapse between onset of maternal rash and delivery, when antibodies have time to cross the placenta, leading to mild or inapparent infection in the newborn. otherwise, varicella immunoglobulin (vzig) should be administered to the baby as soon as possible after birth (36). staff with chicken pox should stay off work until the end of the infective period (approx 7-14 days). those in contact with disease who are known to be nonimmune or who have no history of disease should contact the designated occupational health physician. detainees with the disease should not be kept in custody if at all possible (especially pregnant women). if this is unavoidable, then nonimmune or immunocompromised staff should avoid entering the cell or having close contact with the detainee.
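the 7-day rule for neonatal varicella can be expressed as a simple check; a minimal sketch (the names are invented for this example, and the decision itself belongs to a specialist):

```python
from datetime import date

# illustrative check of the rule above: when 7 days or fewer elapse between
# onset of maternal rash and delivery, maternal antibodies have no time to
# cross the placenta and vzig for the baby is indicated. not clinical software.
def neonatal_vzig_indicated(maternal_rash_onset: date, delivery: date) -> bool:
    return (delivery - maternal_rash_onset).days <= 7
```

for example, delivery 4 days after rash onset would indicate vzig, whereas delivery 9 days after onset would not, per the rule in the text.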
nonimmune, immunocompromised, or pregnant individuals exposed to chickenpox should seek expert medical advice regarding the administration of vzig. aciclovir (or a similar antiviral agent) should be given as soon as possible to people who are immunocompromised with chicken pox. it should also be considered for anyone over 15 years old because they are more likely to develop complications. anyone suspected of severe complications should be sent straight to the hospital. after chicken pox, the virus lies dormant in the dorsal root or cranial nerve ganglia but may re-emerge and typically involves one dermatome (37). the site of involvement depends on the sensory ganglion initially involved. shingles is more common in individuals over the age of 50 years, except in the immunocompromised, when attacks can occur at an earlier age. the latter are also more susceptible to secondary attacks and involvement of more than one dermatome. bilateral zoster is even rarer but is not associated with a higher mortality. in the united kingdom, there is an estimated incidence of 1.2-3.4 per 1000 person-years (38). there may be a prodromal period of paraesthesia and burning or shooting pains in the involved segment. this is usually followed by the appearance of a band of vesicles. rarely, the vesicles fail to appear and only pain is experienced. this is known as zoster sine herpete. in individuals who are immunocompromised, disease may be prolonged and dissemination may occur but is rarely fatal. shingles in pregnancy is usually mild. the fetus is only affected if viremia occurs before maternal antibody has had time to cross the placenta. the most common complication of shingles is postherpetic neuralgia, occurring in approx 10% of cases. it is defined as pain lasting more than 120 days from rash onset (39). it is more frequent in people over 50 years and can lead to depression. it is rare in children, including those who are immunocompromised.
infection of the brain includes encephalitis, involvement of motor neurones leading to ptosis, paralysis of the hand, facial palsy, or contralateral hemiparesis. involvement of the oculomotor division of the trigeminal ganglion can cause serious eye problems, including corneal scarring. shingles is far less infectious than chicken pox and is only considered to be infectious up to 3 days after lesions appear. shingles is only infectious after prolonged contact with lesions. unlike chickenpox, airborne transmission is not a risk. individuals who are immunocompromised may reactivate the dormant virus and develop shingles. people who have not had primary varicella are at risk of developing chickenpox after prolonged direct contact with shingles. despite popular belief, it is untrue that people who are immunocompetent who have had chicken pox develop shingles when in contact with either chicken pox or shingles. such occurrences are merely coincidental, unless immunity is lowered. staff with shingles should stay off work until the lesions are healed, unless they can be covered. staff who have had chickenpox are immune (including pregnant women) and are therefore not at risk. if they are nonimmune (usually accepted as those without a history of chicken pox), they should avoid prolonged contact with detainees with shingles. pregnant nonimmune women should avoid contact altogether. detainees with the disease may be kept in custody, and any exposed lesions should be covered. it is well documented that prompt treatment attenuates the severity of the disease, reduces the duration of viral shedding, hastens lesion healing, and reduces the severity and duration of pain. it also reduces the likelihood of developing postherpetic neuralgia (40). prompt treatment with famciclovir (e.g., 500 mg three times a day for 7 days) should be initiated if the onset is 3 days or less. it should also be considered after this time if the detainee is over age 50 years.
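the treatment rule above reduces to two conditions; a minimal sketch, with a function name invented for this example:

```python
# illustrative restatement of the rule above: start famciclovir if onset is
# 3 days or less; consider it after that time when the detainee is over 50.
# not a prescribing aid.
def famciclovir_indicated(days_since_onset: int, age_years: int) -> bool:
    return days_since_onset <= 3 or age_years > 50
```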
pregnant detainees with shingles can be reassured that there is minimal risk for both the mother and the unborn child. expert advice should be sought before initiating treatment for the mother. this tiny parasitic mite (sarcoptes scabiei) has infested humans for more than 2500 years. experts estimate that in excess of 300 million cases occur worldwide each year. the female mite burrows into the skin, especially around the hands, feet, and male genitalia, in approx 2.5 min. eggs are laid and hatch into larvae that travel to the skin surface as newly developed mites. the mite causes intense itching, which is often worse at night and is aggravated by heat and moisture. the irritation spreads outside the original point of infection resulting from an allergic reaction to mite feces. this irritation may persist for approx 2 weeks after treatment but can be alleviated by antihistamines. crusted scabies is a far more severe form of the disease. large areas of the body may be involved. the crusts hide thousands of live mites and eggs, making them difficult to treat. this so-called norwegian scabies is more common in the elderly or the immunocompromised, especially those with hiv. after a primary exposure, it takes approx 2-6 weeks before the onset of itching. however, further exposures reduce the incubation time to approx 1-4 days. without treatment, the period of infectivity is assumed to be indefinite. with treatment, the person should be considered infectious until the mites and eggs are destroyed, usually 7-10 days. crusted scabies is highly infectious. because transmission is through direct skin-to-skin contact with an infected individual, gloves should be worn when dealing with individuals suspected of infestation. usually prolonged contact is needed, unless the person has crusted scabies, where transmission occurs more easily. the risk of transmission is much greater in households where repeated or prolonged contact is likely.
because mites can survive in bedding or clothing for up to 24 hours, gloves should also be worn when handling these items. bedding should be treated using one of the methods in subheading 2.2. professional cleaning of the cell is only warranted in cases of crusted scabies. the preferred treatment for scabies is either permethrin cream (5%) or aqueous malathion (0.5%) (41). either treatment has to be applied to the whole body and should be left on for at least 8 hours in the case of permethrin and 24 hours for malathion before washing off. lindane is no longer considered the treatment of choice, because there may be complications in pregnancy (42). treatment in custody may not be practical but should be considered when the detainee is believed to have norwegian scabies. like scabies, head lice occur worldwide and are found in the hair close to the scalp. the eggs, or nits, cling to the hair and are difficult to remove, but they are not harmful. if you see nits, then you can be sure that lice are also present. the latter are best seen when the hair is wet. the lice bite the scalp and suck blood, causing intense irritation and itching. head lice can only be passed from direct hair-to-hair contact. it is only necessary to wear gloves when examining the head for whatever reason. the cell does not need to be cleaned after use, because the lice live on or near skin. bedding may be contaminated with shed skin, so it should be handled with gloves and laundered or incinerated. the presence of live lice is an indication for treatment by either physical removal with a comb or the application of an insecticide. the latter may be more practical in custody. treatment using 0.5% aqueous malathion should be applied to dry hair and washed off after 12 hours. the hair should then be shampooed as normal. crabs or body lice are more commonly found in the pubic, axillary, chest, and leg hair. however, eyelashes and eyebrows may also be involved.
they are associated with people who do not bathe or change clothes regularly. the person usually complains of intense itching or irritation. the main route is from person to person by direct contact, but eggs can stick to fibers, so clothing and bedding should be handled with care (see subheading 6.5.3.). staff should always wear gloves if they are likely to come into contact with any hirsute body part. clothing or bedding should be handled with gloves and either laundered or incinerated. treatment of a detainee in custody is good in theory but probably impractical because the whole body has to be treated. fleas lay eggs on floors, carpets, and bedding. in the united kingdom, most flea bites come from cats or dogs. the eggs and larvae can survive for months and are reactivated in response to animal or human activity. because animal fleas jump off humans after biting, most detainees with flea bites will not have fleas, unless they are human fleas. treatment is only necessary if fleas are seen. after use, the cell should be vacuumed and cleaned with a proprietary insecticide. any bedding should be removed wearing gloves, bagged, and either laundered or incinerated. bedbugs live and lay eggs on walls, floors, furniture, and bedding. if you look carefully, fecal tracks may be seen on hard surfaces. if they are present for long enough, they emit a distinct odor. bedbugs are rarely found on the person but may be brought in on clothing or other personal effects. bedbugs bite at night and can cause sleep disturbance. the detainee does not need to be treated, but the cell should be deemed out of use until it can be vacuumed and professionally cleaned with an insecticide solution. any bedding or clothing should be handled with gloves and disposed of as appropriate. staphylococcus aureus is commonly carried on the skin or in the nose of healthy people. approximately 25-30% of the population is colonized with the bacteria but remains well (43).
from time to time, the bacteria cause minor skin infections that usually do not require antibiotic treatment. however, more serious problems can occur (e.g., infection of surgical wounds, drug injection sites, osteomyelitis, pneumonia, or septicemia). during the last 50 years, the bacteria have become increasingly resistant to penicillin-based antibiotics (44), and in the last 20 years, they have become resistant to an increasing number of alternative antibiotics. these multiresistant bacteria are known as methicillin-resistant s. aureus (mrsa). mrsa is prevalent worldwide. like nonresistant staphylococci, it may remain undetected as a reservoir in colonized individuals but can also produce clinical disease. it is more common in individuals who are elderly, debilitated, or immunocompromised or those with open wounds. clusters of skin infections with mrsa have been reported among injecting drug users (idus) since 1981 in america (45, 46), and more recently, similar strains have been found in the united kingdom in idus in the community (47). this may have particular relevance for the forensic physician when dealing with the sores of idus. people who are immunocompetent rarely get mrsa and should not be considered at risk. the bacteria are usually spread via the hands of staff after contact with colonized or infected detainees or devices, items (e.g., bedding, towels, and soiled dressings), or environmental surfaces that have been contaminated with mrsa-containing body fluids. with either known or suspected cases (consider all abscesses/ulcers of idus as infectious), standard precautions should be applied. staff should wear gloves when touching mucous membranes, nonintact skin, blood or other body fluids, or any items that could be contaminated. they should also be encouraged to wash their hands with an antimicrobial agent regardless of whether gloves have been worn. after use, gloves should be disposed of in a yellow hazard bag and not allowed to touch surfaces.
masks and gowns should only be worn when conducting procedures that generate aerosols of blood or other body fluids. because this is an unlikely scenario in the custodial setting, masks and gowns should not be necessary. gloves should be worn when handling bedding or clothing, and all items should be disposed of appropriately. any open wounds should be covered as soon as possible. the cell should be cleaned professionally after use if there is any risk that it has been contaminated. during the last decade, there has been an increasing awareness of the bacterial flora colonizing injection sites that may potentially lead to life-threatening infection (48). in 1997, a sudden increase in needle abscesses caused by a clonal strain of group a streptococcus was reported among hospitalized idus in berne, switzerland (49). a recent uk study showed that the predominant isolate is s. aureus, with streptococcus species forming just under one-fifth (50% β-hemolytic streptococci) (50). there have also been reports of both nonsporing and sporing anerobes (e.g., bacteroides and clostridia species, including clostridium botulinum) (51, 52). in particular, in 2000, laboratories in glasgow were reporting isolates of clostridium novyi among idus with serious unexplained illness. by june 12, 2000, a total of 42 cases (18 definite and 24 probable) had been reported. a definite case was defined as an idu with both severe local and systemic inflammatory reactions. a probable case was defined as an idu who presented to the hospital with an abscess or other significant inflammation at an injecting site and had either a severe inflammatory process at or around an injection site or a severe systemic reaction with multiorgan failure and a high white cell count (53). in the united kingdom, the presence of c. botulinum in infected injection sites is a relatively new phenomenon. until the end of 1999, there were no cases reported to the public health laboratory service.
since then, the number has increased, with a total of 13 cases in the united kingdom and ireland being reported since the beginning of 2002. it is believed that these cases are associated with contaminated batches of heroin. simultaneous injection of cocaine increases the risk by encouraging anerobic conditions. anerobic flora in wounds may have serious consequences for the detainee, but the risk of transmission to staff is virtually nonexistent. staff should be reminded to wear gloves when coming into contact with detainees with infected skin sites exuding pus or serum and that any old dressings found in the cell should be disposed of into the yellow bag marked "clinical waste" in the medical room. likewise, any bedding should be bagged and laundered or incinerated after use. the cell should be deemed out of use and professionally cleaned after the detainee has gone. the health care professional managing the detainee should clean and dress open wounds as soon as possible to prevent the spread of infection. it may also be appropriate to start a course of antibiotics if there is abscess formation or signs of cellulitis and/or the detainee is systemically unwell. however, infections can often be low grade because the skin, venous, and lymphatic systems have been damaged by repeated penetration of the skin. in these cases, signs include lymphedema, swollen lymph glands, and darkly pigmented skin over the area. fever may or may not be present, but septicemia is uncommon unless the individual is immunocompromised (e.g., hiv positive). co-amoxiclav is the treatment of choice because it covers the majority of staphylococci, streptococci, and anerobes (the dose depends on the degree of infection). necrotizing fasciitis and septic thrombophlebitis are rare but life-threatening complications of intravenous drug use. any detainee suspected of either of these needs hospital treatment. advice about harm reduction should also be given.
this includes encouraging drug users to smoke rather than inject, or at least advising them to avoid injecting into muscle or skin. although most idus are aware of the risk of sharing needles, they may not realize that sharing any drug paraphernalia could be hazardous. advice should be given to use the minimum amount of citric acid to dissolve the heroin, because the acid can damage the tissue under the skin, allowing bacteria to flourish. drugs should be injected at different sites using fresh works for each injection. this is particularly important when "speedballing," because crack cocaine creates an anerobic environment. medical help should be requested if any injection site becomes painful and swollen or shows signs of pus collecting under the skin. because intravenous drug users are at increased risk of acquiring hbv and hav, they should be informed that vaccination against both diseases is advisable. another serious but relatively rare problem is the risk from broken needles in veins. a broken needle may embolize over a period of hours to days, or even longer if it is not removed. complications may include endocarditis, pericarditis, or pulmonary abscesses (54, 55). idus should be advised to seek medical help as soon as possible, and should such a case present in custody, then send the detainee straight to the hospital. the forensic physician may encounter bites in the following four circumstances: a detailed forensic examination of bites is given in chapter 4. with any bite that has penetrated the skin, the goals of therapy are to minimize soft tissue deformity and to prevent or treat infection. in the united kingdom and the united states, dog bites represent approximately three-quarters of all bites presenting to accident and emergency departments (56). a single dog bite can produce up to 220 psi of crush force in addition to the torsional forces as the dog shakes its head. this can result in massive tissue damage.
human bites may cause classical bites or puncture wounds (e.g., impact of fists on teeth) resulting in crush injuries. an estimated 10-30% of dog bites and 9-50% of human bites lead to infection, compared with an estimated 1-12% of nonbite wounds managed in accident and emergency departments. the risk of infection is increased with puncture wounds, hand injuries, full-thickness wounds, wounds requiring debridement, and those involving joints, tendons, ligaments, or fractures. comorbid medical conditions, such as diabetes, asplenia, chronic edema of the area, liver dysfunction, the presence of a prosthetic valve or joint, and an immunocompromised state may also increase the risk of infection. infection may spread beyond the initial site, leading to septic arthritis, osteomyelitis, endocarditis, peritonitis, septicemia, and meningitis. inflammation of the tendons or synovial lining of joints may also occur. if enough force is used, bones may be fractured or the wounds may be permanently disfiguring. assessment regarding whether hospital treatment is necessary should be made as soon as possible. always refer if the wound is bleeding heavily or bleeding fails to stop when pressure is applied. penetrating bites involving arteries, nerves, muscles, tendons, the hands, or feet; moderate to serious facial wounds; and crush injuries also require immediate referral. if management within custody is appropriate, ask about current tetanus vaccine status, hbv vaccination status, and known allergies to antibiotics. wounds that have breached the skin should be irrigated with 0.9% (isotonic) sodium chloride or ringer's lactate solution instead of antiseptics, because the latter may delay wound healing. a full forensic documentation of the bite should be made as detailed in chapter 4. note if there are clinical signs of infection, such as erythema, edema, cellulitis, purulent discharge, or regional lymphadenopathy. cover the wound with a sterile, nonadhesive dressing.
wound closure is not generally recommended because data suggest that it may increase the risk of infection. this is particularly relevant for nonfacial wounds, deep puncture wounds, bites to the hand, clinically infected wounds, and wounds occurring more than 6-12 hours before presentation. head and neck wounds in cosmetically important areas may be closed if less than 12 hours old and not obviously infected.
• dog bites-pasteurella canis, pasteurella multocida, s. aureus, other staphylococci, streptococcus species, eikenella corrodens, corynebacterium species, and anaerobes, including bacteroides fragilis and clostridium tetani.
• human bites-streptococcus species, s. aureus, e. corrodens, and anaerobes, including bacteroides (often penicillin resistant), peptostreptococci species, and c. tetani. tuberculosis (tb) and syphilis may also be transmitted.
• dog bites-outside of the united kingdom, australia, and new zealand, rabies should be considered. in the united states, domestic dogs are mostly vaccinated against rabies (57), and police dogs have to be vaccinated, so the most common source is from raccoons, skunks, and bats.
• human bites-hbv, hcv, hiv, and herpes simplex.
antibiotics are not generally needed if the wound is more than 2 days old and there is no sign of infection, or in superficial noninfected wounds evaluated early that can be left open to heal by secondary intention in compliant people with no significant comorbidity (58). antibiotics should be considered with high-risk wounds that involve the hands, feet, face, tendons, ligaments, or joints, with suspected fractures, or for any penetrating bite injury in a person with diabetes, asplenia, or cirrhosis or who is immunosuppressed. co-amoxiclav (amoxycillin and clavulanic acid) is the first-line treatment for mild-to-moderate dog or human bites resulting in infections managed in primary care.
for adults, the recommended dose is 500/125 mg three times daily, and for children the recommended dose is 40 mg/kg three times daily (based on the amoxycillin component). treatment should be continued for 10-14 days. it is also the first-line drug for prophylaxis, when the same dose regimen should be prescribed for 5-7 days. if the individual is known or suspected to be allergic to penicillin, a tetracycline (e.g., doxycycline 100 mg twice daily) plus metronidazole (500 mg three times daily) or a macrolide (e.g., erythromycin) plus metronidazole can be used. in the united kingdom, doxycycline use is restricted to those older than 12 years and in the united states to those older than 8 years. specialist advice should be sought for pregnant women. anyone with severe infection or who is clinically unwell should be referred to the hospital. tetanus vaccine should be given if the primary course or last booster was more than 10 years ago. human tetanus immunoglobulin should be considered for tetanus-prone wounds (e.g., soil contamination, puncture wounds, or signs of devitalized tissue) or for wounds sustained more than 6 hours previously. if the person has never been immunized or is unsure of his or her tetanus status, a full three-dose course, spaced at least 1 month apart, should be given. penetrating bite wounds that involve only saliva may present a risk of hbv if the perpetrator belongs to a high-risk group. for management, see subheadings 5.1.6. and 5.1.7. hcv and hiv are only a risk if blood is involved. the relevant management is dealt with in subheadings 5.2.5. and 5.4.6. respiratory tract infections are common, usually mild, and self-limiting, although they may require symptomatic treatment with paracetamol or a nonsteroidal anti-inflammatory. these include the common cold (80% rhinoviruses and 20% coronaviruses), adenoviruses, influenza, parainfluenza, and, during the summer and early autumn, enteroviruses.
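the weight-based regimen for children can be sketched as a quick calculation. this is an illustrative sketch only, not clinical guidance: the 40 mg/kg figure is applied per dose, three times daily, exactly as the text states it, and the function name is invented.

```python
def coamoxiclav_child_doses_mg(weight_kg: float) -> dict:
    """sketch of the pediatric co-amoxiclav regimen quoted in the text:
    40 mg/kg (amoxycillin component) per dose, three times daily,
    treatment continued for 10-14 days. illustrative only."""
    per_dose = 40 * weight_kg   # mg of amoxycillin per dose
    daily = per_dose * 3        # three doses per day
    return {
        "per_dose_mg": per_dose,
        "daily_mg": daily,
        # total over the 10-14 day treatment course quoted in the text
        "course_total_mg": (daily * 10, daily * 14),
    }

# example: a 15-kg child
print(coamoxiclav_child_doses_mg(15))
```

for the 5-7 day prophylaxis course mentioned in the text, the same per-dose figure would simply be multiplied over the shorter range.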
special attention should be given to detainees with asthma or those who are immunocompromised, because infection in these people may be more serious, particularly if the lower respiratory tract is involved. the following section includes respiratory pathogens of special note because they may pose a risk to the detainee and/or staff who come into close contact. there are five serogroups of neisseria meningitidis: a, b, c, w135, and y. the prevalence of the different types varies from country to country. there is currently no available vaccine against type b, but three other vaccines (a+c, c, and acwy) are available. overall, 10% of the uk population carry n. meningitidis (25% in the 15-19 age group) (59). in the united kingdom, most cases of meningitis are sporadic, with less than 5% occurring as clusters (outbreaks) among school children. between 1996 and 2000, 59% of cases were group b, 36% were group c, and w135 and a accounted for 5%. there is a seasonal variation, with a high level of cases in winter and a low level in the summer. the group at greatest risk is children younger than 5 years, with a peak incidence under 1 year of age. a secondary peak occurs in the 15- to 19-year-old age group. in sub-saharan africa, the disease is more prevalent in the dry season, but in many countries there is background endemicity year-round. the most prevalent serogroup is a. routine vaccination against group c was introduced in the united kingdom in november 1999 for everybody up to the age of 18 years and for all first-year university students. this has since been extended to include everyone under the age of 25 years. as a result of the introduction of the vaccination program, there has been a 90% reduction of group c cases in those younger than 18 years and an 82% reduction in those under 1 year old (60,61). an outbreak of serogroup w135 meningitis occurred among pilgrims on the hajj in 2000. cases were reported from many countries, including the united kingdom.
in the united kingdom, there is now an official requirement to be vaccinated with the quadrivalent vaccine (acwy vax) before going on a pilgrimage (hajj or umra), but illegal immigrants who have not been vaccinated may enter the country (62). after an incubation period of 3-5 days (63,64), disease onset may be either insidious with mild prodromal symptoms or florid. early symptoms and signs include malaise, fever, and vomiting. severe headache, neck stiffness, photophobia, drowsiness, and a rash may develop. the rash may be petechial or purpuric and characteristically does not blanch under pressure. meningitis in infants is more likely to be insidious in onset and lack the classical signs. in approximately 15-20% of cases, septicemia is the predominant feature. even with prompt antibiotic treatment, the case fatality rate is 3-5% in meningitis and 15-20% in those with septicemia (65). a person should be considered infectious until the bacteria are no longer present in nasal discharge; with treatment, this is usually approximately 24 hours. the disease is spread through infected droplets or direct contact from carriers or those who are clinically ill. it requires prolonged and close contact, so the risk is greater for people who share accommodation or utensils, or who kiss. it must also be remembered that unprotected mouth-to-mouth resuscitation can transmit disease. it is not possible to tell whether a detainee is a carrier. nevertheless, the risk of acquiring infection even from an infected and sick individual is low, unless the staff member has carried out mouth-to-mouth resuscitation. any staff member who believes he or she has been placed at risk should report to the occupational health department (or equivalent) or the nearest emergency department at the earliest opportunity for vaccination. a staff member who has performed mouth-to-mouth resuscitation should be given prophylactic antibiotics before vaccination.
rifampicin, ciprofloxacin, and ceftriaxone can be used; however, ciprofloxacin has numerous advantages (66). only a single dose of 500 mg (for adults and children older than 12 years) is needed, and it has fewer side effects and contraindications than rifampicin. ceftriaxone has to be given by injection and is therefore best avoided in the custodial setting. if the staff member is pregnant, advice should be sought from a consultant obstetrician, because ciprofloxacin is not recommended (67). anyone dealing regularly with illegal immigrants (especially from the middle east or sub-saharan africa), such as immigration services, custody staff at designated stations, medical personnel, and interpreters, should consider being vaccinated with acwy vax. a single injection provides protection for 3 years. detainees suspected of disease should be sent directly to the hospital. human tb is caused by infection with mycobacterium tuberculosis, mycobacterium bovis, or mycobacterium africanum. it is a notifiable disease under legislation specific to individual countries; for example, in the united kingdom, this comes under the public health (control of disease) act of 1984. in 1993, the who declared tb to be a global emergency, with an estimated 7-8 million new cases and 3 million deaths occurring each year, the majority of which were in asia and africa. however, these statistics are likely to be an underestimate because they depend on the accuracy of reporting, and in poorer countries the surveillance systems are often inadequate because of lack of funds. even in the united kingdom, there has been inconsistency of reporting, particularly where an individual has concomitant infection with hiv. some physicians found themselves caught in a dilemma of confidentiality until 1997, when the codes of practice were updated to encourage reporting with patient consent (68).
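the chemoprophylaxis choices described above can be summarized as a small decision sketch. the helper is hypothetical and the output strings are paraphrases of the text, not clinical guidance; the agents, dose, and age threshold are those the text names.

```python
def meningococcal_chemoprophylaxis(age_years: float, pregnant: bool = False) -> str:
    """sketch of the options discussed in the text: single-dose
    ciprofloxacin for adults and children older than 12 years,
    rifampicin as the main alternative, specialist obstetric advice
    in pregnancy (ciprofloxacin not recommended), and ceftriaxone
    (injectable) best avoided in the custodial setting."""
    if pregnant:
        return "seek consultant obstetrician advice (ciprofloxacin not recommended)"
    if age_years > 12:
        return "ciprofloxacin 500 mg as a single oral dose"
    return "rifampicin (the single-dose ciprofloxacin regimen applies only over 12 years)"
```

the point of the sketch is simply that pregnancy is checked before age, mirroring the order of precedence the text implies.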
with the advent of rapid identification tests and treatment and the use of bacillus calmette-guérin (bcg) vaccination for prevention, tb declined during the first half of the 20th century in the united kingdom. however, since the early 1990s, numbers have slowly increased, with some 6800 cases reported in 2002 (69). in 1998, 56% of reported cases were from people born outside the united kingdom, and 3% were associated with hiv infection (70,71). london has been identified as an area with a significant problem. this has been attributed to its highly mobile population, the variety of ethnic groups, a high prevalence of hiv, and the emergence of drug-resistant strains (1.3% in 1998) (phls, unpublished data-mycobnet). a similar picture was initially found in the united states, where there was a reversal of a long-standing downward trend in 1985; between 1986 and 1992, the number of cases increased from 22,201 to 26,673 (72). there were also serious outbreaks of multidrug-resistant tb (mdr-tb) in hospitals in new york city and miami (73). factors pertinent to the overall upswing included the emergence of hiv, the increasing numbers of immigrants from countries with a high prevalence of tb, and, perhaps more significantly, the cessation of categorical federal funding for control activities in 1972. the latter led to a failure of the public health infrastructure for tb control. since 1992, the trend has reversed as the cdc transferred most of its funds to tb surveillance and treatment programs in states and large cities. from 1992 to 2001, the annual decline averaged 7.3% (74), but the following year this was reduced to 2%, indicating that there was no room for complacency. the who has been proactive and is redirecting funding to those countries most in need. in october 1998, a global partnership called stop tb was launched to coordinate every aspect of tb control, and by 2002, the partnership had more than 150 member states.
a target was set to detect at least 70% of infectious cases by 2005. the acquisition of tb infection is not necessarily followed by disease because the infection may heal spontaneously. it may take weeks or months before disease becomes apparent, or infection may remain dormant for years before reactivation in later life, especially if the person becomes debilitated or immunocompromised. contrary to popular belief, the majority of cases of tb in people who are immunocompetent pass unnoticed. of the reported cases, 75% involve the lung, whereas nonrespiratory disease (e.g., bone, heart, kidney, and brain) or dissemination (miliary tb) is more common in immigrant ethnic groups and individuals who are immunocompromised (75). these groups are also more likely to develop resistant strains. in the general population, there is an estimated 10% lifetime risk of tb infection progressing to disease (76). there has been an increase in the number of cases of tb associated with hiv, owing to either new infection or reactivation. tb infection is more likely to progress to active tb in hiv-positive individuals, with a greater than 50% lifetime risk (77). tb can also lead to a worsening of hiv, with an increase in viral load (78). therefore, the need for early diagnosis is paramount, but diagnosis can be more difficult because pulmonary tb may present with nonspecific features (e.g., bilateral, unilateral, or lower lobe shadowing) (79). after an incubation period of 4-12 weeks, symptoms may develop (see table 6). the main route is airborne through infected droplets, but prolonged or close contact is needed. nonrespiratory disease is not considered a risk unless the mycobacterium is aerosolized under exceptional circumstances (e.g., during surgery) or there are open abscesses. a person is considered infectious as long as viable bacilli are found in induced sputum. untreated or incompletely treated people may be intermittently sputum positive for years.
after 2 weeks of appropriate treatment, the individual is usually considered noninfectious. this period is often extended for treatment of mdr-tb or for those with concomitant hiv. patient compliance is also an important factor. the risk of infection is directly proportional to the degree of exposure. more severe disease occurs in individuals who are malnourished, immunocompromised (e.g., with hiv), or substance misusers. people who are immunocompromised are at special risk of mdr-tb or mycobacterium avium intracellulare (mai). staff with disease should stay off work until the treatment course is complete and serial sputum samples no longer contain bacilli. staff in contact with disease who have been vaccinated with bcg are at low risk of acquiring disease but should minimize their time spent in the cell. those who have not received bcg or who are immunocompromised should avoid contact with the detainee wherever possible. detainees with mai do not pose a risk to a staff member, unless the latter is immunocompromised. any staff member who is pregnant, regardless of bcg status or type of tb, should avoid contact. anyone performing mouth-to-mouth resuscitation on a person with untreated or suspected pulmonary tb should be regarded as a household contact and should report to occupational health or, if no such route exists, to his or her own physician. such staff should also be educated regarding the symptoms of tb. anyone who is likely to come into repeated contact with individuals at risk of tb should receive bcg (if he or she has not already done so), regardless of age, even though there is evidence to suggest that bcg administered in adult life is less effective. this does not apply to individuals who are immunocompromised or to pregnant women; in the latter case, vaccination should preferably be deferred until after delivery.
detainees with disease (whether suspected or diagnosed) who have not been treated or whose treatment is incomplete should be kept in custody for the minimum time possible. individuals with tb who are immunocompromised are usually too ill to be detained; if they are detained, they should be considered at greater risk of transmitting disease to staff. any detainee with disease should be encouraged to cover his or her mouth and nose when coughing and sneezing. staff should wear gloves when in contact with the detainee and when handling clothing and bedding. any bedding should be bagged after use and laundered or incinerated. the cell should be deemed out of action until it has been ventilated and professionally decontaminated, although there is no hard evidence that there is a risk of transmission from this route (70). on march 14, 2003, the who issued a global warning to health authorities about a new atypical pneumonia called sars. the earliest case was believed to have originated in the guangdong province of china on november 16, 2002. the causative agent was identified as a new coronavirus, sars-cov (80,81). by the end of june 2003, 8422 cases had been reported from 31 different countries, with a total of 916 deaths. approximately 92% of cases occurred in china (including hong kong, taiwan, and macao). the case fatality rate varied from less than 1% in people younger than 24 years, to 6% in persons aged 25-44 years, 15% in those aged 45-64 years, and more than 50% in persons 65 years or older. on july 5, 2003, the who reported that the last human chain of transmission of sars had been broken and lifted its travel restrictions for all countries. however, it warned that everyone should remain vigilant, because a resurgence of sars was possible. the warning was well founded, because in december 2003 a new case of sars was detected in china. at the time of this writing, three more cases have been identified.
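the age-banded case fatality figures quoted above can be expressed as a simple lookup. this is a sketch for illustration only: the band edges follow the figures reported in the text, and the open-ended "<1%" and ">50%" values are kept as strings rather than point estimates.

```python
def sars_case_fatality_band(age_years: int) -> str:
    """return the reported sars case fatality rate for an age band,
    as quoted in the text for the 2002-2003 outbreak."""
    if age_years <= 24:
        return "<1%"   # people younger than 24 years
    if age_years <= 44:
        return "6%"    # persons aged 25-44 years
    if age_years <= 64:
        return "15%"   # persons aged 45-64 years
    return ">50%"      # persons 65 years or older
```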
knowledge about the epidemiology and ecology of sars-cov and the disease remains limited; however, the experience gained from the previous outbreak enabled the disease to be contained rapidly, which is reflected in the few cases reported since december 2003. there is still no specific treatment or preventative vaccine. the incubation period is short, approximately 3-6 days (maximum 10 days), and, despite the media frenzy surrounding the initial outbreak, sars is less infectious than influenza. the following clinical case definition of sars has been developed for public health purposes (82). a person with a history of any combination of the following should be examined for sars:
• fever (at least 38°c); and
• one or more symptoms of lower respiratory tract illness (cough, difficulty in breathing, or dyspnea); and
• radiographic evidence of lung infiltrates consistent with pneumonia or respiratory distress syndrome, or postmortem findings of these with no identifiable cause; and
• no alternative diagnosis that can fully explain the illness.
laboratory tests have been developed that include detection of viral rna by pcr from nasopharyngeal secretions or stool samples, detection of antibodies by enzyme-linked immunosorbent assay or immunofluorescent antibody in the blood, and viral culture from clinical specimens. available information suggests that close contact via aerosol or infected droplets from an infected individual provides the highest risk of acquiring the disease. most cases occurred in hospital workers caring for an index case or his or her close family members. despite the re-emergence of sars, it is highly unlikely that a case will be encountered in the custodial setting in the near future. however, forensic physicians must remain alert for sars symptoms and keep up-to-date with recent outbreaks. information can be obtained daily from the who web site.
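the four-part case definition above is a conjunction of criteria, so it can be rendered as a boolean check. a minimal sketch, with parameter names invented for clarity; the criteria themselves are the ones the text quotes.

```python
def meets_sars_case_definition(temp_c: float,
                               lower_resp_symptom: bool,
                               radiographic_or_postmortem_evidence: bool,
                               alternative_diagnosis: bool) -> bool:
    """all four criteria quoted in the text must hold: fever of at
    least 38 c, one or more lower respiratory tract symptoms,
    compatible radiographic (or postmortem) findings, and no
    alternative diagnosis that fully explains the illness."""
    return (temp_c >= 38.0
            and lower_resp_symptom
            and radiographic_or_postmortem_evidence
            and not alternative_diagnosis)
```

note that a single failed criterion, such as a plausible alternative diagnosis, excludes the case, which is the practical point of the definition.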
if sars is suspected, medical staff should wear gloves and a surgical mask when examining the case; however, masks are not usually available in custody. anyone suspected of sars must be sent immediately to the hospital, and staff who have had prolonged close contact should be alerted to the potential symptoms. the most consistent feature of diseases transmitted through the fecal-oral route is diarrhea (see table 7). infective agents include bacteria, viruses, and protozoa. because the causes are numerous, it is beyond the remit of this chapter to cover them all. it is safest to treat all diarrhea as infectious, unless the detainee has a proven noninfectious cause (e.g., crohn's disease or ulcerative colitis). all staff should wear gloves when in contact with the detainee or when handling clothing and bedding, and contaminated articles should be laundered or incinerated. the cell should be professionally cleaned after use, paying particular attention to the toilet area. this viral hepatitis occurs worldwide, with variable prevalence. it is highest in countries where hygiene is poor, where infection occurs year-round. in temperate climates, the peak incidence is in autumn and winter, but the trend is becoming less marked. all age groups are susceptible if they are nonimmune or have not been vaccinated. in developing countries, the disease occurs in early childhood, whereas the reverse is true in countries where the standard of living is higher. in the united kingdom, there has been a gradual decrease in the number of reported cases from 1990 to 2000 (83,84). this results, in part, from improved standards of living and the introduction of an effective vaccine. the highest incidence occurs in the 15- to 34-year-old age group. approximately 25% of people older than 40 years have natural immunity, leaving the remainder susceptible to infection (85). small clusters occur from time to time, associated with a breakdown in hygiene.
there is also an increasing incidence of hav in gay or bisexual men and their partners (86). an unpublished study in london in 1996 showed a seroprevalence of 23% among gay men (young y et al., unpublished). the clinical picture ranges from asymptomatic infection through a spectrum to fulminant hepatitis. unlike hbv and hcv, hav does not persist or progress to chronic liver damage. infection in childhood is often mild or asymptomatic, but in adults it tends to be more severe. after an incubation period of 15-50 days (mean 28 days), symptomatic infection starts with an abrupt onset of fever and an anicteric phase lasting anything from 2 days to 3 weeks; jaundice then follows and lasts for approximately the same length of time. hav infection can lead to hospital admission in all age groups, but admission, like duration of stay, becomes more likely with increasing age. the overall mortality is less than 1%, but 15% of people will have a prolonged or relapsing illness within 6-9 months (cdc fact sheet). fulminant hepatitis occurs in less than 1% of people but is more likely in individuals older than 65 years or in those with pre-existing liver disease. in patients who are hospitalized, case fatality ranges from 2% in 50-59-year-olds to nearly 13% in those older than 70 years (84). the individual is most infectious in the 2 weeks before the onset of jaundice, when he or she is asymptomatic. this can make control of infection difficult because the disease is not recognized. the main route is fecal-oral through the ingestion of contaminated water and food. it can also be transmitted by personal contact, including among homosexuals practicing anal intercourse and fellatio. there is a slight risk from blood transfusions if the donor is in the acute phase of infection. needlestick injuries should not be considered a risk unless clinical suspicion of hav is high.
risk groups include homeless individuals, homosexuals, idus, travellers abroad who have not been vaccinated, patients with chronic liver disease or chronic infection with hbv or hcv, employees and residents in daycare centers and hostels, sewage workers, laboratory technicians, and those handling nonhuman primates. several large outbreaks have occurred among idus, some with an epidemiological link to prisons (87,88). transmission occurs during the viremic phase of the illness through sharing injecting equipment and via fecal-oral routes because of poor living conditions (89). there have also been reports of hav being transmitted through drugs that have been carried in the rectum. a study in vancouver showed that 40% of idus had past infection with hav and also found an increased prevalence among homosexual/bisexual men (90). staff with disease should report to occupational health and stay off work until the end of the infective period. those in contact with disease (either through exposure at home or from an infected detainee) should receive prophylactic treatment as soon as possible (see subheading 8.3.7.). to minimize the risk of acquiring disease in custody, staff should wear gloves when dealing with the detainee and then wash their hands thoroughly. gloves should be disposed of only in the clinical waste bags. detainees with disease should be kept in custody for the minimum time possible. they should only be sent to the hospital if fulminant hepatitis is suspected. the cell should be quarantined after use and professionally cleaned. any bedding or clothing should be handled with gloves and laundered or incinerated according to local policy. detainees reporting contact with disease should be given prophylactic treatment as soon as possible (see subheading 8.3.7.). contacts of hav should receive hav vaccine (e.g., havrix monodose or avaxim) if they have not been previously immunized or had the disease.
human normal immunoglobulin (hnig), 500 mg, deep intramuscular into the gluteal muscle, should be used in the following circumstances:
• has the detainee traveled to africa, south east asia, the indian subcontinent, central/south america, or the far east in the last 6-12 months?
• ascertain whether he or she received any vaccinations before travel and, if so, which ones.
• ask if he or she took malaria prophylaxis, what type, and whether he or she completed the course.
• ask if he or she swam in any stagnant lakes during the trip.
• if the answer to any of the above is yes, ask if he or she has experienced any of the following symptoms: fever/hot or cold flushes/shivering; diarrhea ± abdominal cramps ± blood or slime in the stool; a rash; persistent headaches ± light sensitivity; nausea or vomiting; aching muscles/joints; a persistent cough (dry or productive) lasting at least 3 weeks.
• take the temperature.
• check the skin for signs of a rash and note its nature and distribution.
• check the throat.
• listen carefully to the lungs for signs of infection/consolidation.
staff at higher risk of coming into contact with hav should consider being vaccinated before exposure. two doses of vaccine given 6-12 months apart give at least 10 years of protection. there is no specific treatment for hav, except supportive measures and symptomatic treatment. although the chance of encountering a tropical disease in custody is small, it is worth bearing in mind. it is not necessary for a forensic physician to be able to diagnose the specific disease but simply to recognize that the detainee/staff member is ill and whether he or she needs to be sent to the hospital (see tables 8-10). this is best achieved by knowing the right questions to ask and carrying out the appropriate examination. tables 8-10 should be used as an aid to avoid missing some of the more unusual diseases.
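the screening questions above amount to a two-stage filter: travel history first, then symptoms, with examination (temperature, skin, throat, lungs) reserved for those who pass both. a minimal sketch, assuming only what the checklist states; the region and symptom lists are copied from it and everything else is hypothetical.

```python
# regions named in the travel-history question
HIGH_RISK_REGIONS = {
    "africa", "south east asia", "indian subcontinent",
    "central/south america", "far east",
}

# symptoms of note listed in the checklist
SYMPTOMS_OF_NOTE = [
    "fever or hot/cold flushes or shivering",
    "diarrhea +/- abdominal cramps +/- blood or slime in stool",
    "rash",
    "persistent headaches +/- light sensitivity",
    "nausea or vomiting",
    "aching muscles/joints",
    "persistent cough lasting at least 3 weeks",
]

def warrants_examination(regions_visited: set, symptoms: list) -> bool:
    """true if the detainee has visited a listed region in the last
    6-12 months and reports any listed symptom, prompting the
    examination steps in the checklist."""
    return bool(regions_visited & HIGH_RISK_REGIONS) and bool(symptoms)
```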
guidance for clinical health care workers: protection against infection with blood-borne viruses; recommendations of the expert advisory group on aids and the advisory group on hepatitis
guidelines for hand hygiene in health care settings. recommendations of the healthcare infection control practices advisory committee and the hicpac/shea/apic/idsa hand hygiene task force
national model regulations for the control of workplace hazardous substances. commonwealth of australia, national occupational health and safety committee
good practice guidelines for forensic medical examiners
the hospital infection control practices advisory committee. guideline for infection control in health care personnel
report from the unlinked anonymous surveys steering group. department of health
a strategy for infectious diseases-progress report. blood-borne and sexually transmitted viruses: hepatitis. department of health
universal precautions for prevention of transmission of human immunodeficiency virus, hepatitis b virus and other bloodborne pathogens in health-care settings
risk factors for horizontal transmission of hepatitis b in a rural district in ghana
familial clustering of hepatitis b infection: study of a family
intrafamilial transmission of hepatitis b in the eastern anatolian region of turkey
hepatitis b outbreak at glenochil prison
european network for hiv/aids and hepatitis prevention in prisons. second annual report. the network
prevalence of hiv, hepatitis b and hepatitis c antibodies in prisoners in england and wales; a national survey
the epidemiology of acute and chronic hepatitis c
the role of the parenteral antischistosomal therapy in the spread of hepatitis c virus in egypt
chronic hepatitis c: disease management. nih publication no. 03-4230, february
department of health. hepatitis c virus: eight years old
laboratory surveillance of hepatitis c virus in england and wales: 1992-1996
aids/hiv quarterly surveillance tables provided by the phls aids centre (cdsc) and the scottish centre for infection and environmental health (last update)
hiv and aids in the uk in 2001: an update. communicable disease surveillance centre
mode of vertical transmission of hiv-1: a meta-analysis of fifteen prospective cohort studies
vertical transmission rate for hiv in the british isles estimated on surveillance data
hiv post-exposure prophylaxis after sexual assault: the experience of a sexual assault service in london
guidance from the uk chief medical officer's expert advisory group on aids. uk health department
failures of zidovudine post-exposure prophylaxis
seroconversion to hiv-1 following a needlestick injury despite combination post-exposure prophylaxis
immunisation against infectious disease. department of health, united kingdom: her majesty's stationery office
chickenpox-disease predominantly affecting adults in rural west bengal
prevention of varicella: recommendations of the advisory committee on immunization practices
varicella-zoster virus epidemiology: a changing scene?
use of acyclovir for varicella pneumonia during pregnancy
outcome after maternal varicella infection in the first 20 weeks of pregnancy
outcome in newborn babies given anti-varicella zoster immunoglobulin after perinatal maternal infection with varicella zoster virus
varicella-zoster virus dna in human sensory ganglia
epidemiology and natural history of herpes zoster and postherpetic neuralgia
clinical applications for change-point analysis of herpes zoster pain
treatment of scabies with permethrin versus lindane and benzyl benzoate
treatment of ectoparasitic infections: review of the english-language literature
nasal carriage of staphylococcus aureus: epidemiology and control measures
centers for disease control and prevention. community-acquired methicillin-resistant staphylococcus aureus infections-michigan
methicillin-resistant staphylococcus aureus: epidemiologic observations during a community-acquired outbreak
emergence of pvl-producing strains of staphylococcus aureus
bacteriology of skin and soft tissue infections: comparison of infections in intravenous drug users and individuals with no history of intravenous drug use
outbreak among drug users caused by a clonal strain of group a streptococcus. dispatches, emerging infectious diseases
bacteriological skin and subcutaneous infections in injecting drug users-relevance for custody
wound botulism associated with black tar heroin among injecting drug users
isolation and identification of clostridium spp. from infections associated with injection of drugs: experiences of a microbiological investigation team
greater glasgow health board, scieh. unexplained illness among drug injectors in glasgow
embolization of illicit needle fragments
right ventricular needle embolus in an injecting drug user: the need for early removal
departments of emergency medicine and pediatrics, lutheran general hospital of oak brook, advocate health system. emedicine-human bites
prevention and treatment of dog bites
human bites. department of plastic surgery
guidelines for public health management of meningococcal diseases in the uk
planning, registration and implementation of an immunisation campaign against meningococcal serogroup c disease in the uk: a success story
efficacy of meningococcal serogroup c conjugate vaccine in teenagers and toddlers in england
quadrivalent meningococcal immunisation required for pilgrims to saudi arabia
risk of laboratory-acquired meningococcal disease
cluster of meningococcal disease in rugby match spectators
immunisation against infectious disease. her majesty's stationery office
ciprofloxacin as a chemoprophylactic agent for meningococcal disease-low risk of anaphylactoid reactions
joint formulary committee 2002-03. british national formulary
notification of tuberculosis: an updated code of practice for england and wales
statutory notifications to the communicable disease surveillance centre: preliminary annual report on tuberculosis cases reported in england and wales
the prevention and control of tuberculosis in the united kingdom: uk guidance on the prevention and control of transmission of 1. hiv-related tuberculosis, 2. drug-resistant, including multiple drug-resistant, tuberculosis. department of health, scottish office
control and prevention of tuberculosis in the united kingdom: code of practice
epidemiology of tuberculosis in the united states
nosocomial transmission of multidrug-resistant tuberculosis among hiv-infected persons-florida
the continued threat of tuberculosis
tuberculosis-a clinical handbook
the white plague: down and out, or up and coming?
a prospective study of the risk of tuberculosis among intravenous drug users with human immunodeficiency virus infection
influence of tuberculosis on human immunodeficiency virus (hiv-1): enhanced cytokine expression and elevated β2-microglobulin in hiv-1-associated tuberculosis
the chest roentgenogram in pulmonary tuberculosis patients seropositive for human immunodeficiency virus type 1
coronavirus as a possible cause of severe acute respiratory syndrome
epidemiological determinants of spread of causal agents of severe acute respiratory syndrome in hong kong
alert, verification and public health management of sars in the post-outbreak period
age-specific antibody prevalence to hepatitis a in england: implications for disease control
phls advisory committee on vaccination and immunisation.
key: cord-264542-0hu5twhp authors: mueller, siguna title: facing the 2020 pandemic: what does cyberbiosecurity want us to know to safeguard the future? date: 2020-09-25 journal: biosaf health doi: 10.1016/j.bsheal.2020.09.007 sha: doc_id: 264542 cord_uid: 0hu5twhp as the entire world is under the grip of the coronavirus disease 2019 (covid-19) pandemic, and as many are eagerly trying to explain the origins of the virus and the cause of the pandemic, it is imperative to pay more attention to related potential biosafety risks. biology and biotechnology have changed dramatically during the last ten years or so. their reliance on digitization and automation, and their cyber-overlaps, have created new vulnerabilities for unintended consequences and potentials for intended exploitation that are largely under-appreciated. herein, i summarize and elaborate on these new cyberbiosecurity challenges, (1) in terms of comprehending the evolving threat landscape and determining new risk potentials, (2) in developing adequate safeguarding measures, their validation and implementation, and (3) specific critical dangers and consequences, many of them unique to the life-sciences. drawing upon expertise shared by others as well as my previous work, this article aims to summarize and critically interpret the current situation of our bioeconomy.
herein, the goal is not to attribute causative aspects of past biosafety or biosecurity events, but to highlight the fact that the bioeconomy harbors unique features that have to be more critically assessed for their potential to unintentionally cause harm to human health or the environment, or to be re-tasked with an intention to cause harm. i conclude with recommendations that will need to be taken into consideration to help address converging and emerging biorisk challenges, in order to minimize vulnerabilities to the life-science enterprise, public health, and national security. ever since the coronavirus disease 2019 (covid-19) pandemic, (laboratory) biosafety and biosecurity concerns have been even more rigorously scrutinized. this article uses the lens of the current pandemic to evaluate biological risks from biological research, particularly those that are amplified by the digitization of biological information and biotechnology automation. the cyberphysical nature of biotechnology has led to fascinating advances throughout the bioscience field. only recently, concerns have been raised regarding new risks that may lead to unintended consequences or unrecognized potentials for misuse. just as the emergence of the internet some decades ago led to a major revolution -which, by necessity, was paralleled by the field of cybersecurity -we are now facing the era of cyberbiosecurity with its own security vulnerabilities. the dna synthesis industry has worked proactively for many years to ensure that synthesis is carried out securely and safely. these efforts have been complemented by the growing desire and capability to resynthesize biological material using digital resources [1, 2]. yet, the convergence of technologies at the nexus of life and medical sciences, cyber, cyberphysical, supply chain and infrastructure systems [3], has led to new security problems that have remained elusive to the majority of the scientific, agricultural, and health communities.
it has only been during the last few years, that awareness of these new types of vulnerabilities is growing, especially related to the danger of intended manipulations. as these concerns have spawned the emergence of cyberbiosecurity as a new discipline, it is important to realize that its focus is not merely on traditional cyber-attacks (sect. 2 and fig. 1 below). due to the increased reliance of the bioscience fields on cyberphysical systems (cps, fig. 3 below), potentials for exploitation exist at each point where bioengineered or biomanufactured processes or services interface with the cyber and the physical domain, whereby attackers may exploit unsecured networks and remotely manipulate biologic data, exploit biologic agents, or affect physical processing involving biological materials, that result (whether intentionally or unintentionally) in unwanted or dangerous biological outcomes [4, 5, 6, 7] . great efforts have been put into place to rigorously assess the new risks and threats (see in particular [3] and the recent national academy of sciences, engineering, and medicine report "safeguarding the bioeconomy" [7, pp.204-211] ). nonetheless, cyberbiosecurity is still in its infancy. there is still limited expertise to fully characterize and assess the emerging cyberbio risks [8] , and it has been recognized that generic cyber and information security measures are insufficient [8, 9, 10, 11, 12, 13, 14] . triggered by the covid-19 pandemic, enormous amounts of resources have been devoted to identify its exact genesis. a goal of this article is to challenge this narrow focus by concentrating on the larger context of cyberbiosecurity, to illuminate serious new concerns for a wide audience. i will highlight distinct challenges and suggest specific steps to help support risk deterrence efforts. most broadly, cyberbiosecurity aims to identify and mitigate security risks fostered by the digitization of biology and biotechnology automation. fig. 
1 gives a summary of how this new paradigm evolved. while others, including the author, began to investigate these challenges almost a decade ago [15, 16, 17, 18, 19, 13], the term cyberbiosecurity was first (informally) used in [20]. these authors warned of security issues resulting from the cyberphysical interface of the bioeconomy, as it was recognized that all biomanufacturing processes are in fact cps. incomplete awareness. during the last few years, the biotechnology industry has fallen prey to serious attacks (see e.g. [7, table 7-1]), although there is no broad awareness of this. this important observation and the compelling need to question the "naive trust" throughout the life-science arena were key drivers to establish cyberbiosecurity as a new discipline [20]. additional sobering criminal cases that have affected the bioscience field are now emerging, even during the current pandemic (e.g. [10, 23, 24, 21, 25, 26]). as noted in [23], these encompass three critical areas of attack -sabotage, corporate espionage, and crime/extortion. yet, people in the life-sciences are largely ignorant of the dangers, as they are barely trained in security issues -or not at all. research and healthcare industries are vulnerable to cyberbiosecurity attacks because they have not kept up with threats [27, 8]. capitalizing on a common misconception. generally, it is widely accepted that cybersecurity attacks and data breaches are a matter of when, not if. very recently, ransomware attacks have been recognized as "the primary threat" to healthcare organizations [28]. statements like these seem to support the understanding that cyberbio concerns in the bioeconomy could be dealt with by using it solutions alone (possibly optimized for life-science demands). unfortunately, the reliance on cps generates unrecognized convergence issues.
it is important to understand that due to cross-over effects, neither cyber nor physical security concepts alone are sufficient to protect a cps. "separate sets of vulnerabilities on the cyber and physical sides do not simply add up, they multiply" [29]. notably, cyber-attacks on critical automated (computer-based) processes (e.g., workflow or process controls) may lead to dire real-world consequences, similar to direct physical attacks. for instance, a 2008 explosion in the highly secure 1,099-mile baku-tbilisi-ceyhan pipeline was caused by computer sabotage. the main weapon for this cyberphysical act of terrorism was "a keyboard" [30, 29]. in general, the term "physical" in cps (fig. 3, central box) is applied to the "engineering, physical and biological" [31] components of the system, or more generally, any components of the physical world which are connected through cyber elements. even with existing assessment frameworks (e.g., the hazard analysis critical control point system for the food and agriculture sector or, more generally, the infrastructure survey tool [36] or nist guidelines [37]), it is recognized that fully scoping all the cyberbio risks, not to mention their relative likelihood and impact, is rather challenging [23, 22, 8]. although some of the cyberbio vulnerabilities share compelling similarities to the early days of the internet [38], there are critical differences [9, 10, 11, 12, 14]. while most respondents to the above-mentioned survey of international experts [8] agreed that their organizations had "considered" cyberbio issues, some noted "insufficient time" or "no idea" how to address them, and all pinpointed the lack of available resources. this section describes some of the difficulties.
the problem of identifying what needs to be protected:
- many of the novel cyberbio risks and threats (table 1) have not been fully scoped. they are difficult to characterize, and envisioning the complete risk landscape continues to be a challenge [39, 8, 40, 14, 23].
- identifying and hierarchizing the extent, impact and severity of various (including hypothetical) new vulnerabilities is difficult.
- there is no comprehensive model to effectively capture, assess, and address the motivations, capabilities, and approaches of those who may cause harm (see also sect. 4.2).
• how protection is achieved and enforced:
- existing solutions from the cyber domain are geared only at specific aspects of biosecurity and cybersecurity but do not address the overlap and the issues arising from this convergence [8, 40, 14].
- due to variations in types of threats, targets and potential impacts, it is not straightforward to determine the applicability and effectiveness of a possible solution.
- as "there is no one model" to secure the use of information systems across the bioeconomy [7], weak or premature solutions may only help address a distinct problem but be misapplied in a different context, or even become a source for exploitation (sect. 4.2 and fig. 4 below).
standards and guidelines [11, 22, 34] remain a serious issue for achieving comprehensive and international protection. very recent publications and programs [33, 41, 7, 42, 43, 44, 45, 46, 47] undoubtedly have increased cyberbiosecurity awareness, and large corporations will have been able to enhance their infrastructure. yet, the 2020 pandemic has shifted r&d priorities and budgets and has hampered many efforts to better comprehend the new risks and to develop solutions. pharma and medtech professionals and companies are overwhelmed with covid-19 mitigation and crisis resolution while the industry sprints to develop new therapeutics and vaccines. on the other hand, the pandemic has led to a huge rise in cyber-attacks, with some reporting an 800% increase compared to pre-coronavirus levels [48].
as cybersecurity professionals are struggling to target this surge in cyber-crime, wfh (work from home) has impacted the ability of many cybersecurity professionals to support new business applications or initiatives [49]. as companies and organizations struggle to maintain stability and security, new research areas such as cyberbiosecurity have received inadequate attention and support. in addition to the known cyberbio challenges described above, the context of the bioscience fields leads to distinct problems that are not well understood. the context of the life-sciences involves unique concerns and unknowns. cyber-based attacks targeting the biological and medical sciences involve living entities, with networks of connections, combinatorial interactions and a dynamic range of outcomes. future and timed effects can be achieved by various technologies (e.g., non-volatile memory devices and electronic circuits). yet, with biotechnology products there is a decreased ability to control exposure [50]: they are often designed to be easily dispersed (e.g., with agricultural technologies directly in the field [51]), reach high scalability [50], can be delivered in different states (including water [52]), and can be activated by simple environmental agents (temperature, light, wind [53, 54, 55]). a critical issue with active biologicals is that they can be transferred by contact, ingestion, or inhalation [50]. while concerns about unintended consequences and ill-intended applications of these and related technologies have been raised recently (see e.g., [50, 56, 57, 18, 33, 13, 7]), types of biotechnologies that not merely have a cyber-overlap, but which constitute artificial systems themselves, have been even less assessed.
these include artificially generated self-replicating systems [58] , artificial cells that mimic the ability of natural cells to communicate with bacteria [59], or artificially generated processes to interact with one another and initiate various signaling cascades [60] . the consequences of an ill-intended or accidental release of such systems into the environment are not understood. one of the most complex issues may be that "information" in the biological context is of a different kind than what is meant in the information sciences. identifying "biological information" is not always straightforward and may evade available technology from time to time: consider, for instance, the situation of recessive alleles of a gene. these can be phenotypically invisible over a huge proportion of a population and known for their frequency using tools such as the hardy weinberg equilibrium equation; as dna sequencing and synthesizing technologies developed over decades they could be detected and linked to individuals. while such invisibility features are of potential benefit in the area of steganography, [61] describes critical concerns that analogously apply to cyberbiosecurity. for instance, biological information can be stored and transmitted in a virtually undetectable way: "no x-ray, infra-red scanner, chemical assay or body search will provide any immediate evidence" of it [61] . further, biological media can survive much longer than anticipated [51] , which in this context leads to the worrisome situation that data (or biologic "information") can "literally run off on its own" [61] . notably, critical vulnerabilities also arise in the context of devices and mechanisms. 
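the hardy-weinberg point above can be made concrete with a short sketch (a minimal illustration added here, not taken from the source paper): assuming a biallelic locus in hardy-weinberg equilibrium, the observed frequency q² of a recessive phenotype directly yields the expected frequency 2pq of phenotypically invisible heterozygous carriers.

```python
import math

def hardy_weinberg_carrier_freq(recessive_phenotype_freq: float) -> float:
    """Given the frequency q^2 of a recessive phenotype in a population
    assumed to be in Hardy-Weinberg equilibrium, return the expected
    frequency 2pq of phenotypically invisible heterozygous carriers."""
    q = math.sqrt(recessive_phenotype_freq)  # recessive allele frequency
    p = 1.0 - q                              # dominant allele frequency
    return 2.0 * p * q

# e.g. a recessive trait expressed in 1% of individuals (q^2 = 0.01, q = 0.1)
# implies that 2 * 0.9 * 0.1 = 18% of the population carries the allele
# without ever expressing it -- "invisible" biological information.
carrier_freq = hardy_weinberg_carrier_freq(0.01)
```

this is what makes such information attractive for steganographic uses: a large fraction of a population can carry an allele that no phenotypic inspection will reveal.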
among others, the above-mentioned survey [8] identified "elevated or severe risk" potentials for an unauthorized actor to (1) take control of infrastructure (e.g., lab equipment, lab control systems, or even a fully automated robot lab), (2) interrupt the functioning of lab systems, or (3) circumvent security controls. the cyber-physical nature of biotechnology is one of the key concerns in cyberbiosecurity (fig. 3 and table 1). with increased automation, dangers arise, for example, in the context of sterilization methods used in the healthcare and laboratory setting. for some methods, a very recent study [62] demonstrates that "integrity of released dna is not completely compromised," leading to the "danger of dissemination of dna and xenogenic elements across waterways." these findings were linked to temperature and time (e.g., short microwave exposure times or short exposure times to glutaraldehyde treatment were least effective). parameters like these are both highly malleable and susceptible to manipulation, which will become an even bigger concern with "smart labs" of the future [21]. in the context of food and agricultural systems, cyberphysical interconnections lead to the danger of "[m]anipulation of critical automated (computer-based) processes (e.g., thermal processing time and temperature for food safety)" and "[l]ack of ability to perform vulnerability assessment" [34]. traditionally, the reliance on tacit knowledge and direct hands-on processes and applications has shielded the bioscience field from many forms of attack. beyond doubt, the digitization of biology and biotechnology automation are key drivers that enable the bioeconomy. nonetheless, these are creating yet a different type of risk than described above. the internet makes it easier to bypass our existing controls (be they personal intuitions, company procedures or even laws) [63].
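returning to the sterilization-parameter concern above, one cheap countermeasure is to audit logged cycle parameters against a validated envelope before accepting a run as effective. the sketch below is a minimal illustration; all parameter names and limits are hypothetical, invented for this example rather than taken from [34] or [62]:

```python
# Hypothetical validated envelope for a sterilization cycle; real limits
# would come from the validated process specification, not this sketch.
SAFE_LIMITS = {
    "temperature_c": (121.0, 135.0),  # acceptable autoclave temperature range
    "exposure_min":  (15.0, 60.0),    # minimum effective exposure time window
}

def audit_cycle_log(log: dict) -> list:
    """Return the parameters of a logged cycle that fall outside the
    validated envelope -- a cross-check against silent manipulation of
    automated (computer-based) process controls."""
    violations = []
    for param, (lo, hi) in SAFE_LIMITS.items():
        value = log.get(param)
        if value is None or not (lo <= value <= hi):
            violations.append(param)
    return violations

# A tampered log whose exposure time was silently shortened is flagged:
assert audit_cycle_log({"temperature_c": 123.0, "exposure_min": 3.0}) == ["exposure_min"]
```

such a check does not prevent tampering, but it moves the decision about cycle validity out of the single (potentially compromised) controller.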
we have evolved social and psychological tools over millions of years to help us deal with deception in face-to-face contexts. but when we lose both physical and human context (as in online communication), forgery and intrusion become more of a risk. it is now known that in the cyber fields "deception, of various kinds, is now the principal mechanism used to defeat online security" [63]. online frauds are often easier to do, and harder to stop, than similar real-world frauds. and according to [64], "more and more crimes involve deception; as security engineering gets better, it's easier to mislead people than to hack computers or hack through walls." while only recently recognized as one of the most important factors in security engineering [63], the entire life-science enterprise is not adequately prepared for attacks that exploit psychology (social engineering attacks, table 2). at the same time, hackers are getting better at technology: "designers learn how to forestall the easier technical attacks..." [63]. thus, through various forms of fraud and deception, attackers may be able to circumvent many of the existing cyber-based safeguarding mechanisms and get direct access to their victim's system. once they have entry to a target system, this may allow them to exploit not only the data and cyber side; it could also facilitate attacks on control and processes underlying various cyber-physical applications (fig. 3) with consequences that directly affect biophysical components (fig. 4). cyberbiosecurity is highly cross-disciplinary and will benefit from integrating existing capabilities and proven methodologies from a wide range of fields (e.g. security engineering, physical security and privacy, infrastructure resilience, and security psychology), with requirements from the life-science realm. as cyberbiosecurity may profit the most from lessons learned in the information security domains, this section focuses on this arena.
several suggestions have been made to secure specific new cyberbio challenges via various cyber applications (e.g. [66, 12, 38, 14, 21, 5, 10]). nonetheless, their practical realization is not always straightforward, as even the most basic information security notions still need to be better adapted to the bioscience framework (see e.g. [14, table 1]). similarly, it will be necessary to refine and extend the classic cia triad (which has long been the heart of information security), to extend the suggestions made previously (e.g. [14, fig. 3]), to optimally align them with the new demands. as argued (sect. 4.1), not all of the new problems can be linked to traditional cyber issues. thus, it will be important to distinguish which challenges could, or could not, be identified/safeguarded by existing cyber-approaches (or slight modifications thereof). to aid this distinction and develop a hierarchy of risk severity, it will be helpful to pinpoint the following. identify challenges to assure authenticity and integrity. the cyber-based interface to measure and assess a bioengineered product or service creates a gap, potentially allowing a range of vulnerabilities, from falsifiable entries of biological databases and sequence errors [38, 12] -which in a context like pathogens could lead to entry errors with rather disturbing effects -to the intentional tampering of data related to forensics [67], cyber-enabled attacks on systems monitoring water security [68], and the actual exchange of the purported (cps-produced) entity. the latter may enable the distribution of accidentally exchanged/counterfeit products such as plasmids [20], which gives rise to unique concerns where, e.g., some undeclared and "invisible" protein or nucleic acid in a suspended formulation contacts the stated product on release from the packaging or in the retail chain (see [50]).
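for the database-integrity side of the authenticity challenge above, one conceivable safeguard is to bind each stored sequence record to a keyed digest, so that silent edits are detectable on retrieval. the sketch below is a minimal illustration, assuming the database operator can manage a secret key; it is not a mechanism proposed in the cited works:

```python
import hashlib
import hmac

# Hypothetical key held by the database operator; real deployments would
# need proper key management, rotation, and access control.
SECRET_KEY = b"example-key-held-by-the-database-operator"

def sign_sequence(seq: str) -> str:
    """Compute a keyed digest over a normalized sequence record so that
    silent edits to stored entries can be detected later."""
    normalized = seq.strip().upper().encode()  # case/whitespace-insensitive
    return hmac.new(SECRET_KEY, normalized, hashlib.sha256).hexdigest()

def verify_sequence(seq: str, tag: str) -> bool:
    """Constant-time comparison of the stored tag against a fresh digest."""
    return hmac.compare_digest(sign_sequence(seq), tag)

tag = sign_sequence("atgcgtacgttag")
assert verify_sequence("ATGCGTACGTTAG", tag)      # case-normalized match
assert not verify_sequence("ATGCGTACGTTAC", tag)  # single-base edit detected
```

a keyed digest only detects tampering after the fact; it does nothing against an attacker who also holds the key, which is why the surrounding text stresses that cyber controls and biological controls have to be considered together.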
"information" in the biological sciences [61] , the information life-cycle at large, logically-based game strategies, mechanisms for dual-use appropriation, end-to-end assessments, "routes to harm," context, and multiple exposure pathways [57, 13, 66, 40, 10, 35, 50] . identify the possibility of future and off-target effects. these are situations where clear predictions as required for various "if-then" paradigms employed in the cyber domains are inapplicable. deterrence measures will need to consider emerging actors and their pathways of action, including interactions between synthetic and natural entities, as well as mechanisms, vesicles and actions that can be activated by various physical and mechanical forces or combinations thereof [68, 50] . cyberbio efforts will benefit from the cps arena as these provide unique insights relative to "hardware" (incl. devices and systems) and "software" interdependencies. the cyber-interactions and the interconnectedness of such systems necessitate a drastic modification of previous security principles (see e.g., [29, 72] ). analogously, for cyberbio systems and mechanisms, it will be necessary to refine a list of security principles and goals, by incorporating cps lessons, to optimally align them with the bioscience fields. cyberbiosecurity is an evolving paradigm that points to new gaps and risks, fostered by the cyber-overlaps of modern biotechnologies. the enormous increase in computational capabilities, artificial intelligence, automation and use of engineering principles in the bioscience field have created a realm with a glaring gap of adequate controls. vulnerabilities exist within biomanufacturing, cyber-enabled laboratory instrumentation and patient-focused systems, "big data" generated from "omics" studies, and throughout the farm-to-table enterprise..." [39] . numerous security risks in the biological sciences and attack potentials based on psychology have not been adequately assessed, let alone captured. 
they will require completely new approaches towards their protection to avoid emergencies at the scale of covid-19 or worse. yet, the current situation regarding cyberbiosecurity is sobering (fig. 5). the private sector, small and moderate-sized companies, and the larger diy community itself are particularly vulnerable [7, 34, 11]. rather than spending enormous amounts of resources in looking back to identify the exact genesis of sars-cov-2, the cause of the pandemic, and the emphasized singularity of our current global situation, a concerted effort to better understand and mitigate the emerging cyberbio challenges faced by the entire bioeconomy sector should be a top priority. this paper summarizes existing critical issues that must be considered. it also suggests steps that can be leveraged to help assess and ensure that the many bioscience capabilities remain dependable in the face of malice, error, or mischance. the author confirms sole responsibility for the following: conceptualization, investigation, methodology, validation, visualization, writing -original draft, writing -reviewing and editing. the author declares there is no conflict of interest. i would like to thank the reviewers who provided expertise and comments that greatly improved this paper. • the attacker was able to view personal information including email addresses and phone numbers, which are displayed to some users of twitter's internal support tools [73]. • these credentials can give them access to internal network tools and enable them to sabotage cyber-based controls of cps (figs. 3 and 4).
exploratory fact-finding scoping study on "digital sequence information" on genetic resources for food and agriculture., report on the exploratory fact-finding scoping study on comments of third world network on digital sequence information editorial: mapping the cyberbiosecurity enterprise cyberbiosecurity: an emerging new discipline to help safeguard the bioeconomy cyber-biosecurity risk perceptions in the biotech sector researchers are sounding the alarm on cyberbiosecurity national and transnational security implications of asymmetric access to and use of biological data the national security implications of cyberbiosecurity cyberbiosecurity challenges of pathogen genome databases the digitization of biology: understanding the new risks and implications for governance on dna signatures, their dual-use potential for gmo counterfeiting, and a second) dissertation, biomedical sciences a covert authentication and security solution for gmos a covert authentication and security solution for gmos point of view: a transatlantic perspective on 20 emerging issues in biological engineering the intelligent and connected bio-labs of the future cyberbiosecurity: from naive trust to risk awareness cyberbiosecurity implications for the laboratory of the future building capacity for cyberbiosecurity training cyberbiosecurity in advanced cyber safety us hospitals turn away patients as ransomware strikes bloomberg, hackers "without conscience" demand ransom from dozens of hospitals and labs working on coronavirus cybersecurity in healthcare: a systematic review of modern threats and trends institute for critical infrastructure technology, the cybersecurity think tank (nd overview of security and privacy in cyber-physical systems mysterious 08 turkey pipeline blast opened new cyberwar era adaptations of avian flu virus are a cause for concern cyberbiosecurity: a new perspective on protecting u.s. 
food and agricultural system cyberbiosecurity for biopharmaceutical products defending our public biological databases as a global critical infrastructure cyberbiosecurity: a call for cooperation in a new threat landscape are market gm plants an unrecognized platform for bioterrorism and biocrime? the australia group (nd) the nuclear threat initiative, biosecurity reducing biological risk and enhancing global biosecurity (nd) vbc launches biosecurity codes section national institutes of health, national science advisory board for biosecurity (nd blue ribbon study panel on biodefense (nd) top cyber security experts report: 4,000 cyber attacks a day since covid-19 pandemic the covid-19 pandemic and its impact on cybersecurity environmentally applied nucleic acids and proteins for purposes of engineering changes to genes and other genetic material agricultural research, or a new bioweapon system? plant-protecting rnai compositions comprising plant-protecting double-stranded rna adsorbed onto layered double hydroxide particles systems and methods for delivering nucleic acids to a plant methods and compositions for introducing nucleic acids into plants the next generation of insecticides: dsrna is stable as a foliar-applied insecticide the new alchemists: the risks of genetic modification, the new alchemists: the risks of genetic modification why gene editors like crispr/cas may be a game-changer for neuroweapons development of an artificial cell, from self-organization to computation and self-reproduction vesicle-based artificial cells as chemical microreactors with spatially segregated reaction pathways aims and methods of biosteganography anticipating xenogenic pollution at the source: impact of sterilizations on dna release from microbial cultures psychology and security resource page (nd) is confidence in the monitoring of ge foods justified? 
next steps for access to safe, secure dna synthesis identifying personal microbiomes using metagenomic codes perspectives on harmful algal blooms (habs) and the cyberbiosecurity of freshwater systems genetically modified seeds and plant propagating material in europe: potential routes of entrance and current status methods for data encoding in dna and genetically modified organism authentication, united states patent a reference model of information assurance & security an update on our security incident key: cord-011688-8g0p3vtm authors: wang, ting-ting; zhou, ming; hu, xue-feng; liu, jiang-qin title: perinatal risk factors for pulmonary hemorrhage in extremely low-birth-weight infants date: 2019-11-04 journal: world j pediatr doi: 10.1007/s12519-019-00322-7 sha: doc_id: 11688 cord_uid: 8g0p3vtm background: pulmonary hemorrhage (ph) is a life-threatening respiratory complication of extremely low-birth-weight infants (elbwis). however, the risk factors for ph are controversial. therefore, the purpose of this study was to analyze the perinatal risk factors and short-term outcomes of ph in elbwis. methods: this was a retrospective cohort study of live born infants who had birth weights that were less than 1000 g, lived for at least 12 hours, and did not have major congenital anomalies. a logistic regression model was established to analyze the risk factors associated with ph. results: there were 168 elbwis born during this period. a total of 160 infants were included, and 30 infants were diagnosed with ph. risk factors including gestational age, small for gestational age, intubation in the delivery room, surfactant in the delivery room, repeated use of surfactant, higher fio(2) during the first day, invasive ventilation during the first day and early onset sepsis (eos) were associated with the occurrence of ph by univariate analysis. in the logistic regression model, eos was found to be an independent risk factor for ph. 
the mortality and intraventricular hemorrhage rate of the group of elbwis with ph were significantly higher than those of the group of elbwis without ph. the rates of periventricular leukomalacia, moderate-to-severe bronchopulmonary dysplasia and severe retinopathy of prematurity, and the duration of the hospital stay were not significantly different between the ph and no-ph groups. conclusions: although ph did not extend hospital stay or increase the risk of bronchopulmonary dysplasia, it increased the mortality and intraventricular hemorrhage rate in elbwis. eos was the independent risk factor for ph in elbwis. pulmonary hemorrhage (ph) is a life-threatening respiratory complication of newborns [1], especially in extremely low-birth-weight infants (elbwis), who are vulnerable to conditions that require invasive ventilation and intensive care after birth. the incidence of clinical ph is estimated to be 1-12 per 1000 live births [2], whereas the rate of ph in very-low-birth-weight infants (vlbwis) varies from 4-12% [1-4]. the variation in incidence is mainly due to the unclear etiology and inconsistent diagnostic criteria of ph. the pathophysiology of ph in newborns is hemorrhagic edema [1, 5]. the severity may vary from a mild, self-limited disorder to a massive, deteriorating and end-stage syndrome. it is associated with significant morbidity and high mortality. usually, infants with ph need aggressive positive pressure ventilation, high oxygen supplementation, critical circulatory support and blood transfusions. asphyxia, prematurity, intrauterine growth restriction, infection, hypoxia and coagulopathy are considered perinatal risk factors for ph in many studies [1, 3, 6]. a few case reports have linked ph in healthy term infants to inborn errors of metabolism.
furthermore, risk factors associated with the care of preterm infants, including surfactant replacement, the management of patent ductus arteriosus (pda) and fluid intake, might be prominent in elbwis with ph [7-9]. however, the risk factors for ph in elbwis are controversial, and more studies are needed to further enhance the understanding of the pathophysiology of ph in these extremely premature infants. therefore, the purpose of this study was to analyze the perinatal risk factors and short-term outcomes of ph in elbwis. this is a retrospective cohort study. infants born at a hospital between january 1st, 2014 and december 31st, 2017 were eligible for the analysis if they had a birth weight less than 1000 g, lived for at least 12 hours, and had no major congenital anomalies. elbwis were excluded from the study if their parents decided to withdraw treatment of their newborns within the first 12 hours of life due to extreme prematurity. infants transferred to other children's hospitals due to cardiac, gastrointestinal or other abnormalities within the first week of life were also excluded. this study was approved by the ethics committee of the hospital. all medical records/information were anonymized and deidentified prior to analysis. all elbwis were resuscitated by a pediatric team led by an attending pediatric physician according to the management guidelines for elbwis. briefly, the elbwis were wrapped in plastic bags under a radiant warmer and given respiratory support by a t-piece resuscitator. a peep of 5 cmh2o and/or pip of 20 cmh2o was provided through a face mask immediately after birth. intubation and/or prophylactic surfactant replacement was provided at the discretion of the attending physician in the delivery room. oxygen supplementation was given and adjusted according to the target saturation on a pulse oximeter [10].
when the infants were transferred into the neonatal intensive care unit (nicu) and put on a ventilator or nasal continuous positive airway pressure (ncpap), a physician on duty at the nicu evaluated the respiratory severity and decided whether to extubate the infant to ncpap after giving surfactant if required. an umbilical venous catheter was inserted, and total parenteral nutrition (tpn) infusion was given. ph was defined as bright red blood secretion from the endotracheal tube associated with clinical deterioration, including increased ventilator support with a fraction of inspired oxygen (fio2) increase of > 0.3 from baseline [1] or an acute drop in hematocrit (> 10%) [4], in addition to multilobular infiltrates on chest radiography. the ventilation record of every infant was reviewed independently by two attending neonatologists, who confirmed the diagnosis of ph. when a clinical diagnosis of ph was made, the infant was intubated and ventilated with high-frequency oscillatory ventilation (hfov). the ventilation parameters were adjusted appropriately according to the oxygen saturation, the results of arterial blood gas assessment and the chest x-ray. surfactant replacement was considered if necessary. the perinatal data of all infants and their mothers were collected by retrospective chart review and contained sex, gestational age (ga), birth weight (bw), small for gestational age (sga), apgar score at 1 and 5 minutes, delivery method, maternal age, prenatal infection, pregnancy hypertension, gestational diabetes (gdm), prenatal antibiotics and corticosteroids, cause of premature birth, cervical cerclage, surgery during pregnancy, and placental abruption. the short-term outcomes of the infants were also recorded. neonatal respiratory distress syndrome (nrds) and its severity were diagnosed by the neonatologists of the nicu based on the clinical profile and chest radiograph.
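the clinical definition above can be sketched as a boolean check; this is a minimal illustration with the thresholds exactly as stated in the text (the function name and inputs are hypothetical, not from the paper):

```python
def meets_ph_criteria(bloody_ett_secretion, fio2_increase, hct_drop_pct, multilobular_infiltrates):
    # PH per the study definition: bright red endotracheal secretion,
    # plus clinical deterioration (FiO2 increase > 0.3 from baseline
    # OR acute hematocrit drop > 10%), plus multilobular infiltrates
    # on chest radiography. All three components must be present.
    deterioration = fio2_increase > 0.3 or hct_drop_pct > 10.0
    return bool(bloody_ett_secretion and deterioration and multilobular_infiltrates)
```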
early onset sepsis (eos) was defined as infectious disease within 72 hours after birth as confirmed by blood culture. brain injury, including grade iii-iv intraventricular hemorrhage (ivh) and periventricular leukomalacia (pvl), was identified by serial head ultrasounds. bronchopulmonary dysplasia (bpd) was defined as the requirement for supplemental oxygen at 36 weeks postmenstrual age among infants who survived to nicu discharge. retinopathy of prematurity (rop) was screened by an ophthalmologist. echocardiography was performed between days 3 and 7 by a cardiologist and repeated as appropriate. hemodynamically significant pda was managed by neonatologists, and ibuprofen was given to close the patent ductus. the treatment was withheld if gastrointestinal bleeding or oliguria (urine output of less than 1 ml/kg/hour) was identified, according to the protocol for pda management in our nicu. the neonatologists decided to transfer an elbwi for surgical ligation if more than two courses of oral ibuprofen were given and the pda was still significant [11]. the data were analyzed with spss version 22.0. descriptive statistical analyses were used to describe the characteristics of mothers and infants. normally distributed results are reported as the mean and standard deviation (sd); the remaining results are reported as the median and interquartile range (iqr) or as percentages. the chi-squared test, student's t-test and a logistic regression model were used for statistical analysis. a total of 168 elbwis were born in this hospital and admitted to the nicu between january 1st, 2014, and december 31st, 2017. six infants were transferred to other hospitals for surgical diseases, and two infants died (they were identical twins who were born at 25 weeks and 5 days of ga; their birth weights were 840 g and 675 g, respectively). their parents withdrew care within 12 hours of life due to concerns about adverse long-term outcomes.
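as a concrete illustration of the group comparisons, a pearson chi-squared statistic for a 2x2 table can be computed directly; the sketch below uses the mortality counts reported later in the text (13/30 with ph vs. 23/130 without) and omits the continuity correction that statistical packages may apply:

```python
def chi2_2x2(a, b, c, d):
    # Pearson chi-squared statistic (no continuity correction)
    # for the 2x2 table [[a, b], [c, d]].
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# deaths/survivors: 13/17 in the PH group, 23/107 in the no-PH group
stat = chi2_2x2(13, 17, 23, 107)  # exceeds 3.84, so p < 0.05 at 1 df
```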
among the 160 infants included in this study, 30 infants were diagnosed with ph (ph group), giving an incidence of ph in these elbwis of 18.75%. the median age at ph occurrence was 3 (iqr 2-4.5) days. one elbwi each had ph occur within 24 hours, on day 5, on day 7 and on day 12 after birth; 21 had ph that occurred on days 2-3 after birth, three on day 6 and two on day 11. the perinatal risk factors for ph are listed in tables 1 and 2. the ga of the infants with ph was significantly lower than that of the non-ph infants. there were fewer sga infants in the ph group than in the no-ph group. because most cases of ph occurred within 3 days of life and the majority occurred in the first week of life, the average fluid intake within the first 3 and 7 days of life was also compared between the ph and no-ph groups. unsurprisingly, the infants with ph were more likely to be intubated and treated with surfactant and oxygen supplementation. a multivariate analysis (including ga, sga, intubation in the delivery room, surfactant in the delivery room, repeated use of surfactant, higher fio2 during the first day, invasive ventilation during the first day, and eos) was performed using the logistic regression model, which found that eos was an independent risk factor for ph (table 3). the mortality of infants with ph was 43.3% (13/30), which was significantly higher than that of infants without ph (17.7%, 23/130). the rate of major ivh was higher in the ph group than in the no-ph group. however, the rates of pvl, moderate-to-severe bpd, and severe rop were not significantly different between the ph and no-ph groups (table 4). among the 124 patients who were discharged home (17 in the ph group and 107 in the no-ph group), there were no significant differences in the duration of assisted ventilation, invasive mechanical ventilation, oxygen supplementation, or hospital stay, or in moderate-to-severe bpd, between the two groups (table 5).
in this study, we found that elbwis with ph were likely to be intubated and to require surfactant therapy, invasive ventilation and oxygen supplementation, and their mortality and major ivh rates were also increased. logistic regression analysis showed that eos could increase the risk of ph. the incidence of ph is significantly higher in elbwis than in other neonatal populations, and the precise etiology remains unclear. a 10-year retrospective study has shown that the rate of ph in vlbwis is 4% [3]. another study reported that the rate of ph was approximately 8% in vlbwis but was 11-16.6% in elbwis [2, 9]. in our cohort, the rate of ph in elbwis was 18.8%. it has been shown that sga, eos, low birth weight (lbw), lower apgar scores at 1 and 5 minutes, severe rds and surfactant replacement are risk factors for ph [12]. usually, smaller gestational age and lower birth weight increase the odds of eos in preterm infants. ph may occur as a result of unstable hemodynamics and coagulopathy in elbwis with eos. it has been proven that delayed cord clamping reduces the risk of ph [13]. circulatory stabilization is the fundamental management strategy for elbwis and reduces the risk not only of pulmonary disease but also of mortality and ivh. many studies have shown that pda is associated with the occurrence of ph [6, 8, 14]. as a result of decreased pulmonary vascular resistance, left-to-right shunting through the pda increases blood flow and the pressure state of the pulmonary vessels, which may compromise cardiac function with an increased risk of ph [5]. in our cohort, the rates of pda and requirement for treatment were higher in infants with ph than in those without, but the differences were not statistically significant. interestingly, the time of ph occurrence in our cohort was earlier than that of the development of hemodynamically significant pda [15]. another reason might be the active management of pda in elbwis [16].
in our study, 71.4% of the infants with pda in the ph group and 65.8% in the no-ph group required oral ibuprofen or ligation. in addition, no significant gastrointestinal bleeding or oliguria was observed in either the ph or the no-ph group when ibuprofen was given, while the side effects of ibuprofen were fewer than those of indomethacin [17]. in addition, fluid intake overload within the first week has been associated with pda and ph [18, 19]. polglase et al [20] demonstrated that immediately after an intravenous volume overload, lambs had increases in pulmonary blood flow and left ventricular ejection volume; 50% of them developed ph. the elevation in pulmonary capillary pressure can lead to alveolar capillary wall injury, causing pulmonary edema due to increased permeability with the passage of proteins [21]. in our study, the fluid intake of these elbwis was restricted to an average of 110-120 ml/kg/day to reduce the risk of bpd and hemodynamically significant pda [22] and showed no difference between infants with ph and those without ph. surfactant replacement is a standard treatment for rds. it has been shown that surfactant replacement increases the risk of ph [23]. in contrast, some studies have reported that the rates of ph are not different before or after surfactant replacement therapy [9]. it is reasonable to postulate that the infants who need surfactant are sicker and more likely to have ph than those who do not need surfactant. although an in vitro study showed that the presence of surfactant impaired coagulation function [24], this finding has not been proven clinically. on the other hand, infants with ph can be treated with surfactant because of the inhibition of surfactant function by blood. a few retrospective and observational reports have suggested benefits of surfactant for ph. however, the effect of this therapy remains to be established [25].
it seems that the chemical composition of different surfactant types affects the risk of ph [26]. infants given poractant alfa had a significantly higher rate of ph (21%) than infants treated with surfactant-ta (10%) [26]. however, the clinical risk index for babies scores were also higher in infants treated with poractant alfa than in infants treated with surfactant-ta. in our cohort, the infants with ph were similar to the infants without ph in terms of surfactant administration in the delivery room or nicu. however, the infants with ph needed multiple doses of surfactant. infants who were given surfactant prophylactically in the delivery room did not have an increased risk of ph. ph is a life-threatening condition of hemorrhagic pulmonary edema with high mortality. in our study, the mortality of elbwis with ph was 43% (vs. 18% in the no-ph group), similar to previous reports [9]. the rate of major intraventricular hemorrhage was significantly higher in the ph infants than in the non-ph infants (10% and 2%, respectively, p < 0.05). both ph and intraventricular hemorrhage are related to perinatal hemodynamic instability [13]. the effective management of ph includes positive pressure ventilation [4], blood transfusion and circulation support. however, there were no significant differences in mechanical ventilation, oxygen supplementation, or hospital stay between surviving infants in the ph and no-ph groups, mainly because these factors, in addition to ph, are independently related to prematurity. this is a retrospective study in a single center in shanghai, which may not be able to highlight all the risk factors for ph in elbwis due to the limited data and small sample size. however, analyzing the risk factors for ph will help physicians to better understand why ph occurs and how to prevent it. in summary, ph is an adverse pathophysiological event in elbwis that occurs mostly within the first 72 hours of life.
ph increases the risk of mortality and major intraventricular hemorrhage, and early onset sepsis is an independent risk factor for ph. funding: no funding was received. ethical approval: this study was approved by the ethics committee of the shanghai first maternity and infant hospital, tongji university school of medicine. no financial or nonfinancial benefits have been received or will be received from any party related directly or indirectly to the subject of this article. open access: this article is distributed under the terms of the creative commons attribution 4.0 international license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the creative commons license, and indicate if changes were made.
references:
- pulmonary hemorrhage: clinical course and outcomes among very low-birth-weight infants
- prevalence, risk factors and outcomes associated with pulmonary hemorrhage in newborns
- pulmonary hemorrhage in very low-birthweight infants: risk factors and management
- short-term outcome of pulmonary hemorrhage in very-low-birthweight preterm infants
- improvement in mortality of very low birthweight infants and the changing pattern of neonatal mortality: the 50-year experience of one perinatal centre
- risk factors of pulmonary hemorrhage in very low birth weight infants: a two-year retrospective study
- early versus delayed neonatal administration of a synthetic surfactant: the judgment of osiris
- ductal shunting, high pulmonary blood flow, and pulmonary hemorrhage
- high-risk factors and clinical characteristics of massive pulmonary hemorrhage in infants with extremely low birth weight
- defining the reference range for oxygen saturation for infants after birth
- pda: to treat or not to treat
- pulmonary hemorrhage (ph) in extremely low birth weight (elbw) infants: successful treatment with surfactant
- circulatory management focusing on preventing intraventricular hemorrhage and pulmonary hemorrhage in preterm infants
- prevention and 18-month outcomes of serious pulmonary hemorrhage in extremely low birth weight infants: results from the trial of indomethacin prophylaxis in preterms
- intravenous paracetamol treatment in the management of patent ductus arteriosus in extremely low birth weight infants
- failure of a repeat course of cyclooxygenase inhibitor to close a pda is a risk factor for developing chronic lung disease in elbw infants
- ibuprofen for the prevention of patent ductus arteriosus in preterm and/or low birth weight infants
- fluid regimens in the first week of life may increase risk of patent ductus arteriosus in extremely low birth weight infants
- risk factor profile of massive pulmonary haemorrhage in neonates: the impact on survival studied in a tertiary care centre
- cardiopulmonary haemodynamics in lambs during induced capillary leakage immediately after preterm birth
- stress failure of pulmonary capillaries: role in lung and heart disease
- neonatal research network: association between fluid intake and weight loss during the first ten days of life and risk of bronchopulmonary dysplasia in extremely low birth weight infants
- comparison of two natural surfactants for pulmonary hemorrhage in very low-birthweight infants: a randomized controlled trial
- surfactant impairs coagulation in-vitro: a risk factor for pulmonary hemorrhage?
- surfactant for pulmonary haemorrhage in neonates
- efficacy of surfactant-ta, calfactant and poractant alfa for preterm infants with respiratory distress syndrome: a retrospective study
acknowledgements: we thank dr. po-yin cheung for his professional guidance in the preparation of this paper. author contributions: ttw collected and analyzed the data and drafted the manuscript. mz and xfh collected the data. jql designed the study. all authors approved the final version of the manuscript.
key: cord-002906-llstohys authors: you, shu-han; chen, szu-chieh; liao, chung-min title: health-seeking behavior and transmission dynamics in the control of influenza infection among different age groups date: 2018-03-06 journal: infect drug resist doi: 10.2147/idr.s153797 sha: doc_id: 2906 cord_uid: llstohys background: it has been found that health-seeking behavior has a certain impact on influenza infection. however, the effect of behaviors with/without risk perception on the control of influenza transmission among age groups has not been well quantified. objectives: the purpose of this study was to assess to what extent, under scenarios of with/without control and preventive/protective behaviors, age-specific network-driven risk perception influences influenza infection. materials and methods: a behavior-influenza model was used to estimate the spread rate of age-specific risk perception in response to an influenza outbreak. a network-based information model was used to assess the effect of network-driven risk perception information transmission on influenza infection. a probabilistic risk model was used to assess the infection risk effect of risk perception with a health behavior change. results: the age-specific overlapping percentage was estimated to be 40%–43%, 55%–60%, and 19%–35% for the child, teenage and adult, and elderly age groups, respectively. perceived preventive behavior improved risk perception information transmission in the teenage and adult and elderly age groups, but not in the child age group. the population with perceived health behaviors could not effectively decrease the percentage of infection risk in the child age group, whereas for the elderly age group the decrease in infection risk was more significant, with a 97.5th percentile estimate of 97%.
conclusion: the present integrated behavior-infection model can help health authorities in communicating health messages for an intertwined belief network in which health-seeking behavior plays a key role in controlling influenza infection. it has been found that health-seeking behavior has a certain impact on influenza infection. 1 therefore, to facilitate public health decisions about intervention and management in controlling the spread of infectious diseases, it is crucial to assess to what extent, under scenarios of with/without control and preventive/protective behaviors, age-specific network-driven risk perception influences influenza infection. 2 to control respiratory infectious diseases, the development of vaccination, contact tracing, isolation, and the promotion of protective behaviors are important measures. indeed, the effectiveness of control measures depends greatly on human beliefs, public infection awareness, and risk perception, which drive changes in self-behavior. 3 risk perception can be referred to as an awareness or belief about a potential hazard and/or harm, which plays an important role in shaping health-related behaviors to reduce susceptibility and infectivity. 4 generally, risk perception is affected by social factors such as media releases by health authorities, observation of or interaction with relation-specific groups, past experiences of similar hazards, habits, and culture. 5 these factors result in variation in risk perception among individuals. epidemiological studies have found that variances in risk perception can be observed by examining the behavioral responses among different age groups. steelfisher et al 6 indicated that 60% of the adult population said that they did not intend to acquire the h1n1 vaccine for themselves.
in addition, perception of vaccine safety and personal vulnerability were the major reasons for vaccine acceptance. allison et al 7 indicated that children could adopt accurate protective behavior; for example, they could use hand gel to prevent influenza. on the other hand, childhood vaccination is more likely to depend on parental decision making. moreover, researchers have suggested assessing risk perception and behavior across different age groups. 8, 9 a social network could be an important social structure in which people exchange information about risk-related events that spurs health behavior change. 10 scherer and cho 11 suggested that individual perceptions could be affected by self-perception in the social network. researchers have explored the interactions between epidemic spreading and risk perception in networks. 12, 13 however, the influence of risk perception on the risk of infectious disease is controversial, because the perceptual capacity of individuals may both create and reduce disease risks. therefore, the behavior-disease dynamics in the social network structure may result in amplification or attenuation of a disease outbreak. most epidemic modeling techniques have used a simple epidemic model such as the susceptible-infected-recovered (sir) model for describing a homogeneous disease network. moreover, the effects of networks coupled with human responses on disease spreading have been studied extensively and have attracted substantial attention. funk et al 12 used a sir-based perceptual-influenza model to examine the effects of risk perception on behavioral change and susceptibility reduction. they also indicated that the effects within a disease network can induce health behavioral changes in the population. in turn, the influence of risk perception could result in a feedback signal that alters the progress of the disease dynamics.
12, 14 recently, information theoretic approaches have been applied to infer relations in disease or social networks. 15, 16 zhao et al 15 developed a model to quantify the effects of a dynamic network, indicating that behavioral responses correspond to the entropy derived from different information content of the dynamic social network. greenbaum et al 17 proposed an information theoretic model to assess pandemic risk. they indicated that mutual information was a key determinant in minimizing the risk of pandemic threats. we have previously incorporated the information-theoretic framework into a behavior-influenza (bi) transmission dynamic system to understand the effect of individual behavioral change on influenza epidemics. 18, 19 here we assess if, how, and to what extent, under different scenarios of with/without control and preventive/protective behaviors, age-specific network-driven risk perception influences influenza infection. in this study, we analyzed the emergency admission rates from the weekly ili visits, which were reported by sentinel primary care physicians and obtained from the taiwan centers for disease control (tcdc). the ili cases were detected through the real-time outbreak and taiwan national infectious disease statistics system. 21 an ili case must meet three criteria: 1) fever (ear temperature ≥37.8°c) and respiratory tract symptoms (including rhinorrhea, nasal congestion, sneezing, sore throat, cough, and dyspnea); 2) one of the symptoms of muscle ache, headache, and extreme fatigue; and 3) exclusion of simple runny nose, tonsillitis, and bronchitis. data on emergency admission rates for six influenza seasons in the period from week 8 of 2007 to week 13 of 2013 were adopted to test how health-seeking behavior influences influenza infection dynamics. an influenza season was defined as july 1 (week 26) to june 30 (week 25) of the following year in taiwan.
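the three-part ili case definition can be encoded as a simple check; a sketch with hypothetical argument names (symptom inputs are sets of reported symptom strings):

```python
def is_ili_case(ear_temp_c, resp_symptoms, other_symptoms, exclusion_only):
    # Criterion 1: fever (ear temperature >= 37.8 C) AND at least one
    # respiratory tract symptom.
    # Criterion 2: at least one of muscle ache, headache, extreme fatigue.
    # Criterion 3: not explained by simple runny nose, tonsillitis,
    # or bronchitis alone (exclusion_only flags that situation).
    fever = ear_temp_c >= 37.8
    systemic = bool(other_symptoms & {"muscle ache", "headache", "extreme fatigue"})
    return fever and bool(resp_symptoms) and systemic and not exclusion_only
```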
the ili-related emergency admission rates were detected by using the icd-9-cm codes for influenza and pneumonia (480-487). we also estimated the age-specific admission infection fraction (if) for each age group, including child (0-14 years), teenage and adult (15-64 years), and elderly (65+ years), for different human behaviors or influenza risk perceptions. we multiplied the annual mid-year population estimates 20 by the incidence rate per 100,000 and then divided the result by the number of ili visits to estimate if, 21 which is given as if_ij = (incidence rate_ij × mid-year population_ij / 100,000) / (number of ili visits_ij) (equation 1), where i denotes the different age groups (child, teenage and adult, and elderly) and j is the yearly based time period in 2007-2013. the concept of the bi model developed in our previous studies 18, 19 mainly incorporated the sir-based perception model 12 into an information-theoretic framework, which was used to simulate the information flow of risk perception in response to an influenza outbreak. briefly, the bi model uses six compartments to represent the disease states of susceptible, infected, and recovered by dividing the population into a with/without perception structure. 12 the description of input parameters for the bi model is given in table 1. the basic reproduction number (r0) can be used to quantify disease infection severity, defined as the average number of secondary cases produced successfully by an infected individual in a totally susceptible population. 22 therefore, based on the bi model, we can also estimate r0 with the perception state (r0^a) and without the perception state (r0^d). the input source information with perception can be described as s_a = r0^a = a/λ, where a is the rate of perception spread and λ is the rate of perception loss.
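the if calculation, as reconstructed above, amounts to converting the per-100,000 incidence rate into an estimated case count and dividing by the ili visits; a literal sketch with illustrative numbers (not the study's data):

```python
def infection_fraction(incidence_per_100k, midyear_population, ili_visits):
    # Convert the per-100,000 incidence rate into an estimated
    # number of cases for the age group...
    cases = incidence_per_100k * midyear_population / 100_000
    # ...then express it as a fraction of the reported ILI visits.
    return cases / ili_visits

# e.g. 50 per 100,000 in a group of 2,000,000 with 10,000 ILI visits
if_value = infection_fraction(50, 2_000_000, 10_000)
```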
on the other hand, the input source information without perception can be described as s_d = r0^d = β/γ, where β is the infection rate describing contact between infected and susceptible populations and γ is the recovery rate from the infected to the recovered population. 19 we assumed that r0 can be treated as the basic reproduction number resulting from individuals with risk perception information (r0,rpi) for each age group in the period of 2007-2013; thus, r0,rpi can be estimated accordingly (equation 2). 19 furthermore, to better characterize the perception spread rate for different age groups during each year (a_ij) in the bi transmission dynamics, we adopted a_ij from an epidemic equilibrium structure 12 describing the equilibrium information flow of risk perception from the population without perception (equation 3), where r0e^a is the basic reproduction number at equilibrium with information flow of risk perception from the population without perception, s_i is the reduced infectivity factor from infected with perception to susceptible without perception, w is the rate at which the infected become with perception, a is the perception spread rate, and γ_w/o is the recovery rate of the infected without perception. moreover, we assumed that people may make the decision to change behavior based on r0,rpi in the previous year. based on equation 3, a can then be rewritten as equation 4, where r0,rpi^(i,j) and r0,rpi^(i,j+1) are the basic reproduction numbers with risk perception information for age group i in years j and j+1, respectively, in the period of 2007-2013. to assess the effect of network-driven risk perception information transmission on influenza infection, we applied an information theoretic model referred to as the multiple access channel (mac), which is used to capture a signal r0 transmitting through multiple channels to the responses i_1, i_2, ..., i_n. we considered the network-driven risk perception information model (nm) with an information bottleneck (ib).
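the qualitative behavior of such a coupled perception-disease system can be seen in a toy simulation; the sketch below is a reduced two-susceptible-class sir with euler stepping, not the paper's six-compartment bi model, and all parameter values are illustrative:

```python
def final_size(beta=0.5, gamma=0.25, a=0.3, lam=0.05, sigma=0.3,
               days=300, dt=0.05):
    # s_u: unaware susceptibles; s_a: aware susceptibles whose
    # infection rate is scaled by sigma < 1; perception spreads
    # at rate a (driven by prevalence) and is lost at rate lam.
    s_u, s_a, i, r = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * i * (s_u + sigma * s_a)
        to_aware = a * s_u * i
        to_unaware = lam * s_a
        s_u += dt * (-beta * i * s_u - to_aware + to_unaware)
        s_a += dt * (-sigma * beta * i * s_a + to_aware - to_unaware)
        i += dt * (new_inf - gamma * i)
        r += dt * gamma * i
    return r  # final epidemic size (recovered fraction)
```

with these illustrative values, turning perception off (a = 0) yields a larger final epidemic size than a fast-spreading perception (a = 3), which is the feedback the bi model formalizes.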
19 the maximum mutual risk perception information (mi_max) resulting from the nm can be estimated from equation 5, 23 where n_e is the effective information from the contact numbers of individuals, σ²_r0 is the variance of the r0 signal distribution, σ²_ib→i is the variance introduced in each access channel through the ib to response i, and σ²_r0→ib is the variance introduced to the ib; the ratio of the signal variance to the total noise variance is the signal-to-noise ratio (snr). 23 on the other hand, the nm model with a negative feedback was considered to explore the effect of perceived different health behaviors on reducing susceptibility. 19 here, we used the correlation coefficient (r) and the overlapping percentage (i_o) to associate r0 and i from the published data (table s1) and to calculate the snr in equation 5. we estimated r based on the relationship between viral titer-based i and viral titer-based r0 corresponding to with/without perceived different health behaviors. briefly, we selected published papers (table s1) in which health behaviors treated with vaccinations and antiviral drugs for different subtypes of influenza were included. two protective behaviors (i.e., perception of carrying the disease, leading to vaccine and antiviral taking) were adopted in a state of greater alert. the value of r can be used to apportion the observed variability between the overall biological variability and experimental noise. on the other hand, i_o describes the age-specific overlapping percentage between the infected populations with/without perception, adjusted by the fraction of the initial infected population with perceptual state over that without perceptual state. here, we used three perceptual scenarios to assess our model, with initial infected population ratios of <1, =1, and >1.
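the snr-to-information step rests on the textbook gaussian-channel relation mi = 0.5 × log2(1 + snr); a minimal numeric sketch (the paper's equation 5 additionally folds in the bottleneck variances and the effective contact number n_e, which are not reproduced here):

```python
import math

def gaussian_channel_mi(signal_var, noise_var):
    # Mutual information (in bits) transmitted through a single
    # additive Gaussian channel with the given signal-to-noise ratio.
    snr = signal_var / noise_var
    return 0.5 * math.log2(1.0 + snr)
```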
the i o can then be estimated as an algebraic manipulation of the probability density functions (pdfs) of s a and s d . 19 thus, following the information-theoretic theorem with known values of r, i o , and σ 2 r0 , the signal-to-noise ratio can be computed. 23 therefore, the nm model with a negative feedback in equation 5 can be rewritten for health-seeking behavior in the control of influenza infection, where i represents the individuals perceived with/without health behaviors. we further incorporated the estimated probability distributions of the model parameters with age-specific initial population sizes in the period of 2007-2013 (table 2) and a into the bi model to estimate the age-specific overlapping percentages. to parameterize the reduced susceptibility factor with regard to adopting preventive behaviors, including using masks, avoiding visiting crowded places, and hand washing 24 (s s,pre ), and the protective behavior of vaccination 26 (s s,pro ), we applied a standard logistic regression-based equation for mathematically expressing the components of the health-behavior model (hbm). the hbm with the standard logistic regression-based equation has been applied to estimate preventive/protective health behaviors in respiratory infectious diseases such as severe acute respiratory syndrome (sars) and influenza, 24-26 and in other diseases. 27-29 the estimates are equivalent to the decisions of rational individuals with influenza knowledge. here, s s can be expressed in terms of odds ratios (ors) depending on health behaviors perceived to be associated with each hbm variable, 26 where s s is the probability of the hbm-based health behaviors (such as preventive behavior, s s,pre , and protective behavior, s s,pro ) and x is a binary variable with a value of 1 indicating a "high" state and a value of 0 indicating a "low" state. or 0 is a calibration factor when all hbm variables are in a "low" state.
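the odds-ratio form described above is the standard logistic-regression identity: the odds when all variables are "low" (or 0 ) are multiplied by each variable's or when that variable is "high", and the odds are then converted to a probability. a hedged sketch (the or values below are invented for illustration, not the paper's fitted values):

```python
def hbm_probability(or0, odds_ratios, x):
    """logistic (odds-ratio) form of the health-behavior model.

    or0         : calibration odds when every hbm variable is 'low' (all x = 0)
    odds_ratios : or_i for each hbm variable (e.g. perceived susceptibility)
    x           : binary indicators, 1 = 'high' state, 0 = 'low' state
    returns the probability s_s of engaging in the behavior; this product
    form is the standard logistic-regression identity, assumed here to
    correspond to the paper's equation 8.
    """
    odds = or0
    for or_i, x_i in zip(odds_ratios, x):
        odds *= or_i ** x_i        # 'high' variables multiply the odds
    return odds / (1.0 + odds)     # convert odds back to a probability

p_all_low = hbm_probability(0.25, [2.0, 3.0], [0, 0])   # odds 0.25 -> p = 0.2
p_all_high = hbm_probability(0.25, [2.0, 3.0], [1, 1])  # odds 1.5  -> p = 0.6
```

under the decision rule used in the paper, the second individual (p = 0.6 ≥ 0.5) would be classified as engaging in the behavior and the first would not.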
s s represents the probability that an individual engages in a particular behavior and can be calculated from equation 8; s s ≥ 0.5 indicates that an individual engages in a specific health behavior. r 0 perception-based probabilistic risk assessment: to develop a probabilistic risk model, a dose-response model describing the relationship between the transmission potential quantified by the signal r 0 and the total proportion of the infected population (i) has to be constructed. in a previous study, 18 we successfully employed the joint probability distribution to assess the risk profile. it can be expressed mathematically with r(i) as the cumulative distribution function describing the probabilistic infection risk in a susceptible population at a specific r 0 signal, p(r 0 ) as the probability distribution of the r 0 signal (the prior probability), and p(i|r 0 ) as the conditional response distribution describing the dose-response relationship between i and r 0 . the exceedance risk profile can be obtained by 1 - r(i). in view of equation 2, we can relate p(i, r 0 ) to r(i) in equation 9. thus, the mutual information in these interdependences between belief of risk perception and infection risk can then be expressed as a mechanism of interpersonal influence described in equation 10. in table 1, the numbers of ili visits were 2.4 × 10 4 ± 1 × 10 4 (mean ± standard deviation [sd]), 2.8 × 10 4 ± 1.1 × 10 4 , and 8.4 × 10 3 ± 1.7 × 10 3 per month for the child, teenage and adult, and elderly age groups, respectively. figure 1 shows the ili-related emergency admission rates and if among the three age groups: child (0-14 years), teenage and adult, and elderly. during the study period (2007-2013), the ili-related emergency admission rates were estimated to be 16.8 ± 7.3 (mean ± sd), 1.2 ± 0.8, and 3.5 ± 1.3 per 10,000 population for the child, teenage and adult, and elderly age groups, respectively.
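the exceedance profile 1 - r(i) described above can be approximated by monte carlo: sample r 0 from its lognormal prior and map each draw to an infected fraction. in the sketch below, the classic sir final-size relation i = 1 - exp(-r 0 · i) is used as a stand-in for the paper's conditional dose-response p(i|r 0 ), and the gm/gsd values are illustrative, not the paper's fits:

```python
import math
import random

def exceedance_risk(i_threshold, gm=1.5, gsd=1.1, n=20000, seed=1):
    """monte-carlo sketch of the exceedance profile 1 - r(i).

    r0 is drawn from a lognormal prior ln(gm, gsd); each draw is mapped to
    an infected fraction i via the classic final-size fixed point
    i = 1 - exp(-r0 * i), a stand-in for the conditional p(i | r0).
    """
    random.seed(seed)
    mu, sigma = math.log(gm), math.log(gsd)
    exceed = 0
    for _ in range(n):
        r0 = math.exp(random.gauss(mu, sigma))
        i = 0.5                      # fixed-point iteration for final size
        for _ in range(50):
            i = 1.0 - math.exp(-r0 * i)
        if i > i_threshold:
            exceed += 1
    return exceed / n

risk = exceedance_risk(0.3)   # probability the infected fraction exceeds 0.3
```

with a prior centered near r 0 = 1.5, almost every draw produces a final size above 0.3, so the exceedance risk at that threshold is close to 1, while the risk of exceeding very large fractions is essentially 0.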
overall, the ili-related emergency admission rate was the highest in the child age group (6.4-67.3 per 10,000 population, minimum-maximum), whereas the lowest was observed in the teenage and adult age group (0.4-8.8 per 10,000 population; figure 1a). on the other hand, the highest ili-related emergency admission if was in the child age group (0.7 ± 0.1%), followed by the teenage and adult (0.2 ± 0.1%) and elderly (0.03 ± 0.01%) age groups (figure 1b). to run the bi model, the age-specific perception spread rate (a) has to be determined (equation 4). we first calculated age-specific r 0,rpi based on the age-specific ili-related admission if. our results indicated that the lognormal (ln) distribution with a geometric mean (gm) and a geometric standard deviation (gsd), ln(gm, gsd), was the most suitable fitted model for the r 0,rpi distributions: ln(1.78, 1.08), ln(1.14, 1.04), and ln(1.00, 1.01) for the child, teenage and adult, and elderly age groups, respectively (table 1). figure 2 demonstrates the age-specific overlapping percentage (i o ) between infected populations with/without perception, adjusted by the fraction of the initial infected population with perceptual state over those without perceptual state. we used three different scenarios of initial infected population fraction: i + /i - < 1 (figure 2a-c), i + /i - = 1 (figure 2d-f), and i + /i - > 1 (figure 2g-i). we showed that i + /i - > 1 results in the lowest estimates of i o in the child and elderly age groups (figure 2g and i), whereas for the teenage and adult age group, the estimate was the highest in the case of i + /i - = 1 (figure 2e). generally, i o estimates range from 40% to 43%, 55% to 60%, and 19% to 35% for the child, teenage and adult, and elderly age groups, respectively (figure 2). thus, we used i o based on the justified initial infected population fraction to further examine mi max for each age group.
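the ln(gm, gsd) parameterization above maps directly to the (mu, sigma) parameters of the underlying normal: mu = ln(gm) and sigma = ln(gsd), and the quantile at z standard-normal units is gm × gsd^z. a small sketch using the fitted child-group distribution ln(1.78, 1.08) from table 1:

```python
import math

def lognormal_from_gm_gsd(gm, gsd):
    """convert a geometric mean / geometric standard deviation pair to the
    (mu, sigma) parameters of the underlying normal, as in ln(gm, gsd)."""
    return math.log(gm), math.log(gsd)

def lognormal_quantile(gm, gsd, z):
    """quantile at z standard-normal units: gm * gsd**z."""
    return gm * gsd ** z

mu, sigma = lognormal_from_gm_gsd(1.78, 1.08)   # child-group r0,rpi fit
median = lognormal_quantile(1.78, 1.08, 0.0)    # median equals the gm
p97_5 = lognormal_quantile(1.78, 1.08, 1.96)    # upper bound of the 95% range
```

because the gsd values fitted here are close to 1, the r 0,rpi distributions are tight around their geometric means, which is why the group medians in table 1 summarize them well.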
generally, an individual with risk-perceptual status is more likely to hold the communicable belief within the population. in the case of effective information from contact numbers of individuals (n e = 1; figure 3a), when i + /i - < 1 and i + /i - = 1, mi max was <1 bit. on the other hand, when i + /i - > 1, as the perceptual information increased in the population, mi max was >1 bit, indicating that the network-based information reflected cooperativity. our results showed that the mi max -n e profile featured a smooth shape (figure 3b). in the elderly group, as the strength of n e increased, the mi max -n e profile followed a nearly smooth curve. on the other hand, in the child and elderly groups, when n e was >6, mi max was >1 bit. our results indicated that mi max ranged from 0.4 to 1.2 bits, 0.5 to 1.4 bits, and 1.2 to 2.4 bits for the child, teenage and adult, and elderly age groups, respectively, given n e ranging from 1 to 6 (figure 3b). to explore the impact of n e -varying perceived health behavior information on mi max , we estimated the correlation coefficient (r) based on the relationship between viral titer-based i and viral titer-based r 0 corresponding to with/without perceived different health behaviors. the resulting estimates were r w = 0.7 and r w/o = 0.4 (figure s1). we further used equation 7 to calculate mi max based on the overlapping percentage (i o ) and r affected by n e . the results indicated that mi max ranged from 0.9 to 1.9 bits, 1.0 to 2.1 bits, and 1.2 to 2.4 bits for the without-control, preventive, and protective behaviors in the child age group, respectively (figure 4a). for the teenage and adult age group, mi max ranged from 1.1 to 1.5 bits, 0.8 to 1.9 bits, and 3.5 to 4.8 bits for the without-control, preventive, and protective behaviors, respectively (figure 4b).
our results showed that individuals' perceiving the health behaviors increased mi max in the child and the teenage and adult age groups (figure 4a and b, respectively). our results also revealed that perceiving the preventive behavior improved risk perception in the teenage and adult age group (figure 4b) and the elderly age group (figure 4c), but not in the child age group. our results indicated that there was a 50% probability of exceeding an infected fraction of population (i) of 0.73, 0.23, and 0.34 for the child (figure 5a), teenage and adult (figure 5b), and elderly (figure 5c) age groups, respectively, in the condition of without perceived health behaviors. however, there was a 50% probability of reducing the infected fraction of population within the ranges of 0.004-0.20 for the preventive behavior and 0.007-0.10 for the protective behavior (figure 5). the age-specific ∆mi with respect to with/without health behaviors was estimated based on equations 7 and 10. we found that, for instance, without any control information released, the median percentages of decrease in infection risk with ∆mi = 1 and 2 for the elderly were 30 (1.5-83) and 57 (5.1-97), respectively, whereas the child age group had the lowest estimates of 5 (0.3-14) and 7 (0.3-79), respectively (figure 6a). on the other hand, ∆mi estimates at incremental mi changes with perceived health behaviors were 2%-3%, 10%-12%, and 26%-59% for the child, teenage and adult, and elderly age groups, respectively (figure 6b and c). the population with perceived health behaviors could not effectively decrease the percentage of infection risk in the child group, whereas for the elderly age group, the percentage of infection risk decreased more significantly, with a 97.5th percentile estimate of 97% (figure 6b and c). our results indicated that it may be preferable for children to adopt the protective behaviors. allison et al 7 indicated that the use of hand gel for hygiene was a feasible strategy in elementary schools to prevent influenza spread.
our results imply that children could use accurate knowledge about the protective behavior to prevent influenza infection effectively. our results found that the perceived protective behaviors enhanced mi max in adults, whereas the perceived vaccination behavior might not. a meta-analysis of eligible studies also confirmed that raising risk perception from low to high would have a potential effect on the vaccination behavior of adults. 30 we suggest that future studies should examine the differences among the health behaviors in adults. schneeberg et al 31 indicated that the vaccination rate for seasonal influenza was consistently low among the elderly population in canada. walter et al 32 indicated that the elderly population failed to obtain information about vaccine perception from the internet directly. it was also found that face mask wearing was easily performed by older adults in hong kong. 25 elderly people also appeared to be more active in conducting preventive measures. 33 in this article, we incorporated the probability-based hbm with regard to specific health behaviors into an sir-based epidemiological model. the hbm was used to examine individuals' perceptual dimensions such as perceived susceptibility, severity, benefits, and barriers. however, the hbm has raised somewhat controversial issues in exploring health behaviors such as vaccination programs. 34,35 the hbm presents a rational point of view that assumes the perceiver to be uninfluenced by emotion in describing the human response to an epidemic. 9,26 our study, however, establishes a more robust mechanistic framework for modeling the influence of network-driven risk perception on influenza infection. to our knowledge, we have taken the first step in exploring the effects of risk perception in a population on the spread of epidemics.
we believe that our present methodology provides an innovative approach that integrates an epidemiological model with information theory. we examined three scenarios describing different age-specific populations to overcome susceptibility risk due to less accurate knowledge of influenza. we found that the noise effect, reflected as overlapping percentages capturing uncertainty in accurate knowledge of influenza, can reduce risk-perception information transfer on the network through the epidemic transmission. for example, previous studies found that participants had misconceptions between the seasonal vaccine and the pandemic strain. 36,37 the effect of overlapping responses may have resulted from public health campaigns; for example, people were recommended to acquire the seasonal vaccine in pandemics. this may lead to a feedback mechanism between behavior change and disease dynamics. future work should carefully consider the effects of this noise on specific age groups. moreover, each intervention should be carefully investigated to the extent possible during an epidemic. this study has several limitations. the estimation of the information flow of risk perception in age groups depended on the ili data. indeed, the human response to influenza varies with time and hence is not possible to detect in real-time situations. moreover, perceptual states in specific age groups may be affected by the severity levels of disease, the amount of accurate information about influenza, and other health-related leaflets. therefore, we suggest that health authorities could reinforce health monitoring by using information technology and linking it to real-time epidemiological surveillance systems. a further limitation of our study is that we did not consider the influential factors on risk perception in an epidemic model.
hence, future research should explicitly consider a number of additional influential factors on risk perception within epidemic modeling, including disease prevalence, network effects, and government and media health messages. the findings of our study have implications for public health. risk communication might be more effective if health authorities focus on a variety of information communication channels for conveying health behavior messages. moreover, our findings concerning the perception of different health behaviors show substantial differences among age groups. we found that perceived protective behaviors (e.g., covering the mouth when coughing, hand washing) could reduce the infection risk for all age groups. this suggests that such crucial information would allow control measures to target resources toward designing and implementing education plans for the health behaviors that are least perceived. we developed an integrated mathematical model by incorporating the epidemiological transmission dynamics, the information flow of human responses, and an information-theoretic model to assess the effects of network-driven risk perception on influenza infection risk. the simulated human responses with perceived health behaviors could decrease the risk of infection among different age groups. we demonstrated that the risk perception among populations changed as the effective information varied with individuals' contact numbers. we conclude that the present integrated bi model can help public health authorities in communicating health messages for an intertwined belief network in which health-seeking behavior plays a key role in controlling influenza infection.
references
1. dynamic modeling of vaccinating behavior as a function of individual beliefs
2. assessing vaccination sentiments with online social media: implications for infectious disease dynamics and control
3. risk perceptions related to sars and avian influenza: theoretical foundations of current empirical research
4. the perception of risk. london: routledge
5. factors associated with increased risk perception of pandemic influenza in australia
6. the public's response to the 2009 h1n1 influenza pandemic
7. feasibility of elementary school children's use of hand gel and facemasks during influenza season. influenza other respir viruses
8. public knowledge, attitude and behavioural changes in an indian population during the influenza a (h1n1) outbreak
9. perceived risk, anxiety, and behavioural responses of the general public during the early phase of the influenza a (h1n1) pandemic in the netherlands: results of three consecutive online surveys
10. social contagion of risk perceptions in environmental management networks
11. a social network contagion theory of risk perception
12. endemic disease, awareness, and local behavioral response
13. epidemic spreading and risk perception in multiplex networks: a self-organized percolation method
14. social influence and the collective dynamics of opinion formation
15. entropy of dynamical social networks
16. measuring large-scale social networks with high resolution
17. viral reassortment as an information exchange between viral segments
18. assessing risk perception and behavioral responses to influenza epidemics: linking information theory to probabilistic risk modeling
19. network information analysis reveals risk perception transmission in a behaviour-influenza dynamics system
20. department of statistics of ministry of the interior in taiwan. statistical yearbook of interior
21. taiwan national infectious disease statistics system
22. infectious diseases of humans: dynamics and control
23. elements of information theory
24. sars related preventive and risk behaviours practised by hong kong-mainland china cross border travellers during the outbreak of the sars epidemic in hong kong
25. psychosocial factors influencing the practice of preventive behaviors against the severe acute respiratory syndrome among older chinese in hong kong
26. incorporating individual health-protective decisions into disease transmission models: a mathematical framework
27. predictors of cardiac rehabilitation initiation
28. perceptions about hiv and condoms and consistent condom use among male clients of commercial sex workers in the philippines
29. perceptions about preventing hepatocellular carcinoma among patients with chronic hepatitis in taiwan
30. meta-analysis of the relationship between risk perception and health behavior: the example of vaccination
31. knowledge, attitudes, beliefs and behaviours of older adults about pneumococcal immunization, a public health agency of canada/canadian institutes of health research influenza research network (pcirn) investigation
32. risk perception and information-seeking behaviour during the 2009/10 influenza a (h1n1)pdm09 pandemic in germany
33. monitoring of risk perceptions and correlates of precautionary behaviour related to human avian influenza during 2006-2007 in the netherlands: results of seven consecutive surveys
34. factors affecting intention to receive and self-reported receipt of 2009 pandemic (h1n1) vaccine in hong kong: a longitudinal study
35. vaccine perception among acceptors and non-acceptors in sokoto state
36. public views of the uk media and government reaction to the 2009 swine flu pandemic
37. a cross-sectional study of pandemic influenza health literacy and the effect of a public health campaign
38. comparison of live, attenuated h1n1 and h3n2 cold-adapted and avian-human influenza a reassortant viruses and inactivated virus vaccine in adults
39. use of the selective oral neuraminidase inhibitor oseltamivir to prevent influenza
40. selection of influenza virus mutants in experimentally infected volunteers treated with oseltamivir
41. efficacy and tolerability of the oral neuraminidase inhibitor peramivir in experimental human influenza: randomized, controlled trials for prophylaxis and treatment
42. double-blind evaluation of oral ribavirin (virazole) in experimental influenza a virus infection in volunteers
43. dose response of a/alaska/6/77 (h3n2) cold-adapted reassortant vaccine virus in adult volunteers: role of local antibody in resistance to infection with vaccine virus
44. efficacy and safety of low dosage amantadine hydrochloride as prophylaxis for influenza a
45. cold recombinant influenza b/texas/1/84 vaccine virus (crb 87): attenuation, immunogenicity, and efficacy against homotypic challenge
46. evaluation of the infectivity, immunogenicity, and efficacy of live cold-adapted influenza b/ann arbor/1/86 reassortant virus vaccine in adult volunteers
47. effects of the neuraminidase inhibitor zanamivir on otologic manifestations of experimental human influenza
48. oral oseltamivir in human experimental influenza b infection
the authors acknowledge the financial support of the ministry of science and technology, republic of china, under grant most 104-2221-e-002-030-my3. all authors contributed toward data analysis, drafting, and critically revising the paper and agree to be accountable for all aspects of the work. the authors report no conflicts of interest in this work. infection and drug resistance is an international, peer-reviewed open-access journal that focuses on the optimal treatment of infection (bacterial, fungal and viral) and the development and institution of preventive strategies to minimize the development and spread of resistance.
key: cord-022130-jckfzaf0 authors: walsh, patrick f. title: intelligence and stakeholders date: 2018-09-19 journal: intelligence, biosecurity and bioterrorism doi: 10.1057/978-1-137-51700-5_7 sha: doc_id: 22130 cord_uid: jckfzaf0 this chapter underscores the need for more explicit and strategic engagement of stakeholders (scientists, clinicians, first responders, amongst others) by the intelligence community. the chapter argues that the intelligence community will increasingly rely on their expertise to build more valid and reliable assessments of emerging bio-threats and risks. however, the discussion also identifies some of the limitations and challenges stakeholders themselves face in understanding complex threats and risks. professionals such as scientists, clinicians, first responders, agricultural scientists and veterinarians can all be critical stakeholders for intelligence communities. without them it would be almost impossible to see how the ic alone can fulfil its mission to identify, prevent, disrupt and treat potential and emerging bio-threats and risks. indeed, as seen in chapter 4, 'the scientific community' brings a lot of expertise to the intelligence community about how to assess bio-threats and risks in a number of different ways and contexts. these include understanding potential risks through gof experiments, the development of biosensors and knowledge about the weaponisation, pathogenicity and transmissibility of various bio-agents. chapter 4 also surveyed briefly the role of scientists working in epidemiology and forensics as playing central roles in the prevention, disruption and treatment of bio-threats and risks.
additionally, chapter 5 highlighted the critical role the scientific community plays in helping the intelligence community better frame its understanding of potential threats and risks emerging from the fast-paced, changing biotechnology and synthetic biology sectors. this chapter provides a thematic analysis of how important stakeholders can contribute to reducing current and emerging bio-threats and risks. in contrast to chapter 6, which focused on what the intelligence community can do internally to better equip itself to manage bio-threats and risks, this chapter surveys what important external stakeholders can bring to the table to improve intelligence capability and to reduce bio-threats and risks themselves. paraphrasing research impact scholar mark reed's definition, i define a stakeholder of the intelligence community as any person, organisation or group that is affected by or can affect a decision, action or issue relevant to preventing, disrupting or treating bio-threats and risks (reed 2016: 41). specifically, i am referring to stakeholders in the scientific, research, clinical, policy, first responder and private sectors that can provide capability and expertise to the intelligence community and/or contribute to biosecurity through their own actions. in particular, the thematic analysis of the role of stakeholders in this chapter is organised around three sub-headings: prevention, disruption and treatment. traversing the literature and interviews with a select number of stakeholders shows that there is a large and diverse number of individuals and organisations that could potentially play a role in either preventing, disrupting or treating future bio-threats and risks. in the biological context, surveillance is the ongoing collection, analysis, and interpretation of data to help monitor for pathogens in plants, animals, and humans; food; and the environment.
the general aim of surveillance is to help develop policy, guide mission priorities, and provide assurance of the prevention and control of disease. in recent years, as concerns about the consequences of a catastrophic biological attack or emerging infectious diseases grew, the term bio surveillance became more common in relation to an array of threats to our national security. bio surveillance is concerned with two things: (1) reducing, as much as possible, the time it takes to recognize and characterize biological events with potentially catastrophic consequences and (2) providing situational awareness, that is, information that signals an event might be occurring, information about what those signals mean, and information about how events will likely unfold in the near future (gao 2011: 9). this definition highlights how the functions and roles of bio-surveillance have changed from the narrower concern of mapping disease in the public health sector to representing a diverse array of knowledge and capabilities that are vital to understanding bio-threats in the national security context. the definition also underscores the multiple ongoing challenges in improving bio-surveillance capabilities and their utility in the national security context. three key challenges in particular remain for improving national bio-surveillance capabilities: methodological, information sharing and integration issues. the information sharing and integration issues have already been discussed in chapter 6, so this section will focus on the bio-surveillance methodology issues. by methodological issues, i am referring to both the technical methods (biosensors) and the broader disciplinary approaches to bio-surveillance that now inform debates amongst stakeholders on how to improve bio-surveillance capabilities.
from a technical perspective, there has been a range of biosensor research from inside and outside the ic to detect the release of dangerous pathogens into the environment. perhaps the most well-known of these initiatives, biowatch, was developed by dhs in 2003 with the aim of detecting aerosolised bio attacks with high-risk bio-agents in major us cities. the program, however, has had mixed success relating to the reliability of results and the delay in the publication of these once samples were collected from the field (gao 2016, 2017). the dhs tried to speed up detection times from the first-generation manual systems to gen 3 acquisitions, which promised speedier autonomous systems, though testing difficulties remained. further analysis of alternatives by the dhs, however, showed that any advantages of an autonomous system over the current manual system were insufficient to justify the cost of a full technology switch (gao 2016: 7). in the us, research continues to improve the robustness, sensitivity, specificity, timeliness and cost of biosensor equipment. while conventional pcr-based methods and immunoassays are still being used, other biochemical, microbiological and genetic solutions are being trialled, such as the incorporation of antibodies and peptide molecules, which may greatly reduce detection times to minutes instead of several hours (kim et al. 2015). leaving aside efforts to improve aerosolised biosensors, the expected rapid growth of synthetic biology and biotechnology, and the potential (however unknown) that bioengineered material may be used maliciously in a way that threatens public safety or national security, may shift the focus to other scientific research that can detect signals of bio-engineering, including the types of changes made, their location and, possibly in the future, where changes were made. in july 2017, iarpa commissioned a new program, finding engineering-linked indicators (felix), to meet such objectives.
iarpa is seeking interest from a range of scientists (synthetic biologists, microbiologists, immunologists, statisticians and computer scientists) to carry out 3-5 research projects addressing the two main focus points of felix (eaves 2017). if this research can produce reliable results, it will provide another useful collection and analysis point for the ic by allowing the detection of previously undetectable signatures of bio-engineered material in bio-criminal and bio-terrorism cases. in addition to the various technical innovations in biosensors, a range of other bio-surveillance methods have been deployed. in the late 1990s, the us cdc pioneered syndromic surveillance systems, which were initially aimed at improving the early warning of infectious diseases and bio-terrorism and have now evolved to include situational awareness (buehler et al. 2004). similar syndromic surveillance systems have developed in other 'five eyes' countries, such as the uk's real-time syndromic surveillance team (resst), which coordinates four national syndromic surveillance systems drawing on several sources. additionally and more recently, the robert koch institute is creating an early warning system based on machine learning and natural language processing that will include 'appealing' interactive web applications and be linked to the german electronic reporting and information system demis (robert koch institute 2018). syndromic surveillance systems are a critical adjunct to traditional public health lab surveillance as they strive to provide real-time or near real-time collection, analysis and dissemination of health data to enable early identification and management of public health threats. they are not based on lab-confirmed diagnoses and assess a wider set of health-related data, including clinical signs, absenteeism, pharmacy sales or animal health production collapse (buehler et al. 2004).
a clear benefit of syndromic surveillance is that it can be cheaper, faster and potentially more transparent than a state's public health lab surveillance system. however, as with the use of big volumes of data more broadly in the ic, data quantity, quality and structural variation all impact the utility, accuracy and timeliness of some rapid epidemic intelligence from internet-based surveillance methods (yan et al. 2017). increasingly, these syndromic surveillance systems rely on the use of big data, machine learning and analytics. additionally, web-based epidemic detection systems like the biocaster portal developed by the national institute of informatics in tokyo (collier 2015) and canada's global public health intelligence network (gphin), an event-based surveillance system that scans news feeds globally (mawudeku et al. 2015), have also contributed to syndromic surveillance. several event-based internet surveillance systems have emerged in the last decade. using the pubmed, scopus and google scholar databases, o'shea's study found 50 internet-based systems, all using different technology and data sources to gather, process and disseminate data to detect infectious disease outbreaks (o'shea 2017). in line with the broader ic development of exploiting social media analytics discussed in chapter 4, in 2013 dhs piloted another approach to bio-surveillance. the pilot involved dhs trialling various social media analytics from self-reported information on facebook and twitter to detect pandemics and acts of terrorism, given social media feeds can provide close to real-time reporting of symptoms, sickness and access to hospitals or pharmaceuticals (insinna 2013). additionally, other private companies have entered the bio-surveillance space, providing novel methods for capturing bio-surveillance data.
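the statistical core of many syndromic surveillance systems is an aberration-detection rule over daily counts of some health indicator (visits, pharmacy sales, absenteeism). the sketch below is a toy version of a c2/ears-style rule; the baseline window and alert threshold are illustrative, not an operational configuration:

```python
def detect_aberrations(daily_counts, window=7, threshold=3.0):
    """toy syndromic-surveillance detector: flag any day whose count exceeds
    the mean of the preceding `window` days by more than `threshold`
    standard deviations (a simplified c2/ears-style rule)."""
    flagged = []
    for t in range(window, len(daily_counts)):
        baseline = daily_counts[t - window:t]
        mean = sum(baseline) / window
        var = sum((x - mean) ** 2 for x in baseline) / window
        sd = max(var ** 0.5, 1.0)  # floor avoids blow-ups on flat baselines
        if daily_counts[t] > mean + threshold * sd:
            flagged.append(t)
    return flagged

counts = [10, 12, 9, 11, 10, 12, 11, 10, 11, 30, 12, 11]
alerts = detect_aberrations(counts)   # flags the spike on day 9
```

the trade-off discussed in the text shows up directly here: noisier data widens the baseline standard deviation and so reduces sensitivity, which is why data quality and structural variation matter so much for internet-sourced feeds.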
wilson's discussion of how a private company (veratect corporation) assessed signal recognition in global media reports to provide warning on the emergence of the 2009 h1n1 influenza pandemic shows how the ic warning culture methodology can be employed usefully alongside what he described as the 'risk adverse forensically oriented response culture favoured by traditional public health practitioners' (wilson 2017: 1). the veratect case shows that the private sector has a role in developing better bio-surveillance capability as well. as can be seen from the brief discussion above about different methodological approaches to bio-surveillance, there are also different views amongst bio-surveillance scholars and practitioners about the merits of each, particularly in their abilities to predict the 'next pandemic'. can, for example, a national bio-surveillance system informed by one or more of the methods discussed above predict the emergence of the next pandemic or outbreak, particularly of novel viruses? some scientists argue that predicting a micro-evolutionary process in some biological agents such as viruses (i.e. a short-term emergence or cross-species transition) is incredibly difficult, given that evolutionary and epidemiological timescales are fundamentally different. geoghegan and holmes argue that instead it would be better to build surveillance capability that 'assesses the fault line of disease emergence at the human-animal interface, particularly those shaped by ecological disturbances' (2017: 7). others have argued differently. scientists working on the usaid-funded predict and the global virome project examine disease hotspots globally in order to sequence (rather ambitiously) almost all the viruses in birds and mammals that could potentially spill over into humans. in particular, researchers working on the global virome project believe that predicting which viruses might spill over from animals to humans is possible.
geoghegan and holmes respond that focusing on disease hotspots relies on very small amounts of data that can be unreliable, given that spillovers are rare events. they give the example of saudi arabia, which has not classically been a hotspot, yet mers recently jumped into humans from camels there. sequencing these viruses may provide useful evolutionary information, but geoghegan and holmes argue it won't necessarily provide early warning of what is going to affect us (geoghegan and holmes 2017). other scientists are trying to change the ecology of disease, which presumably in some cases would make the early warning of some pandemics easier. in recent years, the scientific community has increasingly exploited crispr gene editing techniques to change the genetic makeup of malaria mosquitoes. additionally, advances in gene drives have recently been shown to change the ecological parameters of disease. gene drives are artificial 'selfish' genes that can force themselves into 99% of an organism's offspring instead of the usual 50%. currently there is a global research effort funded by the gates foundation to cause female mosquitoes to become sterile within 11 generations, or 1 year. the objective would be to release the genetically altered mosquitoes into malarial areas by 2029 (regalado 2016). the fbi, however, has concerns that gene drives could be misused to create a 'designer plague' (ibid.). in addition to the 'predictability' challenges presented by various bio-surveillance methods, there are also differences of opinion amongst members of the bio-surveillance community about what an effective bio-surveillance system looks like. on what metrics can an 'effective' bio-surveillance system be evaluated, given the multiple methodological approaches and systems that have developed for bio-surveillance?
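the 99%-versus-50% transmission figure above is enough to see why gene drives spread so quickly. the following is a toy deterministic model under random mating: heterozygotes transmit the drive allele to a fraction d of gametes (0.99 for a strong drive, against the mendelian 0.5). fitness costs, resistance alleles and population structure are all ignored, so this is only an illustration of the arithmetic, not a prediction about any real release program.

```python
# Toy deterministic model of gene-drive allele spread under random mating.
# Under Hardy-Weinberg genotype frequencies, drive homozygotes (p^2) contribute
# only drive gametes, while heterozygotes (2p(1-p)) contribute a fraction d of
# drive gametes, giving the recursion p' = p^2 + 2p(1-p)d.
# Fitness costs and resistance are deliberately ignored.

def drive_frequency(p0, d=0.99, generations=11):
    """Allele frequency of the drive after each generation, starting at p0."""
    p, history = p0, [p0]
    for _ in range(generations):
        p = p * p + 2 * p * (1 - p) * d
        history.append(round(p, 4))
    return history

# releasing drive carriers at just 1% of the population
print(drive_frequency(0.01))
```

with d = 0.99 the drive climbs from 1% to near fixation within roughly ten generations, whereas an ordinary allele at d = 0.5 would stay at 1% indefinitely; that contrast is the whole point of the technology, and also of the fbi's 'designer plague' concern.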
clinician and public health security specialist jim wilson has argued that the development of an effective global surveillance and response system is probably at least a decade or more away (wilson 2017: 222). in the interim, we are left with multiple approaches of varying validity and reliability. so, based on the current fragmented bio-surveillance efforts, how do we learn the lessons that will enable the implementation of the long-awaited national bio-surveillance capabilities? how do we know if progress is being made towards that goal? importantly, beyond national efforts, how do we assess the current capability of state and local agencies to contribute to a national bio-surveillance capability? where are the gaps and vulnerabilities in the current sub-national bio-surveillance and detection systems? (gao 2011). compounding the current challenge of evaluating bio-surveillance capabilities in order to construct a viable national approach is that different bio-surveillance systems have been created for different end users (e.g. animal and human). the blue ribbon project report into animal health detailed information-sharing challenges in animal health bio-surveillance and its integration with other bio-surveillance data, including in human health (blue ribbon report 2017: 25). this lack of integration makes it difficult to assess how information collected for animal or agricultural bio-surveillance could improve national approaches to bio-surveillance, particularly in scenarios where the emergence of disease could be an intentional or malevolent act. different approaches to bio-surveillance have been informed by multi-disciplinary perspectives, which can be both a strength and a weakness in developing a national perspective. current efforts across the 'five eyes' to develop fully national and integrated bio-surveillance capabilities remain works in progress, and the political will to steward them into being seems insufficient.
for example, in the us a program designed to provide a national bio-surveillance and integration system was eliminated in the president's budget request for fy 2018 (blue ribbon report 2017: 41). any evaluation of the effectiveness of various methods and approaches for building a national bio-surveillance capability also needs to consider how national efforts can both enhance and lever off global bio-surveillance capabilities. gaps and impediments in global biosurveillance became increasingly evident to the world in the wake of the largest ebola epidemic ever, in which these challenges impacted the ability to prevent, detect and respond. under the looming threat of mers-cov, leishmaniasis, influenza, multidrug-resistant tuberculosis and plague, the global public health community now realizes the urgent need to address shortcomings in global bio-surveillance and the broader public health security system. properly preparing for the next major outbreak hinges on our willingness to transform global health surveillance systems and those of countries with fragile health infrastructures (shaikh et al. 2015: 183-186). in some respects, the challenges in developing national bio-surveillance capabilities are mirrored at the global level, including: siloed systems, inadequate training and technical expertise, different information and communication technology (ict) standards, concerns over data sharing and confidentiality, poor interoperability, and inadequate analytical approaches and tools. there is likely no single bio-surveillance method, technique or tool that is going to detect disease outbreaks in real time, particularly unusual ones which might imply malicious intent. a fully integrated approach to bio-surveillance may rely on more than one method or capability, which together can provide reliable and valid bio-surveillance data and early warning at the national and global level.
it may mean investigating ways that older legacy systems can be integrated, or at least made interoperable, with newer mobile platforms such as mobile or wireless health technologies, particularly in the developing world (shaikh et al. 2015). it should be clear by now that improving bio-surveillance capabilities is essential to improving the prevention of natural and suspicious outbreaks of disease. it is important for the 'five eyes' intelligence and law enforcement communities to understand broadly the theoretical and practical developments in bio-surveillance so that they are able to more effectively lever relevant knowledge on bio-threats and risks. a second cluster of stakeholders useful in the prevention of bio-threats and risks (both natural and malicious) are those working in national, regional and global health. the ebola epidemic (2014-2015) was a recent reminder of the consequences of weak public health capability and infrastructure in failing to prevent, identify and respond quickly to infectious disease. the ebola epidemic also had a catalytic effect on many public health authorities', practitioners' and researchers' views about the capability of the traditional un response to global health crises, mainly coordinated through the who. many public health watchers are now arguing the need for a broader, more effective focus: not just on prevention of and response to infectious disease, but one that also includes reframing the focus as a human security issue. adherents to this view make a compelling point when seen through the ebola case, which continues to have significant impact on the economic and social stability of the countries affected (sparrow 2016; marston et al. 2017; who 2015; mmwr 2016).
beyond west africa, similar vulnerabilities in capabilities such as disease surveillance, detection, contact tracing, clinical care, community engagement and communications exist globally, as was also seen with the proliferation of zika in latin america and the caribbean and mers in the middle east. in 2016, the commission on a global health risk framework for the future, which met after the ebola crisis, estimated that a $4.5 billion per year investment would be needed for better detection and response tools. the same commission report also estimated that the economic cost of global pandemics was $60 billion per year (schnirring 2016; dzau and sands 2016). effective national bio-surveillance relies not only on what 'five eyes' countries can do to improve the scientific and technical capability of bio-surveillance, but also on how they can improve bio-surveillance globally, particularly in at-risk areas. beyond effective bio-surveillance, effective prevention of pandemics, whether natural, accidental or malicious, relies on good global (multilateral), regional and national public health responses. there are several multilateral instruments, institutions and initiatives that are relevant, but i will focus here on what have become the key ones rather than attempting to traverse in detail all major international health initiatives struck since 9/11. they include the who international health regulations (ihr), un security council resolution 1540, the global health security agenda (ghsa), the biological weapons convention (bwc) and the australia group. the who international health regulations (2005) entered into force in june 2007 to prevent, protect against, control and provide a public health response to the international spread of diseases (detect, assess and notify events, with a biosafety and biosecurity function) and include all 192 members of the un.
the ihr 2005 has improved the accountability of countries about progress towards building national core public health capability targets in several areas including, but not limited to: surveillance systems, creating rapid response teams and border management. however, the ihr annual reporting process has been by self-assessment of core capacities to the world health assembly (wha) by all state parties, which has resulted in incomplete or not credible reporting for some member states. the commission on a global health risk framework for the future also expressed concerns over the self-assessment monitoring tool of the ihr, because questions require binary (yes/no) answers, and recommended that the who devise a regular independent mechanism to evaluate country performance against benchmarks (ghrf commission 2015: 33). for example, a country can 'tick yes' for having national public health legislation, but other dependent legislation (biosecurity, food safety, environmental health) may not be in place, thereby reducing overall the country's ability to manage health crises, or the global community's ability to understand and respond to capability and information gaps in that country (ibid.). some countries continue to be slow or uneven in their reporting of ihr (2005) attributes. in 2013, one study showed that the african region was well below global averages across all attribute measures, with no african state reporting full implementation (kasolo et al. 2013: 11-13). the second multilateral instrument relevant to our discussion here is un security council resolution 1540 (2004), which calls on all 192 states to prohibit non-state actors from developing, acquiring, manufacturing, possessing, transporting, transferring or using nuclear, chemical or biological weapons and their delivery systems. more importantly, and specific to bio-threats only, the bwc has historically played the most significant role in preventing the weaponisation of biology.
the bwc was established in 1972 and seeks to prohibit the development, production, acquisition, transfer, stockpiling and use of biological and toxin weapons (gerstein 2013; chevrier and spelling 2016: 331-356). in 2001, there was an attempt by some member states to introduce a verification process, but this was vetoed by the us following inspection of soviet sites under the tripartite agreement between the soviet union, the usa and the uk. the us argued it could be difficult to certify that a state's biological program was merely defensive rather than offensive. the us also had concerns that inspections of labs could be disruptive or provide opportunities for industrial espionage against legitimately operating biotechnology companies (gerstein 2013: 137). historically there has been a mixed record by some 'five eyes' intelligence countries in assessing verification, and therefore noncompliance, of the bwc. koblentz surveyed the role of intelligence (particularly humint) in assessing the former soviet union's offensive bio-weapons program between 1971 and 1990, which resulted in an incomplete picture of moscow's program (koblentz 2009: 157). additionally, as discussed in chapter 2, in 2002 several 'five eyes' intelligence communities (us, uk and australia) incorrectly assessed that iraq had a mobile offensive bio-weapon capability. intelligence collection on its own can either over- or under-estimate such capabilities. between the five-yearly review conferences, several initiatives and activities have been introduced (confidence building measures, meetings of experts, information exchanges) to improve the effectiveness and the implementation of the convention. however, state parties are only encouraged to implement relevant national legislation and other measures to prohibit and prevent the development, production, stockpiling, transfer or use of bio-weapons. how they precisely undertake such measures is at the discretion of individual state parties.
the bwc has been criticised for several reasons over the years. some of this is warranted, while other criticisms seem not to take into account that the bwc is different from its chemical and nuclear counter-proliferation counterparts. as gerstein argues, 'material is the centre of gravity for nuclear discussions and intent being the center of gravity for biological issues' (gerstein 2013: 176). developing nuclear weapons leaves a large recognizable footprint, whereas the development of an offensive biological weapon requires virtually no specialised equipment (ibid.). the first major criticism of the bwc is that it has no verification mechanism or any other mandatory provisions for monitoring compliance. a second complaint is that for many years (until 2006) it lacked an implementation capability to help states fulfil their obligations. since 2006, the convention has had a small three-person implementation support unit (isu), based in the united nations office for disarmament affairs in geneva, which aims to 'assist, coordinate, and magnify the implementation efforts of the states parties to help states parties help themselves' (lennane 2011: 85). in reality though, the isu does not have 'capacity for analysis and coordination other than for the collection of the annually submitted confidence building measures, posting them to the website and organising and attending conferences' (gerstein 2013: 173). historically there has also been a low number of state parties submitting their annual confidence building measures. although the bwc isu was able to report that a record number (81) of annual confidence building measures were submitted in 2016, this represented only 45.5% of all 179 state parties submitting that year, though the trend line seems to be going up from a low of 19 in 2014 (bwc newsletter 2017: 3).
a third criticism of the bwc is that it has moved slowly since inception, and further questions remain about its strategic and operational relevance in preventing bio-threats and risks into the future. such questions are likely fundamental to its long-term viability. however, despite these shortcomings, the bwc has nonetheless created a normative institution for reducing the risk of biological or toxin weapons being used or developed by state and non-state actors (lennane 2011: 85). more importantly, as developments in biotechnology continue apace, the bwc does provide a venue where the security implications of dual-use technology can be assessed, which will be critical in 'mitigating these emerging threats' (gerstein 2013: 175). the bwc still has an important role in reducing the weaponisation of biology in the future, though its poor funding, particularly of the isu, means that other multilateral measures are needed to amplify the work of the convention. in addition to the historic/traditional proliferation arrangements of the bwc, other international regimes have been implemented, such as the australia group (established in 1985) and the proliferation security initiative (established in 2003). both have broader counter-proliferation objectives beyond biological weapons, extending to chemical and nuclear. the australia group's 41 member countries have collaborated on the development of lists of technologies and materials that could be used in the development of chemical and biological weapons. member countries then commit to monitoring the export or transfer of these materials. the australia group maintains common control lists for dual-use bio-equipment, technology, software, bio-agents and plant and animal pathogens as the basis for promoting common standards and regulations (australia group common control list handbook 2015). the australia group works in concert with the bwc.
the psi was a bush administration initiative that sought to supplement existing non-proliferation regimes by interdicting and seizing illegal weapons or missile technology in planes or ships carrying cargo. the psi also includes intelligence sharing and joint operational activity (national institute for public policy 2009). turning the focus slightly away from multilateral counter-proliferation measures, other multilateral initiatives have focused on improving global health security. in some respects the ghsa provides a bridge between traditional, narrow security approaches to biological weapons and a wider securitisation of global health. the ghsa was established in 2014 by the obama administration and is a multi-sectoral approach to global health security seeking to include governments, international organisations and non-government organisations. ghsa was set up in part to 'advance further the ihr implementation through focused activities to strengthen core capacities and to ensure a world safe and secure from global health threats posed by infectious disease; where we can prevent or mitigate the impact of naturally occurring outbreaks and intentional or accidental releases of dangerous pathogens' (heymann et al. 2015: 1889). ghsa is a refreshing approach not only because it seeks to establish a global framework and capacity to assess, measure and sustain advances in global preparedness for epidemic threats, but also because it addresses biosecurity as a public health priority, thereby linking the public health, health security, development, defense and agricultural sectors (cameron 2017). the underlying logic of ghsa suggests that the same attributes needed to prevent, detect and respond to the deliberate use of a bio-agent are those required to manage a natural or accidental outbreak of a biological agent. ghsa also includes 12 technical targets aligned to three areas: prevention, detection and response (heymann et al.
2015: 1889). like earlier initiatives, such as the us-sponsored global health initiative (ghi), which was discontinued by the obama administration in 2012 due to a lack of financial and technical authority to leverage and coordinate multiple us agencies, the ghsa will need to secure ongoing funding beyond 2019 from major donors, including the us. at a november 2017 ghsa ministerial meeting in uganda, the assembled governments signed onto an extension of the ghsa for another five years. us secretary tillerson had issued public support for continuing it, but at the time of writing no commitment by the us for future financial support (beyond fy 2019) has been made. ghsa holds promise, but in addition to ongoing funding challenges, the member states signed up to it will need to ensure effective governance is in place to align funding to the global health priorities articulated by the who, world bank, imf and other donors in order to avoid duplication and promote an effective approach to international health security capabilities (paranjape and franz 2015). in summary, this discussion of multilateral security and global health initiatives demonstrates that there is a diverse set of stakeholders working in these sectors who can play a role in preventing bio-threats and risks, whether they are natural pandemics or a malicious attack from a biological weapon. it's clear that the 'five eyes' intelligence communities have worked extensively with other member states in counter-proliferation institutions such as the bwc and the australia group for several decades, but what remains underdeveloped is how global health security stakeholders and intelligence communities can work more collaboratively towards the mutual goal of global health security, regardless of whether the risks are natural pandemics or result from a bio-terror attack or the theft of a dangerous select agent from a lab.
more trusting and formalised contact between global health security stakeholders and those working in the security and intelligence communities can only be mutually beneficial to preventing major bio-threats and risks. the final cluster of stakeholders that can help prevent bio-threats and risks are of course those that specialise in biosafety and its promotion in their research institutes, biotechnology companies, universities and medical facilities. promoting biosafety in environments that work with select agents, and in other facilities that work with less dangerous material which can still cause harm, relies on consistently high-quality risk management practices. in all 'five eyes' countries there have historically been biosafety risk management procedures and practices in place to prevent accidental infection, accidental release, or intentional misuse of biological substances. however, as noted in chapter 2, in the last two decades the expansion in synthetic biology, biotechnology and biological science research has meant there are now more people working in more locations on dangerous pathogens: not just in well-regulated liberal democracies such as the 'five eyes' countries, but also in developing countries, where biosafety and biosecurity capabilities and practice may be less established, such as parts of africa, the middle east, pakistan and former soviet states (gronvall et al. 2016; shinwari et al. 2014). just in terms of the scale of this expansion of facilities working with dangerous pathogens: in the us alone, there are thought to be thousands of bsl 3 labs, and in china the number of such labs is increasing too (nature editorial 2014: 443). the us and other 'five eyes' countries such as canada have invested in cooperative engagement programs since 9/11 in several former soviet union states.
the us defense threat reduction agency (dtra) has led efforts in georgia to reduce bio-risk by securing and consolidating pathogens and training scientists in biosafety and biosecurity technology, regulation and detection. likewise, the cdc has been involved in building public health capacity there as well as in armenia and azerbaijan (bakanidze et al. 2010: 7). as important as building biosafety capacity is in developing countries, it is clear that much more still needs to be done to build biosafety capacity in 'five eyes' countries, including finding better ways to comprehensively understand and manage threats and risks in the biosciences environment. biosafety experts such as salerno and gaudioso argue for more comprehensive risk management systems across the global bioscience community 'to avoid an accident that jeopardizes the entire bioscience enterprise' (salerno and gaudioso 2015: xv). their argument is that such a system would supplement existing national and international biosafety regulations by fully risk-managing every single potential incident at an organisational and unit level, rather than by the generic risk hazard assessments that are currently done by most facilities today (ibid.: 201). others have also called for more systematic tools and approaches for managing biosafety incidents in labs dealing with particularly dangerous pathogens such as marburg virus (dickmann et al. 2015). still others have argued that while 'security awareness is high among employees who work with biological select agents and toxins, it is not pervasive across the entire life research community' (gryphon scientific 2016: 1014). such a statement does not seem to be hyperbole if one looks at some of the cases of biosafety and security lapses since 9/11 (gao 2009, 2013). there have been several lapses at cdc between 2014 and 2016.
in june 2014, dozens of workers at cdc could have been potentially exposed to live anthrax that hadn't been killed before being shipped from cdc's bioterrorism rapid response and advanced technology (brrat) bsl 3 lab to a bsl 2 lab in its bacterial special pathogens branch. cdc investigations determined that at least 67 cdc staff members may have been exposed to viable anthrax cells or spores, though no illnesses or deaths occurred (cdc 2014). the same report found several breaches of biosafety process and procedure, including failures of policy, training, supervision, judgement and even scientific knowledge (ibid.). similar biosafety lapses involving cdc labs occurred in january 2014, when a strain of low pathogenic avian influenza a (h9n2) unintentionally cross-contaminated with a strain of highly pathogenic avian influenza a (h5n1) was shipped from cdc to the usda (schnirring 2014). further biosafety breaches were detected in july 2014, this time at the national institutes of health campus in bethesda, maryland, where 6 viable smallpox vials were discovered improperly stored (dennis and sun 2014a). an additional five improperly stored vials were also found at the nih; three contained select agents (burkholderia pseudomallei, francisella tularensis and yersinia pestis) (dennis and sun 2014b). in the nih cases, despite their age, they were still viable organisms which could have caused illness. their theft could also have posed a bio-threat and risk to the community. then, after a hiatus during which the sending of biological material between bsl 3 and bsl 2 labs was suspended, live transfers commenced again. after a further internal cdc review (cdc 2015a, b), some additional safety measures were put into place; however, there was a subsequent lapse when a specimen of chikungunya virus which had not been killed was shipped from a high-security lab in fort collins to a lower-level one (young 2015).
similarly, in 2015 the pentagon shipped live anthrax spores, which were also meant to have been killed, from the dugway proving ground in utah to 9 states and one international location (burns 2015). it was later found that dugway and the us dod had been shipping live anthrax nationally and internationally for more than 10 years, often without adequate safeguards. other reports suggested that some samples were sent by federal express (sisk 2016). similarly, in november 2016, the us hhs discovered that a private lab had 'inadvertently sent a toxic form of ricin to one of its training centres multiple times since 2011 putting training staff at risk' (gao 2017: 1). similar biosafety lapses have occurred in the uk, resulting in 75 investigations since 2010 of government, university and hospital labs (sample 2014). as noted in chapter 2, one possible bio-threat and risk pathway could be the theft of biological substances or information from a biosciences institution. lapses in biosafety arrangements demonstrate, at least in some cases, biosecurity vulnerabilities that could make the theft of material, or even the infiltration of a threat actor into a high containment lab, easier. thefts from labs by insiders have occurred in the past, and a motivated insider can compromise biosafety for a range of reasons. bunn and sagan's edited book insider threats provides a useful taxonomy for thinking about 'insider threats' (bunn and sagan 2016). insiders can be self-motivated: people who at some point decide to become spies or thieves. insiders can also be recruited: people already inside an organisation who become convinced to take part in a plot. finally, an infiltrated insider might be associated with some adversary of the organisation and join it with the purpose of carrying out a malicious act against it. bunn and sagan also refer to inadvertent or non-malicious actors, who pose a threat by making mistakes without really intending to do so, such as leaving a password lying around.
finally, the authors refer to a 'coerced insider', who remains loyal in intent, but knowingly assists in theft or sabotage to prevent hostile acts against themselves or their loved ones (ibid.: 4). the insider threat posed by bruce ivins' activities in a high containment lab (which resulted in amerithrax in 2001) demonstrates the potentially high threats and risks associated with an insider. the ivins case provides a useful case study in how an organisation's security procedures and other organisational and cognitive biases can miss, for several years, the risks posed by an insider threat actor (stern and schouten 2016: 74-102). since the amerithrax incident, significant investment has been made to close the biosafety vulnerabilities revealed by it. increasingly since 9/11 and amerithrax, a number of policies, procedures and normative behaviours have developed in the scientific community to promote biosafety and biosecurity. these have ranged from safety regulation codes such as the us biosafety in microbiological and biomedical laboratories (bmbl) to more formal legislative and oversight regulations. the latter will be addressed in chapter 8. there are also technical and policy improvements that can be made in securing both physical and remote access to labs, including to the computer systems that house data, which are at risk of theft or of being hacked (gryphon scientific 2016: 1014; berger 2013: 113-127; slayton et al. 2013: 51-70). leaving aside discussion of some of the formal legislative and regulatory instruments for promoting biosafety, the development and maintenance of effective risk management across the biosciences also relies on an organisational culture that treats biosafety and biosecurity as an equal priority to other deliverables. a culture of accountability at all levels must also exist if effective risk management is to prevent, identify and treat bio-threats and risks promptly.
A rogue insider, who may have been assessed as suitable to work with select agents and seems initially to follow all the relevant biosafety regulations and procedures, could still pose a risk if they have not embraced the organisation's normative cultural biosafety values. To close off opportunities for insider threats, it is critical that the organisation promote relevant biosafety cultural values as much as, and perhaps more than, adherence to formal biosafety regulations. Risk management measures must of course be balanced against the ability of scientists to carry out their functions. Effective engagement with local law enforcement and relevant domestic security intelligence organisations in each 'Five Eyes' country to help scientists build viable biosafety cultures will likely remain important, in addition to internal organisational biosafety initiatives. Stern and Schouten provide a number of useful suggestions for improving policies and procedures that may help improve biosafety cultures across the biosciences enterprise (2016: 101-102). Two that I think would be helpful are, first, developing standard operating procedures for proactively identifying vulnerabilities, including using 'red team' exercises to explore how systems could be exploited. In other words, what motivators (financial, psychological, religious and political) might drive an insider threat, and are there ways to assess the signs of such an evolving threat? The second is to 'ensure personnel reliability programs incorporate ongoing assessments of counterintelligence vulnerabilities, including vulnerabilities to self-ascribed whistle-blowers or attention seekers' (ibid.: 101). Effective biosafety and biosecurity training is also crucial as the number of labs working with select agents or other dual use bio-agents proliferates globally, particularly in fragile states.
More consistent approaches to training will also be important so nations can be confident that as many scientists as possible, regardless of the country or context in which they work, understand what bio-risks and threats may emerge and how to prevent or mitigate them (Sture et al. 2012). As discussed above, there are multiple stakeholders in the scientific community and the global health security and biosafety fields who can themselves play a critical role in preventing bio-threats and risks, as well as supporting the operational efforts of the intelligence community to do so. While prevention of bio-threats and risks is one critical role stakeholders can play, another is disruption. Although the intelligence community can draw on a range of knowledge, technologies and methodologies from stakeholders in the scientific community to prevent bio-threats and risks, we have to accept that it will not be possible to detect every criminal or terrorist act. Nonetheless, some of the techniques, practices, technologies and knowledge available from stakeholders in the scientific community will still be useful for disrupting bio-threats and risks. In other words, prevention may not always be possible, yet measures can be put into place that detect threats early enough to reduce their impact. As with preventing bio-threats and risks, disrupting them will also rely on seeking advice from stakeholders involved in bio-surveillance, public health and biosafety research, amongst others. For example, as discussed earlier, IARPA's commissioning of research into detecting signals of bioengineering changes (FELIX) may give the intelligence community better capability not only for preventing bioengineering changes that would make it easier for terrorists to carry out attacks on populations, critical infrastructure or biotechnology companies; it could also help detect and disrupt the planning stages of such attacks.
Additionally, as noted earlier, if a high containment lab has a strong biosafety culture, it is more likely that disruption of a bio-threat will be possible simply through colleagues speaking up about suspicious activities in their working environment, rather than through any elaborate disruption knowledge, techniques or procedures the intelligence community might have in place. But knowledge, technologies, techniques and practice for the disruption of bio-threats and risks cannot come only from scientific stakeholders in the biosciences; they should also come from other fields and from practitioners working in areas where successful disruption operations have taken place. These areas include criminology, policing, engineering, legislation, cyber and counter-intelligence, amongst others. In this section, we briefly examine what other stakeholders and disciplinary perspectives the intelligence community might learn from to build better capabilities for the disruption of bio-threats and risks. Are there lessons to be learnt from other stakeholders, disciplines or even other threat contexts that might be relevant to disrupting bio-threats that were not initially detected? Since 9/11, three stakeholder and discipline groups have been investigating and applying disruption strategies to threats and risks, and their knowledge might be relevant to disruption in the bio context: criminology, counter-terrorism and cyber. We will explore each briefly to see how stakeholders (researchers and practitioners) have developed disruption strategies in each field, and how those strategies might be employed against bio-threats and risks. Insights from criminology, and the practical application of disruption for crime prevention, have provided a supplementary approach to traditional law enforcement prosecution of certain crimes through the courts.
Disruption is not a new concept in criminology and law enforcement practice, though it can be difficult to define in all law enforcement contexts (Ratcliffe 2008: 204). Its meaning, at least in the criminology, policing and law enforcement contexts, can partly be traced back to a broader desire, initially by UK law enforcement in the late 1990s and early 2000s and later by other 'Five Eyes' countries, to move law enforcement away from its traditional reactive mode of responding to offending towards one driven by intelligence. This concept of law enforcement or policing being intelligence driven or led gained significant traction in the criminology and policing literature (Walsh 2011; Ratcliffe 2016; Innes and Sheptycki 2004). It was driven initially in the UK by governments' desire to maximise efficiency and reduce costs by increasing the use of intelligence to drive strategic and operational decision-making. The implementation of intelligence-led policing models in operational policing across 'Five Eyes' countries has had mixed results, partly due to cultural, financial and leadership issues in agencies that have attempted to put intelligence at the centre of strategic and operational decision-making in policing (Walsh 2011; Ratcliffe 2016). Nonetheless, despite historical challenges in adopting intelligence-led approaches, increasing fiscal constraints and the ever increasing demands on law enforcement in managing both high volume crimes and complex operating environments in counter-terrorism, cyber and organised crime have meant, at least in many national law enforcement agencies, a greater demand for an intelligence driven approach (Walsh 2011). This intelligence driven approach, which promulgated proactive disruption-of-crime strategies, was in part an admission that not all crime could be prevented or all offenders prosecuted.
Additionally, in many law enforcement agencies such as the Australian Federal Police (AFP), the growing volumes of information collected have given intelligence a more central role in triaging the significance of information, adding value to it and guiding investigators to targets and operations that are high priority or have the greater likelihood of successful prosecution outcomes. In complex organised crime cases such as transnational drug trafficking, people smuggling, and even the terrorism and cyber threats we discuss shortly, intelligence driven disruption strategies have become increasingly popular among 'Five Eyes' law enforcement agencies. This has particularly been the case where it can be difficult to dismantle an organised crime group completely, or even to know the full extent of the group's network. Disruption operations that attempt to take down threat actors with key roles (e.g. facilitator, financier and logistics) may nonetheless reduce the threat posed by the organised crime network, even if the network continues to exist. Additionally, with some organised crime networks it may be difficult to secure sufficient evidence for prosecution of a more serious offence such as drug importation, but there may be sufficient intelligence to make the criminal environment more hostile to the group's illicit enterprise by arresting key group members for lesser offences such as unexplained wealth or migration irregularities. While disruption of crime does seem a useful tool for preventing or reducing the impact of offenders, the criminology literature demonstrates that it has been difficult to evaluate the effectiveness of intelligence driven disruption strategies.
Ratcliffe cited an RCMP disruption attributes tool, which attempts to examine what the disruption activity is aimed at (core business, financial, personnel) and whether the kind of disruption for one or more of these attributes is high, medium or low in impact (Ratcliffe 2008: 207). However, such tools are largely subjective and qualitative, making it difficult to accurately measure the impact of intelligence driven disruption measures. The other concern about disruption strategies is that they may just cause displacement, where other criminal enterprises take the place of those removed by law enforcement or, as Innes suggests, 'disrupting a network may just provide a vacuum for more dangerous offenders to step in' (Innes and Sheptycki 2004: 14). Finally, the literature suggests that employing effective disruption strategies relies on proactive collection and valid analysis that can lead to both timely strategic and operational outcomes, which in turn result in threat mitigation and harm minimisation. So are there benefits for the intelligence community working on bio-threats and risks in investigating research and practice for disrupting threats in the organised crime context? The answer is a qualified 'yes'. Much of course depends on the nature of the threat and risk posed. Clearly, as with any crime, it is hard to disrupt a bio-threat while it is still in the head of the offender. However, we do know that criminal and terrorist acts don't just happen spontaneously. There are usually predicate steps taken by the offender. Some of these might happen in very compressed periods, while in other offences planning may take years. Either way, and regardless of whether they can be detected by the intelligence community, there are likely to be some signs in the predicate planning stages of an impending threat or risk that can provide the intelligence community with opportunities for disruption.
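To make concrete why such tools are 'largely subjective and qualitative', the scoring logic they embody can be sketched in a few lines. The following is a hypothetical illustration in the spirit of the disruption attributes tool Ratcliffe describes, not a reconstruction of the actual RCMP instrument: the attribute names, numeric weights and banding thresholds are all assumptions chosen for illustration.

```python
# Hypothetical sketch of a disruption-attributes scoring rubric.
# Each attribute of a target network (core business, financial, personnel)
# receives a qualitative impact rating, which is mapped to a number and
# aggregated. The mapping and thresholds below are illustrative only.

IMPACT_SCORES = {"low": 1, "medium": 2, "high": 3}

def assess_disruption(ratings):
    """Aggregate qualitative impact ratings into a simple summary.

    ratings: dict mapping attribute name -> "low" | "medium" | "high"
    Returns (total_score, overall_band).
    """
    total = sum(IMPACT_SCORES[r] for r in ratings.values())
    maximum = 3 * len(ratings)  # best possible score for this many attributes
    # Band the total relative to the maximum possible score.
    if total >= 0.75 * maximum:
        band = "high"
    elif total >= 0.5 * maximum:
        band = "medium"
    else:
        band = "low"
    return total, band

score, band = assess_disruption(
    {"core business": "high", "financial": "medium", "personnel": "low"}
)
```

The sketch makes the evaluation problem visible: the output depends entirely on analyst-assigned ratings and arbitrary thresholds, so two analysts rating the same operation can produce different 'impact' results, which is precisely the measurement difficulty noted above.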
It is difficult to say in which bio-threat cases disruption strategies will be most successful. Much will depend on how quickly the intelligence community can collect and analyse information that may be indicative of an evolving bio-threat and risk. As discussed previously, good collection and analysis is contingent on having robust core intelligence processes in place and, more importantly, effective intelligence governance. Both are needed to ensure intelligence efforts are coordinated across multiple internal intelligence community stakeholders with relevant knowledge, and that information and expertise from external stakeholders (the scientific community) is available to provide earlier warning signs of an emerging bio-threat. While it is important not to over-play the potential for success of the kind of disruption strategies used against traditional organised crime groups, there are likely bio-threat scenarios where disruption strategies may make a difference. Arguably, the disruption of bio-threats could be seen as a continuum, with the individual threat actor on one end and a sophisticated organised group on the other. At the individual level one could have the scenario of a lone terrorist actor or a mad/bad scientist. While it may seem difficult to get early warning of the malicious act of a mad/bad scientist, we saw in the earlier discussion of 'insider threats' that it may be possible to disrupt their activity before it reaches an Amerithrax-style attack. Hindsight is twenty-twenty with the Bruce Ivins Amerithrax case, but the lessons learnt from that incident do provide guidance on the sources of collection and analysis required from within the intelligence and scientific communities to aid the disruption of this kind of bio-threat.
This does not mean that all similar cases of 'insider threats' will be detected, prevented or disrupted, but more careful collection and analysis of 'odd' behaviour or unusual security lapses by a scientist working in a high containment lab could reveal areas of vulnerability. Detection of abnormal changes in an individual's psychological profile and/or in their working environment can provide opportunities for those vulnerabilities to be disrupted. At the other end of the bio-threat scale, a more organised bio-criminal or terrorist planned event may resemble in some respects other illicit criminal markets and networks (drugs, identity fraud, money laundering) and thereby present opportunities for disruption. Again, this is not to suggest that disruption of organised bio-threat scenarios will always be possible. As discussed in earlier chapters, since 9/11 the intelligence community has had a mixed record in detecting even state based WMD programs, and uncovering the intention and capability of non-state actors to exploit dual use technology for malicious ends remains difficult. However, disruption could be useful in some bio-crimes where there is a bigger network of actors involved in the illicit business. For example, in crime scenarios where food suppliers are not legally registered to import food into a 'Five Eyes' country because it poses a biosecurity risk, there may be opportunities for parts of the intelligence community (particularly national law enforcement agencies) to work with agriculture, animal health and food regulatory agencies and relevant scientific stakeholders to disrupt illicit food suppliers from a country of concern. Equally, there may be opportunities to disrupt the activity of non-compliant biotechnology providers in a 'Five Eyes' country who supply dual use equipment to an overseas company with a questionable profile that resides in a country vulnerable to terrorist infiltration.
In addition to the useful knowledge that can be gained from criminology and law enforcement practice, there are also perspectives on disruption from contemporary counter terrorism studies that may have utility in the bio-threat and risk context. As noted above, since 9/11 law enforcement agencies across the 'Five Eyes' countries have increasingly deployed disruption strategies in countering terrorism, given that the preservation of life demands earlier interception of attacks, preferably at the planning stage. As Innes suggests in the case of counter terrorism operations, one aim is to overtly disrupt planned attacks, which has many effects, including sending a message to other terrorist groups that they may be next, reassuring the community and, where possible, deploying countering violent extremism (CVE) strategies in communities where future attacks may arise (Innes et al. 2017: 253). In the UK in particular, a key plank of the counter terrorism strategy has been disruption at both the strategic and the tactical level. At the strategic level, disruption has involved a number of initiatives, from arresting persons of interest to legislative action and enhanced surveillance (Innes et al. 2017: 265). In addition to the global influence of groups such as Al Qaeda and Islamic State, the growth in lone actor attacks, some 198 across the US and European countries from the 1970s to the late 2000s (Danzell and Montanez 2016: 136), has also been a significant catalyst for enacting more stringent legislative measures such as detention without trial and control orders (Walsh 2016). All 'Five Eyes' countries have also adopted further legislative changes that allow disruption of terrorist attacks by reducing the thresholds of reasonable suspicion that law enforcement and intelligence agencies need in order to access both electronic and human intelligence (HUMINT).
Governments' desire to do something to reduce the threat and risks posed by terrorists by creating increasingly proactive, flexible and permissive legislative environments has also raised concerns about the role of intelligence, secrecy and privacy. These issues will be discussed as they relate to the bio-threat and risk context in Chapter 8. But legislation is only one plank of effective counter terrorism, and the scale and pace of actual and potential terrorist attacks suggest other disruption strategies are required at the tactical level. Innes et al. suggest such strategies might include 'prosecution against an individual or a network for offences other than those they were principally being investigated for and/or interfering with the operations of the criminal enterprise in cases where there is insufficient evidence to secure prosecution' (2017: 265). They add that, at the tactical level, disruption strategies can 'interfere with the ability of suspected adversaries to operate effectively and efficiently' (ibid.). Innes et al. suggest that tactical disruption functions as 'near event interdiction', which can mitigate or minimise harms associated with an actual or planned terrorist attack (ibid.). Other counter-terrorism disruption strategies in 'Five Eyes' countries have included the creation of CVE policies and interventions, as well as the disruption or take-down of social media venues advocating politically motivated violence or recruitment to jihadist groups. Regardless of the complexity of post 9/11 terrorist attacks, whether the multi-site attacks in Paris in 2015 orchestrated by a group or the knife attack against two police officers in Australia in 2014 by one individual, the disruption strategies employed by law enforcement and national security intelligence agencies are also likely to be usefully employed in the bio-threat and risk context.
Just how useful the strategic and tactical disruption strategies used in conventional counter-terrorism will be in the bio-threat context depends on the nature of the intent and capability of the individual threat actor(s) and the risks posed by their actions. The effectiveness of disruption strategies in the bio-threat context, as with conventional terrorist attacks, is contingent on a range of variables unique to each event. In the bio-threat context, leaving aside large levels of uncertainty about the future threat trajectory for bio-terrorism, effective disruption will rely on law enforcement and intelligence agencies understanding how the intentions, capabilities and opportunities of threat actors operating in a particular environment make an attack possible. Intention, capability and opportunity will differ along the threat continuum from individual to group and from state to non-state actor. For example, in the research facility, hospital or high containment laboratory environment, intention, capabilities and opportunities may be shaped by actors that are internal or external to the facility, or only indirectly involved with it (Perman et al. 2013: 95). Threats can also be, as Perman suggests, overt or clandestine (ibid.). In some cases, if a scientist is motivated (for religious, environmental or political reasons) to commit an act of violence using a biological agent, it may be easier to disrupt their activities if they are public about their agenda. However, in the case of a clandestine plan, it could be very difficult to disrupt an attack launched externally or internally in a contained lab. Nonetheless, as we saw with historical cases of lone actor threats such as the Bruce Ivins Amerithrax incident, there are likely predicate steps in the process of carrying out an attack that can be revealed. Similarly, in the lesser known case of Dr.
Larry Ford, who was suspected of murdering his business partner in a biotech company, the police subsequently found a cache of weapons, white supremacist writings and allegations that he had attempted to infect six mistresses with biological agents (Perman et al. 2013: 94). Even in cases of lone actors like this, whose attack planning is more clandestine, there may well be an abundance of 'warning intelligence' that, if collected and assessed in time, might be useful in disrupting a planned lone actor attack. While it can be difficult to disrupt a lone actor plot, more elaborate plots by a group of conspirators could in some circumstances provide greater opportunities for interception and disruption by law enforcement and intelligence agencies. This is because plots involving multiple actors have more stages before the attack can be carried out. Some stages, such as communications, procuring supplies and transport, also provide points of vulnerability where threat actors can be exposed to authorities and disrupted. So an external threat such as a terrorist attack against a high containment laboratory might involve communications amongst group members, financing of the plan, purchasing of explosives and surveillance of the facility's perimeter. Each stage presents opportunities for disruption, provided intelligence and information is available to law enforcement and intelligence agencies. Similarly, a theft of intellectual property or biological material from a private sector biotechnology company might result from either an external criminal group or a state actor pressuring or paying an employee to steal information on their behalf. Again, intelligence may already exist about the criminal group or the compromised employee that provides opportunities for disruption.
In an ideal world, of course, all potential bio-threat and risk scenarios would be prevented early in the intent stage, when they are mainly an idea in a perpetrator's head. Pre-employment screening, including criminal checks and select agent risk assessments, will identify some individuals who are not suitable to access and work with dangerous biological agents. This will have an early disruptive effect, but it is not foolproof. People can lie about their circumstances in security suitability checks, giving them the ability to access and plan malevolent acts in a secure biological facility rather than just think about them. Once someone is operating inside a facility, depending on the nature of the planned attack, it can be very difficult for law enforcement and the intelligence community to respond quickly enough to disrupt the attack before it is fully implemented. In all threat scenarios (simple to complex), in addition to mandatory background checks for workers, each scientific institution needs to develop a full suite of threat assessments that can be updated regularly for different threat actors, including but not limited to visitors, criminals, lone actor attackers (internal and external), terrorist and issue-motivated groups, international terrorist groups and foreign powers (Perman et al. 2013: 94). These threat assessments should be developed by an institution's internal security department in collaboration with local law enforcement. The relatively low number of threat scenarios involving bio-agents since 9/11 will likely mean there are many intelligence gaps in assessing the intent, ability and opportunity of different threat types. However, baseline threat assessments will begin to build pictures of threat scenarios that should help promote better biosafety measures as well as opportunities to disrupt threats earlier, should they begin to emerge.
In summary, law enforcement and intelligence agencies working on the bio-threats and risks of the future can learn a lot from their counter terrorism colleagues. Since 9/11, countering terrorism has continued to produce lessons for the law enforcement and intelligence communities on how to disrupt emerging terror plots more effectively before they are implemented. The knowledge gained from investigating conventional terrorist attacks that don't involve biology can help those working on future bio-threats and risks by showing how to optimise the legislative, intelligence, investigative and community responses to terrorism, while also drawing lessons from contemporary counter terrorism efforts. In particular, the increase in lone actor terrorist attacks in the West, often with short notice, underscores that there is often either an insufficient amount of intelligence or intelligence of a type that cannot be revealed in court. In these cases, other tactical disruption strategies are gaining traction amongst 'Five Eyes' countries to mitigate the threat and harm posed by terrorists. Similarly, given the complexity of threat scenarios that could arise from the exploitation of dual use biotechnology, it may be difficult in some cases to collect sufficiently solid 'evidence', or to use bio-forensics to attribute confidently enough for a conviction for bio-terrorism or bio-criminal activity. Nonetheless, the various counter terrorism strategies discussed above point to ways threat actors may be disrupted through lesser offences while also providing a greater intelligence dividend on other individuals involved. The final knowledge area and stakeholder group that intelligence agencies and investigators working with bio-threats and risks may learn from is cyber security.
As Koblentz and Mazanec (2013) suggest, biological and cyber weapons share many characteristics, including but not limited to the difficulty of attribution and the way multiple technologies can be used for offensive, defensive and civilian applications (421-425). Both authors argue that because of these similarities there is likely a lot cyber can learn from how bio-threats have been managed historically. This is undoubtedly true, though in this section the focus will be the opposite: what can intelligence and investigative agencies working on bio-threats learn from the cyber threat and capability landscape? Even a cursory review of the literature suggests there are a number of areas where current cyber research and practice could inform the 'Five Eyes' intelligence communities' understanding of current and emerging bio-threats and risks. Space does not allow an exhaustive discussion of all of them, but there are three cyber areas in particular where I believe those working with bio-threats and risks could benefit greatly from knowing more, in order to learn the lessons of the cyber context and identify good intelligence and investigative practice. These areas are the dark web, cyber terrorism and cyber espionage. I will discuss each briefly in turn. Turning first to the dark web, here we are referring to content on the internet that is 'not indexed by standard search engines' (Weimann 2016: 196). Much of the dark web is hidden or blocked and can only be accessed by specialised browsers. Given the relative anonymity it provides, the dark web has seen the proliferation of child pornography, credit card fraud, identity theft, and drugs and arms trafficking, amongst other illicit offences. The dark web has only emerged in recent years, though law enforcement and intelligence agencies have made some inroads into its penetration and disruption.
The FBI's shutdown of the dark web site Silk Road, which operated between February 2011 and October 2013, was to that point the largest and most sophisticated anonymous online marketplace for illicit drugs (Zajácz 2017). New technological solutions are also being developed to better identify, collect and analyse illicit activity on the dark web, including DARPA's Memex software, which helps catalogue dark web sites (Weimann 2016: 203). Nonetheless, all 'Five Eyes' intelligence communities will need to continue to develop their collection, analytical and investigative capabilities for dark web content in order to profile the various illicit marketplaces more accurately and orchestrate impactful disruption activity across multiple markets. Although it is unknown, at least in an unclassified sense, to what extent illicit marketplaces exist that could benefit bio-threat actors (criminals or terrorists), law enforcement and intelligence agencies with a watching brief on emerging bio-threats and risks should undoubtedly be exploiting the dark web more for opportunities for disruption. A first step might be to map the bio-terrorism literature and identify researchers who have access to bioterrorism agent/disease research, along with their domains, institutions, countries and the emerging topics and trends in bioterrorism agent/disease research. Chen shows how, using informatics research, it might be possible to use knowledge mapping techniques to analyse productivity status, collaboration status and emerging topics in the bio-terrorism domain (Chen 2011: 335-367). Additionally, other intelligence and investigative teams working on non-bio threats such as conventional terrorist attacks, terrorism financing, drug trafficking or even child sexual exploitation may come across offenders who have links to others interested in exploiting dual use biological agents for malevolent objectives.
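The kind of knowledge mapping Chen describes can be illustrated with a minimal sketch: build a co-authorship network from a list of publications and count each researcher's distinct collaborators. This is not Chen's actual method or dataset; the toy records, author names and the simple tie-strength measure below are assumptions for illustration only.

```python
# Minimal sketch of knowledge mapping via a co-authorship network.
# The publication records are hypothetical placeholders.
from collections import Counter
from itertools import combinations

papers = [
    ("Anthrax detection assays", ["chen", "smith"]),
    ("Pathogen surveillance informatics", ["chen", "lee", "smith"]),
    ("Dual-use research trends", ["lee", "patel"]),
]

# Edge weights: how many papers each pair of researchers co-authored.
edges = Counter()
for _, authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

# Degree: number of distinct collaborators per researcher.
collaborators = Counter()
for a, b in edges:
    collaborators[a] += 1
    collaborators[b] += 1
```

Run over a real bibliographic corpus, the same two passes would surface the most-connected researchers and the strongest collaboration ties in a domain, which is the kind of structural picture that could inform collection priorities.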
So the work currently going on in intelligence agencies on broader cyber security issues such as cybercrime and cyber terrorism is directly relevant to improving collection and analysis against emerging bio-threats and risks. Developments in the second area, cyber-terrorism, provide another opportunity for bio-threat intelligence and investigative teams to learn from their colleagues working on cyber threats. In the past we have often thought of the classical 'bio-terrorism' attack as involving the aerosolising and dispersal of a dangerous pathogen like anthrax in a crowded place. This mode of attack may still be chosen in the future by a terrorist group (leaving aside for a minute the technical difficulties of such an attack), though acts committed through cyberspace open up other options for a bio-attack. Cyber security specialists' knowledge of cyber terrorism is still developing. We have seen, for example, groups like the Taliban and IS increasingly use computers for recruitment, propaganda and communications, but it remains difficult to know empirically how many current virtual attacks, such as ransomware, can be attributed to terrorists, or have led to deaths or impacted critical infrastructure in significant ways. Such attacks could just as easily be attributed to cyber hackers (criminals) or state sponsored espionage, issues we will return to shortly (Riglietti 2017; Bernard 2017; Heickerö 2014). Nonetheless, it is clear that terrorist groups are increasing their use of computers, including the dark web, given they know that intelligence communities are monitoring the surface internet and social media. In August 2014, Al-Aan TV reported that a laptop belonging to a Tunisian member of IS captured in Syria contained thousands of documents from the dark web, including 19 pages on making biological weapons in a way that would impact the biggest number of people (Weimann 2016: 200).
There have also been cases where IS has carried out a series of cyber-attacks, 'exclusively computer based, which in one instance even led to the disclosure of private information regarding us government officials, from private conversations to work and email addresses' (Riglietti 2017: 19). The final area of cyber security that is useful for bio-threat intelligence and investigative teams to reflect on relates to cyber hacks and espionage. Putting hacks and espionage together is not meant to suggest that the two are always linked, though, as we saw with Russian interference in the 2016 US presidential election, they can be. China too is pursuing an increasingly sophisticated and aggressive cyber espionage strategy aimed at political interference and stealing intellectual property (Inkster 2015). There seems little doubt that the extent of hacking (unauthorised access to a computer or network) perpetrated by state and non-state actors is on the rise, and network vulnerabilities across the civil and military space remain. In a recent article, FBI Assistant Special Agent in Charge (Chicago) Todd Carroll said the average time between an unauthorised user getting inside a network and being detected is 150 days, 'a lifetime in cyber means'. Carroll went on to say that 57% of business owners do not have a dedicated employee or vendor monitoring for cyber-attacks (Stone 2017). We have also seen in recent years the growth in malware and ransomware attacks across the globe. For example, in 2017 the WannaCry ransomware attack caused 230,000 infections across 124 countries, locking down banking, energy and manufacturing systems (Schilling 2017). The dark web also provides terrorist and criminal groups with opportunities to operate botnet campaigns in anonymity, remotely operating networks of computers to commit attacks on other systems, including critical infrastructure.
again there is insufficient space to provide a full survey of all the cyber hacking and espionage threats, and what to do about them is beyond the scope of this chapter (clarke and knake 2010: 257-280). nonetheless, hacking attacks, whether by state sponsors (espionage) or non-state actors (terrorists or criminals), provide another rich source of knowledge to be collected and assessed by those working on emerging bio-threats and risks. for example, it would seem unwise for bio-threat intelligence and investigative teams not to learn from the fast-changing angles of attack used by hackers, given that the physical security of biological institutions, their intellectual property and the kinds of biological products produced in such facilities all rely on secure cyber systems. we have seen in recent years the take-down of government websites and ransomware attacks on both government and private sector networks, and increasingly more information is being shared and stored via the cloud. what would a major ransomware attack that locked down the entire bio-surveillance capability of a public health authority such as the cdc do to national health security? could a cybercriminal group infiltrate the network of a major biodefense company, steal ip and sell it to a terrorist group on the dark web? could research on the genetic sequences of pathogens, stored via the cloud on non-secure networks, be stolen by a terrorist group or state actor to engineer bio-weapons? (blue ribbon project 2015: 44-46). in all three areas discussed above, a fuller development of links between those working in the cyber intelligence collection and analysis streams and those who examine emerging bio-threats and risks is a necessary first step in bringing relevant knowledge and practice from cyber security to bio-threat stakeholders.
in this final section the attention turns to two questions. first, what kinds of stakeholders play a role in treating bio-threats and risks? second, in performing these roles, how can they help the 'five eyes' intelligence communities build better capability (knowledge, practice and technology) for treating actual or emerging bio-threats and risks? as we have seen so far, the management of bio-threats and risks is potentially a crowded enterprise, with many stakeholders (beyond the intelligence communities) playing critical roles. in this section, i have grouped them into three 'types of stakeholder': first responders, science and technology stakeholders and security stakeholders. these are not three distinct clusters of unique stakeholders that do not interact with each other. depending on the nature of the bio-incident that has occurred, one would expect to see close interaction amongst the various knowledge brokers and practitioners from each group. for example, a release of a synthetically manufactured select agent in an airport should result in combined strategic and tactical contributions from first responders, engineers and security personnel rather than each being delivered in isolation. an uncoordinated delivery of knowledge, practice and expertise from multiple stakeholders to treat an unfolding bio-threat/risk will not produce the best outcome for mitigating the risk or disrupting the future potential of similar threats occurring. again, as with previous sections, the focus here is not a deep exploration of the specific knowledge, practice or technology of all stakeholders potentially involved in the treatment of bio-risks. this would be an impossible task.
instead this section will explain briefly what each of the three broad stakeholder categories (first responders, science and technology, and security) can do to treat bio-risks (current or potential), and what intelligence communities can learn from this in ways that extend their capabilities to manage bio-threats and risks. the label 'first responders' is a descriptor for a much broader range of stakeholders, including fire/hazmat, paramedics, emergency responders, and health and hospital service providers. each would play a different role in both responding to and treating a bio-incident depending on the type of biological hazard, their jurisdictional and legislative responsibilities and their fiscal capacity. in all 'five eyes' countries, with perhaps the exception of new zealand (with a smaller population and only one national government), the complexity of response will be governed in particular by the overlapping roles that various local, state and federal first responders might play. obviously in the us, with multiple federal, state and local agencies, the coordination of first responder efforts in a bio-incident presents more challenges than in other 'five eyes' countries such as australia and the uk, which have fewer agencies and jurisdictions. there is not an abundance of academic literature on the role of first responders in treating bio-threats and risks. this lack of evidence makes it difficult to assess accurately what first responders can do to treat bio-threats and risks, what the challenges are and what the intelligence community can learn from these important stakeholders. there is, however, some research available that can increase the intelligence communities' understanding of first responder capabilities to treat bio-threats and risks, as well as illuminate some of the challenges in doing so.
this research should provide at least a start on what the intelligence community can learn from first responders as they deploy their knowledge and practice to disrupt and treat bio-threats and risks. 9/11 and the amerithrax incident provided a catalyst for law enforcement and public health agencies to work more closely together in responding to an unfolding threat. since amerithrax, further work has been done across the 'five eyes' countries to better coordinate the work of law enforcement and public health agencies on treating bio-threats and risks. but such efforts have not routinely involved the broader spectrum of national security intelligence agencies, who have tended to play a more strategic and ad hoc role compared to their law enforcement counterparts. overall, policy, coordination and legislative efforts to bring first responders and members of the intelligence and law enforcement community together have had only mixed success, for a number of reasons. in 2007, a study of how law enforcement and public health agencies in the us, canada, the uk and ireland work together on bio-threat incidents identified several common barriers to improving multi-agency responses (strom and eyerman 2007). these included cultural, legal, structural, communication and leadership barriers (ibid.: 135). ten years on from strom and eyerman's research, other researchers have made similar observations about the ability of first responders to manage a bio-threat incident effectively and to work with the law enforcement and intelligence community on such tasks. but it is not just the capability issues raised above; other research points to technical challenges in treating the impact of bio-threats and risks in the physical environment.
for example, research by chemists and environmental engineers shows that, given the varying nature and strains of bacteria, the science for assessing risk of exposure may not be able to provide a fully accurate risk assessment of a building's vulnerability or resilience to a bio-attack, nor, in some cases, confirm whether first responders have effectively 'cleaned the environment up after exposure' (canter 2007; taylor et al. 2013). a lack of effectiveness in responding to a bio-threat incident in a local area can obviously have broader public sector implications for both the treatment and preparedness of bio-risks. for example, gerstein (2017: 86), citing a study by the advocacy group trust for america's health, reported that 26 states and dc scored 6/10 or lower on a scale of preparedness. additionally, since 9/11, major disease outbreaks such as sars and ebola have demonstrated fragility in parts of the world, including in some 'five eyes' countries' public health response capability, which remains a concern should there be a major bio-terrorist event. the blue ribbon study project report raised similar concerns about the capability of certain responders, including those local, state and federal agencies that might be involved in decontaminating sites following a bio-incident. in the us, the report raised coordination issues between federal, state and local agencies over which first responder agency would take the lead in decontaminating and remediating environments, and how other agencies would get involved to ensure the attack site was deemed safe for people to return (blue ribbon study project 2015: 26). one underlying theme arising from the studies mentioned on first responders' roles in treating bio-threats and risks is that the intelligence community must share more information with emergency services on the nature of the threat they are meant to respond to. this is not to suggest that no sharing is going on in any of the 'five eyes' countries.
my selected interviews with law enforcement and intelligence officials in each country did not give the impression that no sharing was going on with first responders. however, it is clear that if local fire officers or emergency staff in a hospital are to respond better to a bio-incident, they will need regular, consistent, reliable, real-time information and intelligence. this is vital to them safely securing the scene, or rapidly diagnosing and treating infected patients while also keeping themselves safe. importantly too, the more intelligence first responders receive, the more likely they are to preserve any relevant evidence from the scene that might be needed by either the law enforcement or intelligence communities. gerstein makes a valuable point about improving bio-preparedness and response activities when he suggests that first responders need to be seen as part of a complex system rather than as each representing a series of programs (gerstein 2017: 88). in addition to the range of knowledge and practice the intelligence community can learn from first responders, arguably the biggest lesson is to seek to better understand the 'linkages among disparate disciplines (biodefense, public health, emergency management), government, industry, the scientific community and themselves to better support first responders' (ibid.). if the 'five eyes' intelligence communities were able to create the necessary national health security coordination arrangements suggested in chapter 6, such as the health security coordination council and the national health security strategy, then through these institutions further intelligence sharing mechanisms could be established to improve information flow between the intelligence communities and first responders at federal, state and local levels.
first, however, further research is required to investigate how law enforcement and intelligence communities currently work with first responders, to identify and as much as possible ameliorate the cultural, legal, communication and leadership barriers that persist. a second cluster of knowledge and stakeholders for treating bio-threats and risks could be loosely described as 'science and technology' stakeholders. in earlier sections, under the relevant headings (prevention and disruption), significant space was devoted to how our intelligence communities can learn from a range of stakeholders working across a diverse array of disciplines (including bio-surveillance, public health, biosafety, criminology, counter terrorism and cyber). in each of these disciplines, discussion included exploration of relevant science, technology and knowledge useful to the intelligence community in preventing and disrupting bio-threats and risks. some of that discussion, for example on bio-surveillance, biosafety and strengthening global health, is also relevant to our focus here on treating bio-incidents. however, in this section the focus is not what the intelligence community can learn from stakeholders working in the above disciplines, but rather what it can learn from disciplines more removed from the biological sciences or the relevant social sciences (e.g. engineering or security studies). what can the intelligence community learn from physical, mechanical or environmental engineering? there are multiple roles that engineering specialties could play, and are playing, in preventing, disrupting and treating bio-threats and risks. for one, historically the us dod has relied on engineers and microbiologists to provide advice on the weaponisation of biological agents under a range of scenarios and conditions (state actor and terrorist threats).
for example, even pre 9/11, between 1999 and 2000 dtra funded project bacchus to see if a team of scientists and engineers, who allegedly did not have extensive experience in bio-weapons, could build a bio-weapon facility using just commercially available items. the objective was to see if the team could make anthrax successfully without detection by the intelligence community, though it was later revealed that this team did have substantive technical knowledge and support throughout the project (vogel 2013: 41-43). engineers have also long been engaged in studying aerosolisation dynamics, which has increasingly become a multi-disciplinary collaboration of environmental engineers, biomedical engineers, microbiologists, chemists and epidemiologists (xu et al. 2011). related to aerosolisation studies has been the work of hardware and software engineers, many of whom came from the aerospace and automotive industries, who have brought their skills to modelling bio-terrorism attacks to help first responders predict how airborne particles might move through sections of a city under certain weather and wind-flow conditions (thilmany 2005). other engineering studies, sometimes referred to as bio-protection studies, have been important in the design of the heating, ventilation and air conditioning (hvac) systems used to resist biological contaminants. much of this research was activated after the amerithrax incident, and is aimed at reducing the health consequences of airborne contaminants by augmenting heating and air conditioning systems (ginsberg and bui 2015). another focus of engineering-led research relates to improving the portability, speed and reliability of bio-aerosol monitors for pathogens. one recent study has been working on a device that would be fully portable and automated, capable of detecting selected airborne microorganisms on the spot within 30 to 8 minutes depending on the genome and particular strain of the organism (agranovski et al. 2017).
in this last sub-section of our exploration of what other stakeholders may be useful in treating bio-threats and risks, we turn our attention to the role of security officers. i am conscious that in the discussion above regarding prevention and biosafety, much was said about the role of security officers and managers in promoting biosecurity and biosafety across all sectors of the bio-sciences enterprise (e.g. research centres, hospitals, biotechnology companies, public and private labs). in this section, we focus instead on the role of security officers and managers across the broader economy, beyond the biosciences. as argued in previous chapters, in addition to taking a one health perspective on bio-threats and risks, 'five eyes' intelligence communities and their law enforcement colleagues need to understand the potential development of bio-threats and risks beyond the technical world of biotechnology and labs, to include their wider social, economic and community contexts. hence in this section we are referring to the role of security officers and companies that work across the international, national, state and local economies in each 'five eyes' country. given that the trajectory of most (if not all) future bio-threats is unknown, our intelligence communities need to be forging more formalised (less ad hoc) relationships with security officers in a range of non-biotechnology industries (banking, mining, food supply, agriculture, critical infrastructure). as nalla and wakefield (2014) argue, several factors have increased the role of private security since the second world war. increased economic wealth, enhanced security technology (alarms, access control and cctv), and growing control by a number of private sector companies of publicly accessible places have, amongst other factors, all contributed to the growth in private sector security (ibid.: 727).
while it is difficult to generalise, 'as the functions of security officers/agencies are as varied as the organisations that employ them' (ibid.: 731), their functions and roles cut across many facets of each 'five eyes' nation, including office buildings, warehouses, shopping malls, education establishments, residential complexes and critical infrastructure. one often thinks of the classic scenario of a security guard standing in front of a physical gate, but this is one role among many, which might also, depending on their functions, include traffic control, surveillance, responding to emergencies and security vetting. in large complex companies, airports and electricity plants, it is likely that security officers will have a deep understanding of their physical and virtual security environments, and this kind of expert knowledge would be integral to both them and the intelligence community in achieving threat awareness, prevention, surveillance, disruption, treatment and recovery for bio-threats and risks which may manifest in their operating environment. historically, however, the relationship between intelligence communities (including law enforcement) and private sector security has not been optimal, partially because of a lack of trust between the two (ibid.: 739). even so, several studies on private and public sector security show areas of improvement across each 'five eyes' country. some of these improvements were initiated by governments, such as the uk making significant cuts to policing in the late 1990s and mid-2000s and asking the private security sector to pick up more cheaply what were considered less core policing functions, such as offender management and the transfer of prisoners. in other cases, governments were interested in engaging with the private sector to extend their own security and intelligence collection capabilities against terrorism. connors et al.
(2000), wakefield (2003) and rigakos (2002) provide more detailed analysis of the range of factors involved in building partnerships with private sector security companies in the us, the uk and canada respectively. 9/11, and of course subsequent terrorist attacks in many western countries, has prompted a more focused attempt by 'five eyes' countries to reach out to the private sector, including private sector security, given many attacks occur in public places owned or managed by the private sector. threats to publicly and privately owned critical infrastructure (aviation, power, water and telecommunications) have also driven 'five eyes' governments' closer liaison with the private sector. for example, in the us, dhs has established a private sector office to provide government advice on relevant security issues to the private sector as well as to promote public-private partnerships. in australia, since 9/11, parts of the australian intelligence community, particularly asio, have developed closer links with the private sector. in 2004 australia's attorney-general's department created the business-government advisory group on national security to provide a vehicle for the government to discuss a range of national security issues and initiatives with ceos and senior business leaders (dpm&c 2015: 6). the group later (2014) evolved into the australian government's industry consultation on national security (ibid.). more recently (2017), the australian government released its strategy for protecting crowded places from terrorism. this significant policy document was developed in close partnership with federal, state and local governments, the intelligence community and the private sector, the key objective being to assist owners and operators to increase the safety, protection and resilience of crowded places across australia (anzctc 2017).
an interesting aspect of this strategy is that it places the primary responsibility for protecting sites and people on private sector businesses. similar policy articulations appear in the uk's counter-terrorism strategy (hmg 2011) and canada's approach to counter-terrorism (canadian government 2011). in summary, it is clear that various agencies of the 'five eyes' intelligence communities and their broader law enforcement counterparts have increased their liaison with, and implemented various initiatives alongside, private sector industry. what is less clear is the nature and extent of these as they relate to the prevention, disruption and treatment of potential bio-threats and risks. much is unknown, for example, about whether intelligence and law enforcement communities are actively working in partnership with the private sector beyond the classical threat typologies of basic terrorist tactics, improvised explosive devices or vehicle-borne attacks. given the low probability, high impact nature of the evolving bio-threat environment, it is likely that many private sector companies (banking, shopping malls, mining, hotels) see little need to include bio-threats in their security risk management plans, or indeed to consult with intelligence and law enforcement communities on them. while it is important not to be alarmist about low probability threats that are, on balance, more likely to affect the biosciences community than the broader private sector economy, it seems unwise for the latter not to consider the impact of such bio-threats on their operations and to at least have formalised dialogues on these with the intelligence community. but such a dialogue will in the future rely on several factors already identified by researchers as necessary for developing more effective public-private crime prevention strategies.
prenzler and sarre list several such factors, including a common interest in reducing a specific crime, leadership, mutual respect, information sharing based on high levels of trust in confidentiality, and formalised mechanisms for consultation and communication (prenzler and sarre 2014: 783). this chapter surveyed the role of stakeholders external to the 'five eyes' intelligence communities in preventing, disrupting and treating bio-threats and risks. depending on the particular bio-threat, a diverse array of stakeholders could provide knowledge, skills and capabilities to the intelligence community. the large number of disciplines and stakeholders with relevant technical knowledge suggests that they will continue to play a critical role in the prevention, disruption and treatment of bio-threats and risks. in many cases, such as in bio-surveillance, forensics and even engineering, the scientific and technical stakeholders discussed here may play a greater role than the traditional intelligence and investigative response to managing bio-threats and risks. the chapter also highlighted that although each 'five eyes' intelligence community has a wealth of knowledge to tap into from stakeholders, in most cases all stakeholder groups face their own theoretical and practical limitations. analysts and investigators working on bio-threats and risks need to understand these limitations while also seeking to build deeper and more formalised partnerships with scientific, technical and cross-disciplinary stakeholders. in the final chapter, chapter 8, we shift the focus away from the practice and processes involved in interpreting bio-threats and risks to oversight and accountability issues.
given the legislative, ethical and normative challenges modern intelligence practice faces, particularly in understanding the potential threat trajectory of synthetic biology, what role can oversight and accountability play in achieving the objectives of the intelligence communities in liberal democracies?

references

miniature pcr based portable bioaerosol monitor development
australia's strategy for protecting crowded places from terrorism. anzctc, australian government
biological weapons-related common control lists
biosafety and biosecurity as essential pillars of international health security and cross-cutting elements of biological non-proliferation
biosecurity in research laboratories
blue ribbon study panel on biodefense. a national blueprint for biodefense: leadership and major reform needed to optimise efforts
biodefense special focus: defense of animal agriculture
cdc working group. framework for evaluating public health surveillance systems for early detection of outbreaks: recommendations from the cdc working group
insider threats
us military says it mistakenly shipped live anthrax sample
building resilience against terrorism: canada's counter terrorism strategy. ottawa: government of canada
biosecurity imperative: an urgent case for extending the global health security agenda
addressing residual risk issues at anthrax clean up. how clean is safe?
90 day internal review of the division of select agents and toxins. report of the advisory committee to the director
bioterrorism and knowledge mapping
dark web: exploring and data mining the dark side of the web
the traditional tools of biological arms control and disarmament
cyber war
the politics of surveillance and response to disease outbreaks
operation cooperation: guidelines for partnerships between law enforcement and private security organisations
understanding the lone wolf phenomena: assessing current profiles
fda found more than smallpox vials in storage room
more deadly pathogens, toxins found improperly stored in nih and fda labs
marburg biosafety and biosecurity scale (mbbs): a framework for risk assessment and risk communication
review of australia's counter terrorism machinery. department of prime minister and cabinet
beyond the ebola battle: winning the war against future epidemics
iarpa director jason matheny advances tech tools for us espionage
high containment laboratories: national strategy for oversight is needed
biosurveillance: non-federal capabilities should be considered in creating a national biosurveillance strategy
high containment laboratories: assessment of the nation's need is missing. testimony before the subcommittee on emergency preparedness, response and communications
biosurveillance: observations on the cancellation of biowatch gen-3 and future considerations for the program. testimony before the subcommittee on emergency preparedness, response and communications, committee on homeland security, house of representatives
gao. high containment labs: coordinated actions needed to enhance the select agent program's oversight of hazardous pathogens (gao 18-145)
predicting virus emergencies and evolutionary noise
the biological and toxin weapons convention. national security and arms control in the age of biotechnology
glaring gaps: america needs a biodefense upgrade
the neglected dimension of global security: a framework to counter infectious disease crisis
bio protection of facilities
national biosafety systems
risk and benefit analysis of gain of function research: final report
cyber terrorism: electronic jihad. strategic analysis
global health security: the wider lessons from the west african ebola virus disease outbreak
contest: the uk's strategy for countering terrorism. london: her majesty's government
cyber espionage. adelphi series
from detection to disruption: intelligence and the changing logic of police crime control in the uk
a disruptive influence? preventing problems and counter violent extremism policy in practice
government biosurveillance to include social media
implementation of the international health regulations (2015) in the african region
advances in anthrax detection: overview of bioprobes and biosensors
living weapons
viral warfare: the security implications of cyber and biological weapons
biological weapon convention
ebola response impact on public health programs
cdc's response to the 2014-2016 ebola epidemic, west africa and the united states. mmwr, supplement
the proliferation security initiative: a model for future international collaboration
biosafety in the balance
digital disease detection: a systematic review of event-based internet biosurveillance systems
implementing the global health security agenda: lessons from the global health and security programs
basic principles of threat assessment
the role of partnerships. security management
the research impact handbook
top us intel official calls gene editing a wmd threat
the para police
defining the threat: what cyber terrorism means today and what it could mean tomorrow
signale: early warning system
laboratory biorisk management: biosafety and biosecurity
revealed: 100 safety breaches at uk labs handling potentially deadly diseases. the guardian
ransomware 101: how to face the threat
cdc probe of h5n1 cross contamination reveals protocol lapses, reporting delays
pandemic readiness review says $4.5 billion a year needed
secretary tillerson lauds global health security agenda
disruptive innovation can prevent the next pandemic
natural or deliberate outbreak in pakistan: how to prevent or detect and trace its origin: biosecurity, surveillance, forensics
army probe of anthrax scandal raises more red flags
physical elements of biosecurity
who isn't equipped for a pandemic or bioterror attack? the who
lessons from the anthrax letters
the week in fintech: fbi agent says cybersecurity practices need to change
interagency coordination in response to terrorism: promising practices and barriers identified in four countries
looking at the formulation of national biosecurity education action plans
the role of protection measures and their interaction in determining building vulnerability and resilience
to harm's way: engineering software and micro technology prepare the defense against bioterrorism
phantom menace or looming danger?
selling security: the private policing of public space
intelligence and intelligence analysis
australian national security intelligence collection since 9/11: policy and legislative challenges
going dark: terrorism on the dark web
ebola virus disease in west africa: the first nine months of the epidemic and forward projections
signal recognition during the emergence of pandemic influenza type a/h1n1: a commercial disease intelligence unit's perspective. intelligence and national security
utility and potential of rapid epidemic intelligence from internet-based sources
labs cited for 'serious' security failures in research with bioterror germs
silk road: the market beyond the reach of the state

key: cord-012503-8rv2xof7
authors: levintow, sara n.; pence, brian w.; powers, kimberly a.; sripaipan, teerada; ha, tran viet; chu, viet anh; quan, vu minh; latkin, carl a.; go, vivian f.
title: estimating the effect of depression on hiv transmission risk behaviors among people who inject drugs in vietnam: a causal approach
date: 2020-08-24
journal: aids behav
doi: 10.1007/s10461-020-03007-9
sha:
doc_id: 12503
cord_uid: 8rv2xof7

the burden of depression and hiv is high among people who inject drugs (pwid), yet the effect of depression on transmission risk behaviors is not well understood in this population. using causal inference methods, we analyzed data from 455 pwid living with hiv in vietnam 2009-2013. study visits every 6 months over 2 years measured depressive symptoms in the past week and injecting and sexual behaviors in the prior 3 months. severe depressive symptoms (vs. mild/no symptoms) increased injection equipment sharing (risk difference [rd] = 3.9 percentage points, 95% ci -1.7, 9.6) but not condomless sex (rd = -1.8, 95% ci -6.4, 2.8) as reported 6 months later. the cross-sectional association with injection equipment sharing at the same visit (rd = 6.2, 95% ci 1.4, 11.0) was stronger than the longitudinal effect. interventions on depression among pwid may decrease sharing of injection equipment and the corresponding risk of hiv transmission. clinical trial registration: clinicaltrials.gov nct01689545. electronic supplementary material: the online version of this article (10.1007/s10461-020-03007-9) contains supplementary material, which is available to authorized users. despite global progress in combating the hiv epidemic, people who inject drugs (pwid) remain disproportionately at risk of hiv infection in southeast and central asia and eastern europe [1-5]. sharing injection equipment is one of the most efficient means of hiv transmission [6, 7], and in these regions, pwid have limited access to and suboptimal use of harm reduction services and antiretroviral therapy (art) [8, 9].
the persistence of injection drug use and viremia, without adequate preventive services, results in a high risk of hiv transmission to injecting or sexual partners [10] . the burden of depression is high among pwid and may further interfere with hiv prevention efforts. up to 50% of pwid suffer from severe depressive symptoms [11] [12] [13] [14] [15] , and the presence and severity of depressive symptoms are closely linked to frequency of injection and risk of relapse, suggesting a bidirectional relationship between depression and injection drug use [16] [17] [18] . comorbid depression consistently results in poor hiv treatment outcomes, such as reduced art use and viral suppression [19] [20] [21] [22] [23] [24] . depression may be an important driver of continued hiv transmission among pwid if symptoms increase transmission risk behaviors (e.g., sharing injection drug use equipment, engaging in condomless sex) in the absence of viral suppression. however, while the deleterious effect of depression on hiv treatment outcomes is well established across populations, its effect on the injecting and sexual behaviors that can facilitate hiv transmission or acquisition is not well understood among pwid. although there is substantial evidence that depression increases sexual risk behaviors among men who have sex with men (msm) [25] [26] [27] [28] [29] , few studies have focused on pwid and assessed injecting behaviors. specifically, in vietnam, the focus of this study and a setting where the hiv epidemic is concentrated among men who inject drugs [30, 31] , there have been no prior studies of the relationship of depression with injecting and sexual behaviors. existing studies on depression and hiv transmission risk behaviors among pwid have suffered from several methodological limitations. 
to our knowledge, all previous studies that include pwid populations have assessed only correlations between depression and transmission risk behaviors, without inferring causality [32] [33] [34] [35] [36] [37] [38] [39] [40] . in these studies, depression and risk behaviors have typically been evaluated for the same time period (e.g., self-report covering the last month), without the ability to infer whether depression preceded risk behaviors or vice versa [35-37, 39, 40] . potential confounders of the relationship between depression and risk behaviors were also measured for the same retrospectively assessed time period. studies that used traditional statistical adjustment for these contemporaneous covariates [34, 38] may have induced bias if these variables acted as causal mediators rather than confounders [41] . in addition, although depression is known to be episodic [42] , prior analyses have primarily relied on a single assessment rather than accounting for changes in both depressive symptoms and time-varying confounders [33, 34, 38, 40] . possibly stemming from these methodological issues, existing evidence for an association between depression and transmission risk behaviors in pwid is inconsistent. while an early meta-analysis (that included studies among pwid) found little evidence for an association between depression and sexual risk behaviors [32] , more recent studies in pwid and msm populations have found higher sexual risk associated with depression [34, 35, 40] or a non-linear association [36] in which mild symptoms are most predictive of sexual risk. the few studies that have evaluated the association of depression with injecting risk behaviors among pwid have suggested that depressive symptoms were associated with greater injecting risk behaviors [35, [37] [38] [39] . we sought to overcome past methodological issues by using a causal approach to estimate the effect of depressive symptoms on hiv transmission risk behaviors among pwid. 
we used marginal structural models, a tool for causal inference that accounts for time-varying exposures and confounders [43] [44] [45] , with longitudinal data from male pwid living with hiv in vietnam. we hypothesized that depression would increase behaviors associated with risk of hiv transmission to injecting partners (sharing injection equipment) and sexual partners (condomless sex). by examining depression as a potential underlying cause of hiv transmission through these behavioral mechanisms, we sought to provide clearer evidence about the potential for interventions against depression to avert future hiv infections among pwid. we used longitudinal data from a randomized controlled trial of an hiv stigma and risk reduction intervention among pwid living with hiv in thai nguyen, vietnam from 2009 through 2013 [46] . thai nguyen is a province in northeastern vietnam with an estimated hiv prevalence of 34% among its approximately 6000 pwid [47] [48] [49] . participants were recruited via snowball sampling from the 32 thai nguyen sub-districts (of 180 total) with the most pwid. recruiters (former and current pwid) approached members of drug networks in private places to discuss study enrollment and then accompanied or referred interested participants to the study site for screening. at screening, all participants were tested for hiv using two rapid enzyme immunoassay tests run simultaneously (determine: abbott laboratories, abbott park, il and bioline: sd, toronto canada), with discordant results resolved with a third rapid assay (hiv rapid test: acon, san diego, ca). the trial enrolled 455 participants who met the following eligibility criteria: 1) hiv-positive according to study test results, 2) male (given that 97% of pwid in thai nguyen are male), 3) age ≥ 18 years, 4) had sex in the past 6 months, 5) injected drugs in the previous 6 months, and 6) planned to live in thai nguyen for the next 24 months (the duration of the trial). 
questionnaire and laboratory data were collected at study visits every 6 months during 24 months of follow-up. the questionnaire collected information on demographics, injection drug use and other substance use, sexual behavior, depressive symptoms, quality of life, pre-study hiv diagnoses (baseline only), and art use. blood specimens were collected to confirm hiv infection at baseline and measure cd4 cell count at baseline and over follow-up. the exposure of interest was depressive symptoms over the past week, as assessed by the 20-item center for epidemiologic studies depression scale (ces-d), which has been validated as a reliable measure of depressive symptoms in vietnam [50, 51] . consistent with past work, we defined severe depressive symptoms as ces-d scores ≥ 23, mild depressive symptoms as scores 16-22, and no symptoms as scores < 16 [15, [50] [51] [52] . the transmission risk behavior outcomes were any sharing of injection equipment (needles, syringes, solutions, or distilled water) and any condomless sex with a female partner, reported for the prior 3 months. we also descriptively examined the numbers of injection equipment sharing and condomless sex acts in the prior 3 months reported at each visit. questionnaire and laboratory data included potential confounders of the depression-risk behavior relationship. time-fixed covariates, which were reported at baseline and assumed to be stable throughout the study period, were marital status, age, employment status, intervention arm, history of overdose, alcohol use, hiv diagnosis prior to enrollment, and previous art use. employment and alcohol use could, in theory, vary over time, but these variables remained fairly constant in our population, motivating our decision to treat them as time-fixed. 
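the ces-d cut-points above (≥ 23 severe, 16-22 mild, < 16 none) amount to a simple scoring rule, which can be sketched as a small helper function. this is an illustrative sketch only; the function name is hypothetical and not from the study's analysis code.

```python
def cesd_category(score: int) -> str:
    """classify a 20-item ces-d total score using the study's cut-points:
    >= 23 severe symptoms, 16-22 mild symptoms, < 16 no symptoms."""
    if score >= 23:
        return "severe"
    if score >= 16:
        return "mild"
    return "none"
```

the binary exposure used in the main analysis then corresponds to `cesd_category(score) == "severe"` versus everything else.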
time-varying covariates measured at one time point may affect subsequent depression and risk behaviors; they may also be influenced by depression and risk behaviors from a previous time point. thus, time-varying covariates may act as either confounders or mediators, depending on the time point assessed [41, 43] . for this analysis, time-varying covariates were cd4 cell count category (< 200, 200-499, ≥ 500 cells/μl), depressive symptoms at the visit prior to exposure measurement, and transmission risk behaviors at the visit prior to exposure measurement. in the main analysis, we used marginal structural models to estimate the average causal effect of severe depressive symptoms on the risks of any injection equipment sharing or any condomless sex (separately) in the period three to 6 months later, controlling for time-fixed and time-varying confounders. we evaluated each risk behavior outcome (reported with respect to the prior 3-month period) at the next 6-month visit in order to temporally separate it from the exposure of depressive symptoms (hereafter referred to as the "longitudinal effect"). in a second analysis, to facilitate comparison with prior research, we used marginal structural models to estimate the association between depressive symptoms and risk behaviors reported at the same visit, where temporal ordering could not be differentiated ("cross-sectional association"). we repeated both analyses using three levels of depressive symptoms (severe, mild, none) in addition to the binary categorization (severe, not severe). we used inverse probability weighted estimation of marginal structural models [43, 53] . weights were estimated from a propensity score model for the probability of severe depressive symptoms as a function of time-fixed and time-varying confounders. 
time-fixed confounders had a constant (baseline) value over all visits; time-varying confounders used the value from the visit immediately preceding the visit at which depressive symptoms were assessed. in the main analysis, the propensity score model was estimated using logistic regression to model the probability of severe (vs. mild or no) depressive symptoms. in a second set of analyses, we used ordinal logistic regression to separately model the three levels of depressive symptoms (severe vs. mild vs. none). propensity score model diagnostics assessed positivity for all confounder-defined subsets of the study population. the denominator of the weights was the predicted probability of depressive symptoms from the propensity score model, and weights were stabilized using the marginal probability of depressive symptoms in the numerator. application of the weights to the study population removes the association between depressive symptoms and potential confounding variables included in the propensity score model, permitting estimation of a causal effect under key assumptions [53, 54] (see discussion). in the weighted study population, we estimated the risk difference (rd) for the risk behavior outcomes using generalized estimating equations (binomial regression models with an identity link) to account for repeated observations on participants [55] [56] [57] . for the longitudinal analysis, this weighted rd can be interpreted as the causal effect of depressive symptoms on the risk behavior outcome: that is, the difference in risk of the behavior in the period three to 6 months later if all participants had depressive symptoms compared with the risk if they all did not have depressive symptoms. to account for missing data due to missed study visits, we used multiple imputation by chained equations (mice) [58, 59] , imputing and analyzing 50 datasets. 
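the weighting scheme described above (stabilized weights with the marginal probability of severe symptoms in the numerator and the fitted propensity score in the denominator, followed by a risk difference in the weighted pseudo-population) can be sketched in numpy. this is a minimal illustration for a binary exposure with hypothetical function names; it omits the gee step the authors used to account for repeated observations on participants, and the confidence intervals that step provides.

```python
import numpy as np

def stabilized_weights(exposed, propensity):
    """stabilized inverse probability weights: the marginal probability
    of exposure forms the numerator, the individual propensity score
    (predicted probability of exposure given confounders) the denominator."""
    exposed = np.asarray(exposed, dtype=bool)
    propensity = np.asarray(propensity, dtype=float)
    p_marginal = exposed.mean()
    numerator = np.where(exposed, p_marginal, 1.0 - p_marginal)
    denominator = np.where(exposed, propensity, 1.0 - propensity)
    return numerator / denominator

def weighted_risk_difference(outcome, exposed, weights):
    """risk difference (exposed minus unexposed) for a binary outcome
    in the pseudo-population created by the weights."""
    outcome = np.asarray(outcome, dtype=float)
    exposed = np.asarray(exposed, dtype=bool)
    weights = np.asarray(weights, dtype=float)
    risk_exposed = np.average(outcome[exposed], weights=weights[exposed])
    risk_unexposed = np.average(outcome[~exposed], weights=weights[~exposed])
    return risk_exposed - risk_unexposed
```

as a sanity check on the construction: if the propensity score is constant (no confounding modeled), every stabilized weight equals 1 and the weighted risk difference reduces to the crude one.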
we included participants who were incarcerated or died during the study period in the main analysis up until the start of the 6-month follow-up interval during which incarceration or death occurred, censoring them after their final visit preceding death or incarceration. in sensitivity analyses, we instead used the imputed risk behavior outcome for that 6-month interval (and censored them at the start of the following interval), given the possibility of engaging in unmeasured risk behaviors prior to incarceration or death. for all estimates, our interpretation focuses on the point estimate and confidence interval, rather than statistical significance [60] . analyses were conducted using r version 3.4.3 [61] . this study was approved by the ethical review committees at all participating institutions. written informed consent was obtained from all participants. as required by inclusion criteria, all 455 participants were male, hiv-positive, and reported being sexually active and using injection drugs at baseline. the median age of participants was 35 years (interquartile range [iqr]: 30, 39), and half were married or cohabitating (47%) ( table 1 ). one-third had a high school education (34%), and the majority were employed full-time (69%). most participants were newly hiv-diagnosed at baseline (74%), while 15% had been previously diagnosed and were not taking art, and 11% reported being previously diagnosed and currently using art. the median cd4 cell count was 241 cells/µl (iqr: 126, 370). general health was rated as poor by 30%. nearly half reported injecting heroin daily (45%), 18% had a history of overdose, and 67% reported current alcohol use. participants completed between zero and four follow-up study visits (median = 4, iqr: 2, 4) at 6-month intervals over 24 months, with 87% completing at least one follow-up visit. 
at baseline, 44% of participants reported severe depressive symptoms (ces-d ≥ 23), 25% had mild symptoms (16 ≤ ces-d ≤ 22), and 30% had no symptoms (ces-d < 16). one quarter of participants reported sex without a condom in the prior 3 months (24%), with a median of 10 condomless sex acts reported for that period (iqr: 5, 20) . most participants reported sharing injection drug use equipment with injecting partners over the past 3 months (73%); these participants reported a median of 21 sharing acts during that period (iqr: 7, 52). after the baseline visit-when all participants received risk reduction counseling and the majority became newly hiv-diagnosed-sharing injection equipment and condomless sex decreased across trial arms (previously reported in [46] ). however, among the 397 participants attending ≥ 1 follow-up visit, 21% reported sharing injection equipment at ≥ 1 visit, and 7% reported condomless sex at ≥ 1 visit. the severity of depressive symptoms varied over time (supplemental fig. 1 ) with 59% of those attending ≥ 1 follow-up visit reporting severe depressive symptoms at least once. the percentage of participants experiencing competing events increased over time, with 8% incarcerated and 23% deceased at 24 months. in our main analysis, we estimated that severe depressive symptoms (compared to no or mild symptoms) increased the risk of sharing injection equipment by 3.9 percentage points (rd = 3.9%, 95% ci −1.7%, 9.6%) and decreased the risk of condomless sex by 1.8 percentage points (rd = −1.8%, 95% ci −6.4%, 2.8%) in the period three to 6 months later (table 2, fig. 1 ). in the crosssectional analyses, the association between severe depressive symptoms and contemporaneous injection equipment sharing (rd = 6.2%, 95% ci 1.4%, 11.0%) was stronger than the estimated longitudinal effect, while the association with condomless sex was attenuated (rd = −0.7%, 95% ci −4.5%, 3.0%). 
in analyses using three levels of depressive symptoms, there were small decreases in the risk of condomless sex as depressive symptoms increased, although all confidence intervals overlapped substantially ( table 2 , fig. 2 ). for injection equipment sharing, patterns of risk corresponding to the three levels of depressive symptoms differed between the longitudinal effect and the cross-sectional association. in longitudinal analyses, we observed a u-shaped relationship in which the risk of injection equipment sharing in the period three to 6 months later was 12.8% (95% ci 8.1%, 17.6%) among those with no depressive symptoms, 9.2% (95% ci 5.3%, 13.2%) among those with mild symptoms, and 13.8% (95% ci 9.1%, 18.5%) among those with severe symptoms. in contrast, in the cross-sectional analysis, we observed a monotonic increasing relationship in which those with no depressive symptoms had the lowest risk of 8.5% (95% ci 5.3%, 11.8%) while those with mild symptoms had a risk of 15.5% (95% ci 11.1%, 20.0%) and those with severe symptoms had a risk of 17.4% (95% ci 13.1%, 21.8%). we did not find appreciable differences in sensitivity analyses that varied censoring time for participants who were incarcerated or deceased (supplemental fig. 2 ). using longitudinal data and methods for causal inference, we found that severe depressive symptoms increased the risk of sharing injection equipment but not the risk of condomless sex among pwid. to overcome past methodological issues, we used marginal structural models to capture the episodic nature of depression, enforce temporal ordering of depression and transmission risk behaviors, and control time-varying confounding in the analysis. by focusing on pwid living with hiv in vietnam, a population at high risk of ongoing hiv transmission, we aimed to better understand depression as an underlying cause of behaviors associated with transmission. 
in our main analysis of injection equipment sharing in the period three to 6 months after assessment of depression, we found a rd of 3.9% (95% ci −1.7%, 9.6%), comparing participants with severe depressive symptoms to those with mild or no depressive symptoms. this longitudinal effect was only slightly weaker than the corresponding cross-sectional association (rd = 6.2%, 95% ci 1.4%, 11.0%) found in the analysis that did not enforce temporality. the 95% ci of the longitudinal effect (−1.7%, 9.6%) shows that a risk difference ranging from a 1.7 percentage point decrease, a small negative association, to a 9.6 percentage point increase, a substantial positive association, is compatible with the data. given that the overall risk of injection equipment sharing was 10% across follow-up visits, the point estimate of a 3.9% point increase is substantively meaningful. previous research has suggested a possible non-linear relationship between the severity of depressive symptoms and occurrence of sexual risk behaviors, although this literature has focused on msm, not pwid, and findings have been mixed. some studies have found that mild depressive symptoms are associated with higher levels of sexual risk behavior but decreasing risk with severe depressive symptoms [25, 36] ; others have observed increasing risk with increasing severity of depressive symptoms [26] [27] [28] . in contrast, our analysis of condomless sex according to three levels of depressive symptoms suggested slight decreases in condomless sex with increasing severity of depressive symptoms, consistent with our main analysis. participants with depressive symptoms -regardless of severity -may be experiencing fatigue, social isolation, and loss of interest in sex, thereby reducing the risk of engaging in this behavior [62] . 
although all participants reported sex in the 6 months prior to baseline (due to trial eligibility criteria), a loss of interest in sex over 24 months of follow-up, particularly among participants with depressive symptoms, may have contributed to our findings. in contrast to condomless sex, we observed possible nonlinearities in the relationship between depressive symptom severity and risk of sharing injection equipment, which have not been observed previously. prior studies have found an increasing risk of injecting risk behavior with increasing depressive symptom severity [38] or have not differentiated between mild and severe symptoms [35, 37, 39] . we found monotonically increasing risk with increasing depressive symptoms in our cross-sectional analysis, and a u-shaped risk in our longitudinal analysis, where those with mild depressive symptoms had the lowest risk. interestingly, the u-shaped relationship we observed for longitudinal injecting risk is the inverse of some previous findings on sexual risk among msm (where those with mild depressive symptoms had the highest risk) [36] . this may be due to mild depressive symptoms manifesting differently for injecting behavior compared to sexual behavior and inherent differences between pwid and msm populations. depressive symptoms could lead to cognitive distortions, maladaptive coping, and loss of risk aversion [63] [64] [65] , and such symptoms may need to become severe in order to be expressed behaviorally as increased frequency of injection drug use (to treat severe symptoms) and consequently, greater sharing of equipment. although various relationships between depression and hiv transmission risk behaviors have been studied previously, the unique contributions of this study are its focus on pwid living with hiv, a population for whom there is limited data on depression and risk behaviors, and its methodological rigor in inferring causality rather than correlation. 
our modeling approach controlled time-varying confounding and incorporated the episodic nature of depressive symptoms by using longitudinal data from five study visits over 2 years. given that the longitudinal effect enforced temporal ordering of depressive symptoms prior to risk behaviors, we believe that it more closely reflects the causal effect than does the cross-sectional association. however, it is important to consider the trade-off between temporal ordering and etiologic relevance in the context of data limitations particular to this study. separating the measurement of depressive symptoms and risk behaviors by 6 months (with a 3-month "blackout period" in between) was necessitated by the parent trial's data structure. this incomplete interval coverage could have attenuated effect estimates relative to what they might have been if the entire interval were included (that is, if depressive symptoms were more likely to influence risk behaviors in the first 3 months of the follow-up interval). a shorter time interval with more complete data coverage may allow better capture of the effect of episodic depressive symptoms on subsequent risk behavior. inferring causality relies on several key assumptions, which must be evaluated carefully in light of the limitations of this observational study [53, 54] . the assumption of no unmeasured confounding holds that there are no systematic differences between participants with and without depression beyond any differences in variables controlled for in the analysis. although we controlled for a variety of confounders, it is possible that unmeasured confounding biased estimates of the effect of depression on risk behaviors. we also assumed positivity (i.e., participants with and without depressive symptoms were in all confounder-defined subsets of the population) and that models were correctly specified without measurement error for covariates. 
importantly, this study's ascertainment of depression relied on ces-d score categories, and the ces-d is not diagnostic of clinical depression. however, we used a conservative cut-point for severe depressive symptoms with high reliability and validity [50, 51] . there may also have been under-ascertainment of risk behaviors due to social desirability and recall bias. however, participants reported high levels of drug use and had been recruited by former drug users (aware of their injection drug use), indicating a willingness to disclose sensitive behaviors. finally, the consistency assumption holds that there is no meaningful variability in treatment relevant to its effect on the outcome. here, we did not model a specific treatment on depression, and results should only be interpreted as the hypothetical effect of eliminating severe depressive symptoms without specifying the treatment used for elimination. our conclusions are specific to this study population, which was not randomly sampled and may not be representative of all pwid living with hiv. while men who inject drugs drive the hiv epidemic in vietnam, our findings may not be applicable to other groups, such as women or pwid in other regions. however, our findings may be broadly generalizable to other asian and european countries where the hiv epidemic is concentrated among similar groups. we also note that the sample size of this hard-to-reach population was relatively small, which limited our statistical power to detect small differences in risk between depression groups. importantly, the risk behavior outcome in our study does not allow direct prediction of forward hiv transmission risk, as we did not take into account viral suppression status, the frequency of risk acts, or partner susceptibility to hiv. these determinants of transmission will be incorporated into a future mathematical modeling analysis that will explicitly estimate forward transmission events from this study population. 
we found that severe depressive symptoms may perpetuate the risk of sharing injection equipment among pwid living with hiv in vietnam. during the study period (2009-2013), there was very limited access to mental health services for people living with depression in vietnam [66] . however, in recent years, mental health services have become a national health priority, and there is growing attention and funding for increasing local services and availability of depression treatment [66, 67] . screening and treating depressive symptoms among pwid presents an opportunity not only to improve mental health and drug abuse outcomes but also to reduce behaviors associated with hiv transmission risk. funding doctoral training support for sara n. levintow was provided by nida (r36 da045569), niaid (t32 ai070114-10), and viiv healthcare (pre-doctoral fellowship). the parent trial for this study was funded by nida (r01 da022962-01). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. this content is solely the responsibility of the authors and does not necessarily represent the official views of the national institutes of health. conflicts of interest none. ethical approval this research was approved by the ethical review committees at the thai nguyen center for preventive medicine, the johns hopkins bloomberg school of public health, and the university of north carolina at chapel hill gillings school of global public health. all procedures performed were in accordance with the 1964 helsinki declaration and its later amendments or comparable ethical standards. informed consent written informed consent was obtained from all participants. 
global epidemiology of injecting drug use and hiv among people who inject drugs: a systematic review hiv prevention, treatment, and care services for people who inject drugs: a systematic review of global, regional, and national coverage the global hiv epidemics among people who inject drugs the hiv epidemic in eastern europe and central asia drug use as a driver of hiv risks estimating per-act hiv transmission risk a probability model for estimating the force of transmission of hiv infection and its application scaling up hiv prevention efforts targeting people who inject drugs in central asia: a review of key challenges and ways forward global, regional, and country-level coverage of interventions to prevent and manage hiv and hepatitis c among people who inject drugs: a systematic review the perfect storm: incarceration and the high-risk environment perpetuating transmission of hiv, hepatitis c virus, and tuberculosis in eastern europe and central asia prevalence of depressive symptoms and associated factors among people who inject drugs in china factors associated with symptoms of depression among injection drug users receiving antiretroviral treatment in indonesia depression and clinical progression in hiv-infected drug users treated with highly active antiretroviral therapy frequency of and risk factors for depression among participants in the swiss hiv cohort study (shcs) prevalence and predictors of depressive symptoms among hiv-positive men who inject drugs in vietnam longitudinal predictors of depressive symptoms among low income injection drug users depression as an antecedent of frequency of intravenous drug use in an urban, nontreatment sample depression severity and drug injection hiv risk behaviors interrelation between psychiatric disorders and the prevention and treatment of hiv infection psychiatric disorders and drug use among human immunodeficiency virus-infected adults in the united states role of depression, stress, and trauma in hiv disease 
progression depression in hiv infected patients: a review psychiatric illness and virologic response in patients initiating highly active antiretroviral therapy mortality under plausible interventions on antiretroviral treatment and depression in hivinfected women: an application of the parametric g-formula risk factors for hiv infection among men who have sex with men depression, compulsive sexual behavior, and sexual risk-taking among urban young gay and bisexual men: the p18 cohort study depression and oral ftc/tdf pre-exposure prophylaxis (prep) among men and transgender women who have sex with men (msm/tgw) depression, substance use and hiv risk in a probability sample of men who have sex with men a pilot study examining depressive symptoms, internet use, and sexual risk behaviour among asian men who have sex with men mortality and hiv transmission among male vietnamese injection drug users regional differences between people who inject drugs in an hiv prevention trial integrating treatment and prevention (hptn 074): a baseline analysis are negative affective states associated with hiv sexual risk behaviors? 
a meta-analytic review health psychol correlates of depression among hiv-positive women and men who inject drugs depression and sexual risk behaviours among people who inject drugs: a gender-based analysis people who inject drugs and have mood disorders-a brief assessment of health risk behaviors moderate levels of depression predict sexual transmission risk in hiv-infected msm: a longitudinal analysis of data from six sites involved in a prevention for positives study psychiatric correlates of injection risk behavior among young people who inject drugs association of depression, anxiety, and suicidal ideation with high-risk behaviors among men who inject drugs in delhi intimate relationships and patterns of drug and sexual risk behaviors among people who inject drugs in kazakhstan: a latent class analysis associations of depression and anxiety symptoms with sexual behaviour in women and heterosexual men attending sexual health clinics: a cross-sectional study the control of confounding by intermediate variables measuring depression over time or not? lack of unidimensionality and longitudinal measurement invariance in four common rating scales of depression marginal structural models and causal inference in epidemiology effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome or death using marginal structural models marginal structural models for analyzing causal effects of time-dependent treatments: an application in perinatal epidemiology efficacy of a multi-level intervention to reduce injecting and sexual risk behaviors among hiv-infected people who inject drugs in vietnam: a four-arm randomized controlled trial ministry of health of vietnam. results from the hiv/sti integrated biological and behavioral surveillance (ibbs) in vietnam, round ii socialist republic of viet nam. 
vietnam aids response progress report 2014, following up the 2011 political declaration on hiv/ aids, reporting period thai nguyen provincial aids center and the division of social evils control and prevention, department of labor a self-report depression scale for research in the general population screening value of the center for epidemiologic studies-depression scale among people living with hiv/aids in ho chi minh city, vietnam: a validation study changes in depressive symptoms and correlates in hiv people at an hoa clinic in ho chi minh city vietnam constructing inverse probability weights for marginal structural models estimating causal effects from epidemiological data the r package geepack for generalized estimating equations estimating equations for association structures yet another package for generalized estimating equations. r-news multiple imputation for nonresponse in surveys mice: multivariate imputation by chained equations in r scientists rise up against statistical significance r: a language and environment for statistical computing depressive symptoms, social support, and personal health behaviors in young men and women sex, drugs and escape: a psychological model of hiv-risk sexual behaviours co-occurrence of treatment nonadherence and continued hiv transmission risk behaviors: implications for positive prevention interventions testing a social-cognitive model of hiv transmission risk behaviors in hiv-infected msm with and without depression mental health in vietnam: burden of disease and availability of services barriers and facilitators to the integration of depression services in primary care in vietnam: a mixed methods study key: cord-012932-alxtoaq9 authors: smerecnik, chris m. 
r.; mesters, ilse; verweij, eline; de vries, nanne k.; de vries, hein title: a systematic review of the impact of genetic counseling on risk perception accuracy date: 2009-06-01 journal: j genet couns doi: 10.1007/s10897-008-9210-z sha: doc_id: 12932 cord_uid: alxtoaq9 this review presents an overview of the impact of genetic counseling on risk perception accuracy in papers published between january 2000 and february 2007. the results suggest that genetic counseling may have a positive impact on risk perception accuracy, though some studies observed no impact at all, or only for low-risk participants. several implications for future research can be deduced. first, future researchers should link risk perception changes to objective risk estimates, define risk perception accuracy as the correct counseled risk estimate, and report both the proportion of individuals who correctly estimate their risk and the average overestimation of the risk. second, as the descriptions of the counseling sessions were generally poor, future research should include more detailed description of these sessions and link their content to risk perception outcomes to allow interpretation of the results. finally, the effect of genetic counseling should be examined for a wider variety of hereditary conditions. genetic counselors should provide the necessary context in which counselees can understand risk information, use both verbal and numerical risk estimates to communicate personal risk information, and use visual aids when communicating numerical risk information. recent advances in genetic research have enabled us to identify individuals at risk for a wide variety of medical conditions due to their genetic makeup (collins et al. 2003) . at the same time, these advances have created the need to educate and guide these individuals (lerman et al. 2002) . 
informing them of their hereditary risk and of the options for how to deal with this risk is the primary aim of genetic services (wang et al. 2004) . genetic services involve both genetic counseling and genetic testing; of these, genetic counseling in particular aims to enable at-risk individuals to accurately identify, understand and adaptively cope with their genetic risk (biesecker 2001; pilnick & dingwall 2001) . the national society of genetic counselors' (nsgc) task force defines genetic counseling as "the process of helping people understand and adapt to medical, psychological, and familial applications of genetic contributions to disease" (resta et al. 2006, p. 79) . as such, genetic counselors are faced with three important tasks: (1) to interpret family and medical histories to enable risk assessment, (2) to educate counselees about issues related to heredity, preventive options (e.g., genetic testing), and personal risk, and (3) to facilitate informed decisions and adaptation to personal risk (cf. trepanier et al. 2004) . the latter task may be considered the "core" (i.e., the desired outcome) of genetic counseling, with the former tasks in service of its fulfillment. informed decision making and adaptation to personal risk, however, are abstract concepts that cannot easily be assessed. as such, several measures have been developed to assess the efficacy of genetic counseling. kasparian, wakefield and meiser (2007) summarized 23 available measurement scales which include satisfaction, knowledge, psychological adjustment, and risk perception measures. although each of these measures significantly contributes to our understanding of the effect of genetic counseling, risk perception measures (and especially risk perception accuracy) may be regarded as one central concept. 
indeed, several influential models of health behavior, such as the health belief model (janz & becker 1984) , the protection motivation theory (rogers 1983) , and the extended parallel process model (witte 1992) , posit that adequate risk perception acts as a motivator to take (preventive) action and, as such, is a prerequisite of preventive behavior. moreover, risk perception and risk perception accuracy have been shown to be related to several other important outcomes of genetic counseling, such as coping (nordin et al. 2002) , worry (hopwood et al. 2001) , and anxiety . the effect of genetic counseling on risk perception has been heavily examined during the past two decades, from early research into reproductive genetic counseling (e.g., humphreys & berkeley 1987) to recent studies into genetic predispositions to cancer (e.g., bjorvatn et al. 2007) . while these studies are valuable in their own right, few have investigated the effect of genetic counseling on risk perception accuracy. indeed, to facilitate informed decision making and adaptation to personal risk, counselees must have accurate risk perceptions. in their 2002 meta-analysis, meiser and halliday (2002) identified only six studies that assessed the effects of genetic counseling on risk perception accuracy. their meta-analysis showed that individuals at risk for breast cancer significantly perceive their own risk more accurately after genetic counseling. in particular, they observed an average increase of 24.3% of the participants who accurately estimated their personal risk after counseling. a systematic review by butow and colleagues (2003) 1 year later confirmed the positive impact of genetic counseling in breast cancer risk perception accuracy, although 22-50% continued to overestimate their risk even after counseling. research thus suggests that genetic counseling may indeed improve risk perception accuracy in some individuals. however, meiser and halliday (2002) and butow et al. 
(2003) only included studies examining breast cancer risk. to date, there is no systematic review or meta-analysis which examines the effect of genetic counseling on perception of genetic risks in general. thus, the purpose of the present review is twofold: (1) to provide an updated overview of the impact of genetic counseling on risk perception accuracy in papers published between january 2000 and february 2007, and (2) to extend the results of meiser and halliday's (2002) meta-analysis and butow et al.'s (2003) systematic review to other genetic conditions. we searched the pubmed, embase, web of science, eric and psycinfo databases. we also used the search engine google scholar to find papers and grey literature (literature not published in a journal, e.g., in press or under review, but nevertheless available on the internet) on risk perception accuracy and genetic counseling. to this end, we used the search term "(risk perception or perceived risk or perceived susceptibility or susceptibility estimate or risk estimate) and (genetic counsel* or genetic risk or familial risk or genetic predisposition)." if available in the databases, we used the standardized, subject-related indexing terms of the concepts in the search term. we also searched several journals manually, including the journal of genetic counseling. the selection procedure was performed independently by two reviewers. the review process then consisted of three phases. during the first phase, papers were reviewed based on title only. in the second phase, the reviewers examined the abstracts of papers that could not be definitively included or excluded based on their title. papers thought to be relevant to the review based on their abstracts were included; those judged irrelevant were excluded. in the third phase, the reviewers examined the papers included during the previous two phases for content.
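the boolean search term above can also be composed programmatically, which is convenient when the same query has to be issued against several databases. the following sketch is illustrative only: the helper names are invented, and only the two concept groups are taken from the review.

```python
# Sketch of composing the review's boolean search term from its two
# concept groups. Group contents are taken verbatim from the review;
# the function names are illustrative, not part of any database API.

RISK_TERMS = [
    "risk perception", "perceived risk", "perceived susceptibility",
    "susceptibility estimate", "risk estimate",
]
GENETIC_TERMS = [
    "genetic counsel*", "genetic risk", "familial risk",
    "genetic predisposition",
]

def or_group(terms):
    """Join the terms of one concept into a parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

def build_query(*groups):
    """AND the concept groups together into one search string."""
    return " AND ".join(or_group(g) for g in groups)

query = build_query(RISK_TERMS, GENETIC_TERMS)
print(query)
```

the same two-group structure (one OR group per concept, joined by AND) also maps directly onto each database's standardized indexing terms where those are available.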
as recommended by the cochrane guidelines (higgins & green 2006), we erred on the safe side during the whole selection process; if in doubt, we included the paper for more extensive review in the subsequent phase. the following inclusion and exclusion criteria were used to determine whether papers were eligible for the review.

1. studies should be published after 2000 (i.e., the upper limit of the 2002 meiser and halliday meta-analysis, since one goal of this review was to provide an update of that analysis); studies published before 2000 were excluded (n=8; e.g., evans et al. 1994).
2. studies should focus on genetic risk perception; studies which did not (n=9; e.g., clementi et al. 2006) or which discussed the effect of genetic mutations, prevalence, incidence, morbidity, or mortality only were excluded (n=0).
3. studies should examine the effect of genetic counseling on risk perception accuracy; that is, they should explicitly link perceived risk to objective risk estimates to examine whether the two more closely align after (rather than before) counseling. studies were excluded if they examined changes in risk perception without linking them to some objective risk estimate (n=19; e.g., burke et al. 2000), if they investigated risk perception as a determinant of genetic counseling participation (n=6; e.g., collins et al. 2000), or if they focused on the effectiveness of decision aids as compared to standard genetic counseling (n=3; e.g., warner et al. 2003).
4. to accurately assess whether genetic counseling affected risk perception accuracy, studies should employ either a prospective or a randomized controlled trial design. studies using other designs were excluded (n=12; e.g., cull et al. 2001).
5. risk perception accuracy should be assessed as a quantitative outcome measure; studies were excluded if they assessed risk perception as a qualitative outcome measure (n=0).
6. studies should focus on at-risk individuals; those focusing on intermediaries (e.g., genetic counselors, nurses) were excluded (n=0).
7. studies should describe original research published in a peer-reviewed journal in english. studies describing secondary data or reviewing other studies, editorials, commentaries, book reviews, bibliographies, resources or policy documents were excluded (n=5; e.g., palmero et al. 2004) as they provided too little detail.

risk perception outcomes were abstracted by two authors independently, using standardized extraction forms. in the event of disagreement, the authors discussed the particular paper until they reached consensus. we abstracted the characteristics of the study, the participants and the genetic counseling session, as well as the results and quality of the study (cf. higgins & green 2006). figure 1 presents the flowchart of the study selection process. from the initial sample of 3,798 eligible papers from the database searches and the 62 unique papers from the google scholar, journal, reference list and key author searches, a total of 82 papers were eligible for extensive review. of these, 19 papers were included in the review. table 1 lists the included papers and information about the study design, genetic counseling session content, criteria for risk perception accuracy, measurement time points, and finally the risk perception outcomes. given the heterogeneity in the studies, we decided against pooling the studies in a meta-analysis. concerning the content and quality of the genetic counseling sessions, four studies mentioned using a genetic counseling protocol (bjorvatn et al. 2007; bowen et al. 2006; kaiser et al. 2004; van dijk et al. 2003). two mentioned using a standardized counseling script (codori et al. 2005; tercyak et al. 2001). an additional three used audiotapes as a content check of the counseling session (hopwood et al. 2003; kelly et al.
2003), while the remaining twelve did not mention the use of any protocol, standardized script, or audio- or videotapes as a content check. in-depth analyses of the content (see table 1) revealed that a majority of the studies described counseling sessions with similar content. however, four studies did not provide a description of the counseling session at all (hopwood et al. 2004; huiart et al. 2002; lidén et al. 2003; nordin et al. 2002). comparing the descriptions of the counseling sessions of the remaining fifteen studies to the recommendations of the nsgc task force, we observed that only six of these mentioned the first task, "interpretation of family and medical histories to enable risk assessment" (bjorvatn et al. 2007; bowen et al. 2006; hopwood et al. 2003; kelly et al. 2003; pieterse et al. 2006; tercyak et al. 2001; van dijk et al. 2003). likewise, only five studies explicitly mentioned performing the second task, "educate counselees about issues related to heredity and treatment and preventive options" (bjorvatn et al. 2007; codori et al. 2005; kelly et al. 2003; van dijk et al. 2003). although judging whether counselors "facilitated decision making and adaptation to personal risk" is difficult, we did observe six studies claiming to advise counselees on surveillance (bjorvatn et al. 2007; kaiser et al. 2004; rimes et al. 2006; rothemund et al. 2001; tercyak et al. 2001), which may be regarded as facilitating informed decisions. the included studies used two different types of measures to determine the effect of genetic counseling on risk perception accuracy: several studies reported changes in the proportion of individuals who accurately perceive their risk, while others reported the degree of overestimation or underestimation as a measure of risk perception accuracy. where available, we report both types of measures (see table 1). overall, the studies indicate that genetic counseling has a positive impact on risk perception accuracy (cf.
table 1). however, some studies observed no effect on risk perception accuracy at all, or only for low-risk individuals (cf. table 1). the studies assessing the proportion of individuals who accurately estimated their risk (see table 1, subsection i) showed an average increase of approximately 25% (range: 2-55%) in counselees who correctly estimated their risk after counseling; from an average of 42% pre-counseling to an average of 58% post-counseling. however, on average 25% (range: 5-76%) continued to overestimate and 19.5% (range: 7-55%) continued to underestimate their risk even after counseling. other studies, which assessed changes in the average overestimation of participants' perceived risk (see table 1, subsection ii), still observed an average overestimation of approximately 18% (range: 6-40%) after counseling, in comparison with 25% (range: 11.5-42%) before counseling. across the studies, the average decrease in overestimation was approximately 8%. linking the outcome (i.e., risk perception accuracy) to the content of the counseling session (i.e., whether counselors performed the tasks as recommended by the nsgc task force), we observed that the studies in which the counselor gave information about family history and heredity as well as personal risk estimates positively influenced risk perception accuracy (bjorvatn et al. 2007; bowen et al. 2006; hopwood et al. 2003; kelly et al. 2003; tercyak et al. 2001), although this improvement was not significant in two studies (pieterse et al. 2006; van dijk et al. 2003). in contrast, the studies that did not mention giving counselees this information observed no significant improvement of risk perception accuracy as a result of genetic counseling (codori et al. 2005; kent et al. 2000; rothemund et al. 2001), with the exception of one study (kaiser et al. 2004). the results for the other two tasks were mixed.
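the two outcome measures just described (the proportion of counselees who correctly estimate their risk, and the average over- or underestimation) reduce to simple arithmetic over paired perceived and counseled risk figures. a minimal sketch follows; the function names and the example risk figures are invented for illustration and are not taken from any reviewed study.

```python
# Illustrative computation of the two accuracy measures used across
# the reviewed studies. All numbers below are invented example data.

def proportion_accurate(perceived, counseled):
    """Share of counselees whose perceived risk (%) equals the
    counseled (objective) risk estimate."""
    hits = sum(1 for p, c in zip(perceived, counseled) if p == c)
    return hits / len(perceived)

def mean_overestimation(perceived, counseled):
    """Average amount (percentage points) by which perceived risk
    exceeds the counseled risk, across all counselees."""
    return sum(p - c for p, c in zip(perceived, counseled)) / len(perceived)

counseled = [10, 10, 20, 20]   # counseled risk estimates (%)
pre = [40, 10, 50, 30]         # perceived risk before counseling (%)
post = [20, 10, 20, 25]        # perceived risk after counseling (%)

print(proportion_accurate(pre, counseled))   # 0.25
print(proportion_accurate(post, counseled))  # 0.5
print(mean_overestimation(pre, counseled))   # 17.5
print(mean_overestimation(post, counseled))  # 3.75
```

reporting both quantities, as recommended below, captures complementary information: the proportion measure can stay flat while the residual overestimation still shrinks, and vice versa.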
while some studies that educated counselees about heredity observed a positive impact on risk perception accuracy (bjorvatn et al. 2007; kelly et al. 2003; van dijk et al. 2003), others did not (codori et al. 2005). similar results were observed for the third task of facilitating informed decision making and adaptation to personal risk. three out of the six studies identified as performing this task observed a positive impact of genetic counseling on risk perception accuracy (bjorvatn et al. 2007; rimes et al. 2006; tercyak et al. 2001), while the other three did not (kaiser et al. 2004; rothemund et al. 2001). the purposes of this review were (1) to provide an updated overview of the impact of genetic counseling on risk perception accuracy from january 2000 until february 2007, and (2) to extend the meiser and halliday (2002) meta-analysis and the butow et al. (2003) systematic review to other genetic conditions. overall, the studies showed that an increased proportion of individuals correctly perceived their risk after counseling rather than before, and those who did not had smaller deviations from their objective risk than before counseling. these positive effects were sustained even at follow-up 1 year later. some studies, however, observed no positive effect of genetic counseling, or only for low-risk individuals. these results are in line with those reported in the 2002 meiser and halliday meta-analysis and the 2003 systematic review conducted by butow and colleagues. the research in the present review may shed some light on why some studies observe positive effects of genetic counseling on risk perception accuracy and others do not. first, one study (codori et al. 2005) that observed no effect explicitly mentioned that personal risk information was not communicated during the relevant counseling session.
second, the provision of information about the role of family history, as recommended by the nsgc task force, may provide an appropriate context in which counselees can make sense of the risk information (cf. codori et al. 2005) , resulting in accurate risk perceptions. third, some counselors may go to great lengths to explain risk information in terms the counselees can understand (cf. kent et al. 2000) . unfortunately, research has shown that verbal and numerical risk estimates often do not coincide. that is, verbal risk information results in more variability in risk perception than does numerical information (gurmankin et al. 2004b ). bjorvatn et al. (2007) , for example, observed incongruence between numerical and verbal measures of risk perception. similarly, hopwood et al. (2003) observed that counselees included a wide range of numerical risk estimates within the same verbal category. the significance of this is discussed below, where we present the implications of our study for clinical practice. finally, several studies (pieterse et al. 2006; rothemund et al. 2001 ) that observed no effect of genetic counseling on risk perception accuracy had small sample sizes, and thus may not have observed a significant effect due to power limitations. the present review has several important implications for future research. first, we selected a large number of studies assessing risk perception changes as a result of genetic counseling. however, we had to exclude 19 of these studies because they did not explicitly link risk perception to an objective risk figure. assuming that researchers are aware of these objective risk figures, future studies should link risk perception changes to objective risk figures to assess changes in risk perception accuracy. a second implication concerns the definition of risk perception accuracy, which differs between studies. 
for instance, in several studies accurate risk perception is defined as falling within a certain category (e.g., bjorvatn et al. 2007; kelly et al. 2003; lidén et al. 2003) or within 50% of the counseled risk (e.g., pieterse et al. 2006; rothemund et al. 2001) , while the majority define it as the correct counseled risk estimate (e.g., bowen et al. 2006; hopwood et al. 2003; tercyak et al. 2001 ). additionally, the reviewed studies based the counseled risk estimate on different methods, such as family history assessment (huiart et al. 2002) , gail's score (bowen et al. 2006) , or the brcapro procedure (kelly et al. 2003) . these issues reduce our ability to compare the results of the studies, thereby lessening their value. future researchers should define risk perception accuracy as correct counseled risk, and base their risk estimate on generally accepted and applied methods to allow for better interpretation of the results and comparison between studies. a third, related issue concerns the type of outcome measure used: several studies report changes in the proportion of individuals who correctly perceive their risk, while others report the degree of overestimation or underestimation as a measure of risk perception accuracy. researchers are advised to include both measures in their studies, as both provide valuable information about the effect of genetic counseling on risk perception accuracy. further, we observed that the quality of the genetic counseling descriptions (in those descriptions that were present) was poor. although the counseling sessions were labeled as standardized, they were described in general terms, such as "discussion about the risk" and "information was given about how hereditary factors contribute to disease." these general descriptions leave room for substantial differences between counseling sessions. 
this is especially problematic given that perceptions of genetic risks before genetic counseling can determine the content of the counseling session (julian-reynier et al. 1995), which tends to alter patient outcomes. differences in the quality of the counseling session content may well explain the fact that not all studies in the present review observed a positive effect on risk perception accuracy. future studies should therefore try to link the content of the counseling session to risk perception to determine which feature of the session actually contributes to improved risk perception accuracy (cf. pieterse et al. 2006, or shiloh et al. 2006). the present review provides some insight into how the content of the counseling session relates to risk perception accuracy. indeed, the provision of information on the role of family history was observed to positively impact risk perception accuracy, perhaps because it creates a context in which the counselee can understand the information. additionally, forcing numerical risk estimates to fit lay terms to aid counselees' understanding may lead to inaccurate risk perceptions (kent et al. 2000). a possible avenue for further research may be to link effectiveness to certain sociodemographic variables. we could then examine the influence of known psychological differences between certain groups, which is a more complex process and should thus occur later in time. by associating these psychological differences with the effectiveness of genetic counseling, we may be able to identify the processes responsible for the positive effect of genetic counseling on risk perception accuracy. knowledge of such processes will enable us to match the session's content to these processes and thus to increase the session's effectiveness. finally, we observed a relative lack of diversity in research on genetic counseling and genetic test result disclosure in terms of the genetic disorder under consideration.
although genetic counseling and testing can be effective for a variety of disorders (biesecker 2001; lerman et al. 2002; pilnick & dingwall 2001) , most recent studies focus on their impact on cancer risk perception, particularly breast cancer. although genetic counseling on cancer has been shown to positively affect risk perception accuracy, this does not guarantee it will do the same for other genetic conditions. extensive research is needed to assess whether genetic counseling also effectively enhances risk perceptions for other genetic predispositions. based on the results, we have formulated some implications for practice. first, in accordance with the recommendations of the nsgc task force, we again strongly urge genetic counselors to discuss the role of family history and perform a family history assessment. we suggest that this information is an important factor in accurate risk perception because it may provide the necessary context in which counselees can understand the risk information. indeed, the results seem to suggest that the provision of such information is positively related to risk perception accuracy. while this implication may seem redundant as it repeats the earlier recommendations by the nsgc task force, we nonetheless repeat it here since several studies in this review did not mention communicating this information to the counselee (codori et al. 2005; kaiser et al. 2004; kent et al. 2000; rothemund et al. 2001) . second, while explaining risk information in lay terms seems to be a useful strategy to help counselees to better understand their risk (cf. trepanier et al. 2004) , the one study that explicitly mentioned doing so did not observe a significant effect on risk perception accuracy (kent et al. 2000) . moreover, there appears to be incongruency between verbal and numerical risk estimates (e.g., bjorvatn et al. 2007; hopwood et al. 2003) . 
both types of risk estimates, however, possess qualities that would make them especially suited for counseling. compared to verbal risk estimates, numerical risk estimates have been shown to increase trust in (gurmankin et al. 2004a ) and satisfaction with (berry et al. 2004 ) the information. on the other hand, individuals have been shown to more readily use verbal information when describing their risk to others (erev & cohen 1990) and when deciding on treatment (teigen & brun 2003) . we therefore advise genetic counselors to present numerical risk estimates first, as they are accurate, objective information. the patient may then be asked what that risk estimate means to him or her. the patient's verbal response will provide an opportunity for further discussion of the meaning and impact of the risk information. genetic counselors should, however, be aware of the disadvantages of verbal information in accurately communicating risk information. a third, related implication concerns the presentation of numerical risk information. research has shown that visual presentation of risk information (e.g., odds or percentages) may be better understood than written presentation formats. indeed, there seems to be general agreement that graphical formats, in comparison with textual information, are better able to accurately communicate risk information (schapira et al. 2001; timmermans et al. 2004 ) although contradictory evidence has also been published (parrot et al. 2005 ). furthermore, graphical information seems to have a larger impact on risk-avoiding behavior than textual information (chua et al. 2006) . we therefore advise genetic counselors to use visual aids when communicating numerical risk information (cf. tercyak et al. 2001) . overall, this review suggests that genetic counseling may have a positive impact on risk perception accuracy. it has also resulted in several implications for future research. 
first, future researchers should link risk perception changes to objective risk estimates to assess the effect of genetic counseling on risk perception accuracy. researchers are advised to define risk perception accuracy as the correct counseled risk estimate instead of falling within a certain percentage of the counseled risk. additionally, they should report both the proportion of individuals who correctly estimate their risk and the average overestimation of risk. second, as the descriptions of the counseling sessions were generally poor, future research should include more detailed descriptions of these sessions, and link their content to risk perception outcomes to enable interpretation of the results. finally, the effect of genetic counseling should be examined for a wider variety of hereditary conditions. genetic counselors are advised to discuss the role of family history and perform a family history assessment to provide the necessary context in which counselees can understand the risk information. they should also use both verbal and numerical risk estimates to communicate personal risk information, and use visual aids when communicating numerical risk information. 
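the competing definitions of risk perception accuracy discussed above (the exact counseled risk, within 50% of the counseled risk, or the same risk category) can classify the very same counselee differently, which is one reason the reviewed studies are hard to compare. an illustrative sketch follows; the 50% tolerance mirrors the definition used in some reviewed studies, while the category cut-offs and the example counselee are invented.

```python
# Three definitions of "accurate" risk perception applied to one
# invented counselee. Category cut-offs (low < 15%, moderate 15-30%,
# high > 30%) are illustrative assumptions, not from the review.

def exact(perceived, counseled):
    """Accurate only if perceived risk equals the counseled risk."""
    return perceived == counseled

def within_50_percent(perceived, counseled):
    """Accurate if perceived risk lies within 50% of the counseled
    risk (e.g., 10-30% counts as accurate for a counseled 20%)."""
    return abs(perceived - counseled) <= 0.5 * counseled

def same_category(perceived, counseled, cutoffs=(15, 30)):
    """Accurate if both risks fall into the same verbal band."""
    def band(risk):
        return sum(risk >= c for c in cutoffs)
    return band(perceived) == band(counseled)

# a counselee told a 20% risk who afterwards reports 28%:
print(exact(28, 20))              # False
print(within_50_percent(28, 20))  # True
print(same_category(28, 20))      # True
```

under the strict definition this counselee still misperceives her risk, while under the two looser definitions she counts as accurate; studies using different definitions therefore report systematically different accuracy rates for identical data.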
over the counter medicines and the need for immediate action: a further evaluation of european commission recommended wordings for communicating risks
goals of genetic counseling
risk perception, worry and satisfaction related to genetic counseling for hereditary cancer
effects of counseling ashkenazi jewish women about breast cancer risk
genetic counseling for women with an intermediate family history of breast cancer
psychological outcomes and risk perception after genetic testing and counselling in breast cancer: a systematic review
risk avoidance: graphs versus numbers
pregnancy outcome after genetic counselling for prenatal diagnosis of unexpected chromosomal anomaly
genetic counseling outcomes: perceived risk and distress after counseling for hereditary colorectal cancer
a vision for the future of genomics research
cancer worries, risk perceptions and associations with interest in dna testing and clinic satisfaction in a familial colorectal cancer clinic
cancer risk perceptions and distress among women attending a familial ovarian cancer clinic
verbal versus numerical probabilities: efficiency, biases, and the preference paradox
the impact of genetic counseling on risk perception in women with a family history of breast cancer
the effect of numerical statements of risk on trust and comfort with hypothetical physician risk communication
intended message versus message received in hypothetical physician risk communications: exploring the gap
cochrane handbook for systematic reviews of interventions 4.2.6
do women understand the odds? risk perceptions and recall of risk information in women with a family history of breast cancer
risk perception and cancer worry: an exploratory study of the impact of genetic risk counselling in women with a family history of breast cancer
a randomised comparison of uk genetic risk counselling services for familial cancer: psychosocial outcomes
effects of genetic consultation on perception of a family risk of breast/ovarian cancer and determinants of inaccurate perception after the consultation
representing risks: supporting genetic counseling
the health belief model: a decade later
risk perception, anxiety and attitudes towards predictive testing after cancer genetic consultations
psychological responses to prenatal nts counseling and the uptake of invasive testing in women of advanced maternal age
assessment of psychological outcomes in genetic counseling research
subjective and objective risks of carrying a brca 1/2 mutation in individuals of ashkenazi jewish descent
the relationship between perceived risk, thought intrusiveness and emotional well-being in women receiving counselling for breast cancer risk in a family history clinic
genetic testing: psychological aspects and implications
genetic counselling for cancer and risk perception
communication and information-giving in high-risk breast cancer consultations: influence on patient outcomes
risk perceptions and knowledge of breast cancer genetics in women at increased risk of developing hereditary breast cancer
long-term outcomes of genetic counseling in women at increased risk of developing hereditary breast cancer
what is the impact of genetic counseling in women at increased risk of developing hereditary breast cancer? a meta-analytic review
coping style, psychological distress, risk perception, and satisfaction in subjects attending genetic counselling for hereditary cancer
genetic counseling and cancer risk perception in brazilian patients at-risk for hereditary breast and ovarian cancer
risk comprehension and judgements of statistical evidentiary appeals. when a picture is not worth a thousand words
risk communication in completed series of breast cancer genetic counseling visits
research directions in genetic counselling: a review of the literature
a new definition of genetic counseling: national society of genetic counselors' task force report
applying cognitive-behavioral models of health anxiety in a cancer genetics service
cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation
perception of risk, anxiety, and health behaviors in women at high risk for breast cancer
frequency or probability? a qualitative study of risk communication formats used in health care
the facilitating role of information provided in genetic counseling for counselees' decisions
verbal probabilities: a question of frame
psychological response to prenatal genetic counseling and amniocentesis
different formats for communicating surgical risks to patients and the effect on choice of treatment
genetic counselling and the intention to undergo prophylactic mastectomy: effects of a breast cancer risk assessment
assessment of genetic testing and related counseling services: current research and future directions
educating women about breast cancer. an intervention for women with a family history of breast cancer
putting the fear back into fear appeals: the extended parallel process model (eppm)

acknowledgements this study was financially supported by maastricht university and performed at the school for public health and primary care (caphri).
key: cord-035287-l6trtvil authors: kanno, takeshi; moayyedi, paul title: who needs gastroprotection in 2020? date: 2020-11-11 journal: curr treat options gastroenterol doi: 10.1007/s11938-020-00316-9 sha: doc_id: 35287 cord_uid: l6trtvil purpose of review: peptic ulcer disease (pud) is a recognized complication of non-steroidal anti-inflammatory drugs (nsaids). stress ulcers are a concern for intensive care unit (icu) patients; pud is also an issue for patients taking anticoagulation. helicobacter pylori test and treat is an option for patients starting nsaid therapy, and proton pump inhibitors (ppis) may reduce pud in nsaid patients and other high-risk groups. recent findings: there are a large number of trials that demonstrate that helicobacter pylori eradication reduces pud in nsaid patients. ppi is also effective at reducing pud in this group and is also effective in icu patients and those on anticoagulants. the effect is too modest for ppi to be recommended in everyone, and more research is needed as to which groups would benefit the most. increasing age, past history of pud, and comorbidity are the most important risk factors. summary: h. pylori test and treat should be offered to older patients starting nsaids, while ppis should be prescribed to patients that are at high risk of developing pud and at risk of dying from pud complications. upper gastrointestinal (gi) bleeding is a major health problem, and mortality from this problem has remained relatively unchanged for the last 50 years [1] [2] [3] .
the apparent stability of a 5-12% in-patient 30-day mortality rate hides significant changes in the epidemiology and management of the condition. major advances have been made in the management of upper gastrointestinal bleeding, including the routine use of proton pump inhibitor therapy after a peptic ulcer bleed, which improves outcomes and probably reduces mortality [4] . endoscopic therapy also improves the outcomes of peptic ulcer and variceal bleeding [5] . the age-adjusted rates of peptic ulcer (pu) bleeding have fallen globally over the last 20 years largely due to the falling prevalence of helicobacter pylori (h. pylori) [6, 7] , but a modest contribution may relate to the increasing use of acid suppression in the community [8] . these positive factors have been balanced by the fluctuating use of non-steroidal anti-inflammatory drugs (nsaids) [9] and by the increased use of antiplatelet [10] and anticoagulant therapy [11] over time. furthermore, the absolute numbers of patients with peptic ulcer bleeding are not falling as dramatically as might be expected due to populations living longer with more comorbidities, which are a major risk factor for both pu bleeding incidence [12] and death [13] . given that pu bleeding remains an important problem, it is helpful to develop strategies that will prevent this complication, particularly as antiplatelet and anticoagulation therapy continue to rise [14] . there have been recent guidelines [15, 16] on nonvariceal upper gastrointestinal bleeding, but these have predominantly focused on management of the problem when it occurs rather than preventing the complication from happening in the first place. the main approaches to prevent peptic ulcer bleeding are h. pylori screening and treatment of those that are positive, long-term proton pump inhibitor (ppi) therapy, or h2 receptor antagonist (h2ra) therapy. h2ra therapy is less effective than ppi [17] and will not be considered further in this review.
in those taking nsaids, there are the additional approaches of replacing them with cyclooxygenase-2 (cox-2) inhibitors or adding prostaglandin analogues. none of these strategies will be cost-effective if used in the general population, and most guidelines would recommend that these interventions should only be used in high-risk groups [18] . this article will therefore evaluate risk factors for pu complications such as age, nsaid use, concomitant antiplatelet therapy, anticoagulant therapy, patients admitted to intensive care, and those with severe comorbidities [19] . we will then summarize the evidence for the efficacy of h. pylori eradication, ppi therapy, cox-2 inhibition, and prostaglandin analogues in preventing peptic ulcer bleeding and focus on the high-risk groups in which these approaches could be recommended. the most important determinant of population-attributable risk for pu complications is increasing age. the risk of pu complication is 10-fold higher in those over the age of 60 years compared to younger age groups [20] . the vast majority of deaths from pu complications also occur in older age groups, with a 50-fold increase in mortality in those over 60 compared to those less than 60 years old [20] . while mortality from pu complications in those under the age of 60 years is very rare, this cut off is somewhat arbitrary. the risk of pu complications is still modest in a 60-year-old but steadily increases with advancing age, with a roughly two-fold increase in incidence with every decade [21] . many risk factors increase with age, and it is difficult to evaluate age separately from other risk factors such as increasing prevalence of h. pylori, polypharmacy, and comorbidity. nevertheless, it is likely that age is an important independent risk factor for pu complications. the message for the clinician is that gastroprotection is unlikely to be cost-effective in younger age groups and should mainly be considered in those over the age of 60 years.
in those over the age of 60, the threshold to offer gastroprotection should decrease as age increases, with a particular consideration given to those over the age of 80 years [21] . the potential for nsaids to cause peptic ulcer disease is well known. the analgesic effect of nsaids is mediated through reducing prostaglandin synthesis by inhibiting cyclooxygenase (cox) enzymes. there are two cox isoenzymes; cox-1 is present in most cells whereas cox-2 is present in only a few tissues and is induced by inflammation [22] . the gastrointestinal toxicity of nsaids is mediated by cox-1, and the reduction in gi prostaglandin caused by this isoform leads to loss of cytoprotection and increased risk of peptic ulceration. all traditional nsaids have a mixture of cox-1 and cox-2 inhibitor activity, but the proportions differ, and this is the main reason their gastrointestinal toxicity also varies [23] . the least toxic are ibuprofen and diclofenac with relative risks (rr) of around two, followed by naproxen with an rr of four, and the most toxic are piroxicam and ketoprofen with an rr of 8 for the development of peptic ulcer disease [24, 25] . low-dose acetylsalicylic acid (asa) also has an increased risk of peptic ulcer complications with an rr of approximately 1.5 [26] . a modeling study [27] from rct and cohort study data suggested that 1:1200 to 1:2000 chronic nsaid users will die from peptic ulcer complications attributable to the drug. adenosine diphosphate-receptor inhibitors such as clopidogrel are typically used after acute coronary syndromes and following percutaneous coronary stenting as they reduce the risk of future coronary events at least over the next year [28] . the seminal study [28] that reported the benefit of dual antiplatelet therapy with clopidogrel and asa in acute coronary syndromes also found that 1.3% developed gi bleeding over the next 9 months.
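the relative risks quoted above for individual nsaids and for asa can be put on an absolute scale once a baseline risk is assumed. a minimal python sketch of the arithmetic, where the baseline annual risk of a pu complication is an illustrative assumption and not a figure from this review:

```python
def nnh(baseline_risk: float, relative_risk: float) -> float:
    """Number needed to harm = 1 / excess absolute risk."""
    excess_risk = baseline_risk * (relative_risk - 1.0)
    return 1.0 / excess_risk

# Illustrative baseline annual risk of a PU complication in a non-user
# (an assumption for this example, not a value from the review):
baseline = 0.001  # 0.1% per year

# Relative risks for peptic ulcer disease as quoted above
for drug, rr in [("ibuprofen/diclofenac", 2.0), ("naproxen", 4.0),
                 ("piroxicam/ketoprofen", 8.0), ("low-dose ASA", 1.5)]:
    print(f"{drug}: NNH ~ {nnh(baseline, rr):.0f} users per year")
```

the same kind of arithmetic underlies the modeling estimate that 1:1200 to 1:2000 chronic nsaid users die of an attributable pu complication: the answer is driven as much by the assumed baseline risk as by the drug's relative risk.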
the pathways by which asa causes gastrointestinal mucosal damage are well described, as with all nsaids, but the mechanism by which antiplatelet therapy leads to peptic ulcer bleeding is less clear. inhibition of platelet activity in a peptic ulcer that is already hemorrhaging will aggravate the problem and may lead to more peptic ulcers presenting with bleeding that would otherwise have remained "silent." platelet-derived growth factors promote angiogenesis, and this is important in ulcer healing [29] . disruption of these growth factors by clopidogrel may impair peptic ulcer healing and lead to more complications. a population-based cohort [30] estimated the number needed to harm ranged between 30 and 60 for a gastrointestinal hemorrhage within the first 12 months of clopidogrel compared to those not taking this drug. this excess could be related to bias and confounding factors inherent with database studies, but a systematic review [26] of rcts supported this finding, and patients on dual antiplatelet therapy had almost twice the rate of gastrointestinal bleeding compared to those taking asa alone. anticoagulants are commonly used to prevent thromboembolic events in patients with venous thromboembolism, atrial fibrillation, or mechanical heart valves. clinicians and patients are well aware of the risk of bleeding from vitamin k antagonist anticoagulants such as warfarin. the risk of peptic ulcer bleeding is remarkably difficult to quantify as there are no rcts evaluating the risks of warfarin compared to placebo and there are few robust epidemiological studies. most older studies follow cohorts of patients taking warfarin with no comparator group [31] and suggest a large risk. it is generally believed that the gi bleeding risk of warfarin has traditionally been overestimated [32] , and more contemporary assessments support a more modest increase in risk [33] .
a systematic review [34] of randomized trials comparing anticoagulation with asa in atrial fibrillation found that major bleeding adverse events were more common in the anticoagulation group (or = 1.45; 95% ci = 0.93 to 2.27). this estimate covered all bleeding events and was not limited to peptic ulcer bleeding, but if we assume that it also reflects upper gi bleeding and factor in that asa alone also causes an increased risk of bleeding [26] , the overall risk from vitamin k antagonists is increased approximately three-fold. more recently the non-vitamin k antagonist oral anticoagulants (noacs) have been developed and shown to be more efficacious than warfarin in many settings, particularly related to atrial fibrillation [35] . as a result, noacs have overtaken the prescription of vitamin k antagonists for atrial fibrillation and deep vein thrombosis in the usa and several other countries [36] . noacs also cause less intracranial bleeding than vitamin k antagonists but are associated with greater risk of gi bleeding [35] . this meta-analysis of rcts [35] considers all noacs together, and there are significant differences in risk of gi bleeding between drugs in this class. one database study [37] suggested that apixaban was associated with less gi bleeding than dabigatran or rivaroxaban, although another found dabigatran was associated with less upper gi bleeding [38] . interestingly, these database studies found similar rates of gi bleeding with noacs compared to vitamin k antagonists. this is in contrast to rct data, and this may relate to confounding factors or may relate to patients outside of rcts having their coagulation less rigorously monitored. the development of noacs has lowered the threshold at which anticoagulation is considered, and they are being used for ever wider indications [39] . this emphasizes the need to offer gastroprotection in patients taking anticoagulation if they are at a high risk of pu bleeding.
defining high-risk groups is a challenge, but there is rct evidence [39] that adding a noac to asa doubles the risk compared to asa alone. there is also cohort evidence [40] that adding warfarin to clopidogrel triples the bleeding risk. corticosteroids have a wide range of actions including profound immunomodulatory effects. they are used in a wide range of inflammatory and autoimmune conditions [41] , and their adverse event profiles such as osteoporosis, obesity, mood disorder, diabetes mellitus, and risk of infection are well known. corticosteroids also delay wound healing, so it is logical that they may also inhibit peptic ulcer healing and be associated with increased risk of ulcer complications. clinicians are well aware of this putative risk and often provide patients with ulcer prophylaxis [42] . the rct evidence that they cause peptic ulcer complications is, however, less clear. a systematic review of rcts [43] did find an approximately 40% increase in risk of peptic ulcer bleeding or perforation in those taking corticosteroids. however, the statistically significant effect was only seen in hospitalized patients, with events occurring in only 0.1% of ambulatory patients. these data suggest that the main risk is in patients with other risk factors for peptic ulcer complications, particularly those admitted to hospital, and there is no need to routinely provide gastroprotection to those in the community. the main focus should be on limiting the duration of therapy given the other adverse events related to corticosteroids rather than focusing on gastroprotection. selective serotonin reuptake inhibitors (ssris) are the most commonly prescribed antidepressants [44] and have been advocated for a variety of psychiatric and medical conditions [45] . they have a favorable adverse event profile compared to more traditional antidepressants [46] , but concerns have been raised regarding the risk of gi bleeding [47] .
ssris decrease platelet serotonin, and this can result in reduced platelet aggregation [48] . ssris also increase gastric acid production, which could lead to a greater propensity to develop peptic ulceration [49] . an initial uk database study [50] did suggest a threefold increase in gi bleeding in those taking an ssri compared to controls, and this was supported by another cohort study [51] . there have been no rcts evaluating gi bleeding as an outcome, but further observational data have accrued. a systematic review [52] identified 15 case-control studies involving almost 4000 participants and found an increased risk of upper gi bleeding with ssri therapy compared to controls with an odds ratio (or) of 1.66 (95% confidence interval (ci) = 1.44 to 1.92). the systematic review [52] also identified four cohort studies, and the increased risk was similar (or = 1.68; 95% ci = 1.13 to 2.50). the number needed to harm over 1 year varied between 3177 in a low-risk dutch population and 881 in a higher risk us population [42] . the systematic review [52] also evaluated the impact of nsaids on the risk of upper gi bleed and found at least an additive effect. the or for upper gi bleeding in patients taking ssris alone was 1.66, in those taking nsaids alone it was 2.8, and in those taking both drugs it was 4.25. the number needed to harm for those taking both nsaids and ssris was 645 for a low-risk population and 179 for a higher risk us population [52] . these results could be due to bias or residual confounding as they relate to observational data, but these findings are supported by a hong kong study that attempted to reduce this concern [53] . this study evaluated 3358 ssri users and 57,906 non-users and only included patients that had h. pylori eradication therapy. this approach makes the population more homogeneous, and they further reduced the possibility of confounding by conducting a propensity-matched analysis.
the propensity-matched analysis found patients taking ssris had a hazard ratio of 1.95 (95% ci = 1.41 to 2.71) for developing upper gi bleed compared to non-users [53] . h. pylori is the leading cause of peptic ulcer disease worldwide [54, 55] , and a proportion of both gastric and duodenal ulcers caused by this infection will go on to develop complications. a systematic review of observational studies suggested h. pylori is associated with a two-fold increase in peptic ulcer bleeding [56] . there also appears to be an interaction between nsaids and h. pylori, as the same systematic review [56] found an approximately four-fold increased risk of developing peptic ulcer bleeding in those taking nsaids and a 6-fold increase in patients where both factors were present. a further systematic review [57] also found a two-fold increase in upper gastrointestinal bleeding in asa users infected with h. pylori compared to those that were not infected. the number needed to treat varied between 100 and 1000 depending on the underlying risk of peptic ulcer disease in the population [57] . serious comorbidity is associated with peptic ulcer bleeding, although definitions of comorbidity vary between studies [7] . patients admitted to the intensive care unit (icu) exemplify the risk facing patients with severe stress and comorbidity, with around 3% developing significant gi bleeding [58] , and this is associated with length of stay and severity of underlying illness [59] . various scoring systems [60, 61] that evaluate risk of mortality from upper gi bleeding include comorbidity as part of the calculations. a systematic review of death from peptic ulcer bleeding [62] found that mortality was significantly higher in those with comorbidity than those without. in particular, those with malignancy had a 6-fold, those with renal disease a 5-fold, and those with hepatic disease a 4-fold increased risk of mortality [62] .
respiratory and cardiac disease were each associated with a two-fold risk of dying from peptic ulcer bleed, and diabetes mellitus with a relative risk of 1.6 [62] . it is important to note that only three of the 16 studies identified in this review were at low risk of bias, so the quality of the evidence is low, but nevertheless the impact of comorbidity seems to be important, and there is less research on this than many other risk factors for peptic ulcer bleeding. past history of peptic ulcer disease is a strong risk factor for future peptic ulcer, although the impact is less strong after successful h. pylori eradication [63] . there is a paucity of data on the risk of developing complicated peptic ulcer in comparison with population-based controls. systematic review data [64] suggest that in patients taking nsaids, a previous history of peptic ulcer increases the risk of future peptic ulcer two to three-fold, and this increases to 4- to 6-fold for a past history of bleeding peptic ulcer. this is also supported by subgroup analyses of randomized controlled trials [65] . patients with a previous history of peptic ulcer prescribed oral anticoagulants have a doubling of their risk of having a gi bleed over a 10-year follow-up [66] . there are a number of risk factors for developing peptic ulcer disease complications, but the main focus of research has related to preventing nsaid-related peptic ulcer complications. this is understandable as this causes one of the highest increases in risk. the strategies that reduce nsaid-related bleeding are adding ppi therapy, substituting for a cox-2 inhibitor, or adding a prostaglandin analogue. the other approach is screening and treating for h. pylori, and this is the only approach that could be considered for patients other than those taking nsaids. seven days of eradication therapy can heal most patients with h. pylori-positive pud [67] , and treating the infection also dramatically reduces future ulcer recurrence [63] .
this also applies to bleeding peptic ulcer, as a systematic review of 7 rcts involving 578 infected bleeding patients reported that h. pylori eradication was more effective than anti-secretory therapy in preventing future bleeding recurrence [68] . the recurrence rate was 20% in the anti-secretory group and 3% in the h. pylori eradication group with a number needed to treat of seven. most guidelines [15, 16] therefore recommend testing for h. pylori in those with bleeding pud and treating those infected. randomized controlled trials have shown that population h. pylori screening and treatment reduces the incidence of peptic ulceration in the community [69, 70] . the impact on peptic ulcer complications in the general population is less certain, however, as these events are too rare for randomized trials to be powered to detect an impact on this outcome. the rare events observed in these trials highlight that testing for h. pylori is unlikely to be cost-effective in all groups, and any population strategy to screen and treat cannot be instituted on the basis of reduced peptic ulcer complications. population h. pylori screening and treatment is advocated in countries that have a high incidence of gastric adenocarcinoma [71, 72] , as systematic reviews of randomized controlled trials have shown that this reduces the risk of gastric adenocarcinoma [73, 74] . population h. pylori screening and treatment could increase both the length and quality of life, and it has been estimated that almost 9 million disability-adjusted life years could be gained globally [74, 75] . this estimate just focuses on reduction in gastric cancer, and if prevention of peptic ulcer complications was considered, then the disability-adjusted life years gained could be even higher. furthermore, a randomized controlled trial has suggested that h. pylori population test and treat could be cost neutral due to the reduction in dyspepsia in the population [76] [77] [78] . guidelines do not support population h.
pylori screening and treatment in north america [79] , but the other benefits that could accrue from this approach suggest the clinician should have a low threshold for instituting this strategy when considering patients that may be at risk of developing pu complications. for example, patients taking only low-dose asa who are h. pylori positive may benefit from eradication therapy, as one study reported that infected patients who had an asa-related pu bleed given eradication therapy had a similar risk for future bleeding as patients who were asa naïve that had not had a bleed [80] . similarly, a systematic review of rcts [81] reported that patients allocated to h. pylori eradication had an almost 60% reduction in incidence of pud compared to infected nsaid patients in the control group. as this involves one course of antibiotics for 2 weeks rather than long-term treatment with acid suppressive agents, this could be a very cost-effective approach [82] to preventing nsaid-related ulceration, and guidelines are now recommending this strategy [79, 83] . however, h. pylori eradication is not as effective as ppi therapy in patients on long-term nsaid therapy [81] , so this strategy is not sufficient for some patient groups. nsaids reduce gastric prostaglandin production, leading to a loss of mucosal defenses and an increased risk of pud [84] . the main reason that mucosal protection is necessary is the highly acidic environment of the stomach. blocking acid production should reduce the risk of pud even if there is nsaid-mediated loss of mucosal protection. clinical data support this hypothesis, with a systematic review [85] of 18 rcts involving over 10,000 participants demonstrating that ppis reduced pud bleeds by approximately 80% compared to controls, although the effect was less marked in patients who were already taking nsaid therapy long term.
overall the number needed to treat (nnt) was around 100 in these trials, although this was heavily dependent on the underlying risk of the population. ppis also prevented symptomatic and endoscopic ulcers in patients taking nsaids with an nnt of 20 and 5, respectively [85, 86] . a systematic review [87, 88] of 5 rcts involving over 5000 participants also reported that ppis are effective in reducing pu bleeding related to clopidogrel-based antithrombotic therapy. there was a 66% reduction in pu bleed in patients allocated to ppi compared to placebo or famotidine with an nnt of 60 [88] . research on the gastroprotective role of ppi therapy has focused on patients taking nsaids and/or asa. there are a growing number of patients on anticoagulation therapy [36] , and these patients are at increased risk of pu bleed [37] . this was evaluated as part of the cardiovascular outcomes for people using anticoagulant strategies (compass) trial [39] . participants were randomized to rivaroxaban 2.5 mg twice daily with aspirin 100 mg once daily, rivaroxaban 5 mg twice daily alone, or aspirin 100 mg once daily alone to evaluate cardiovascular death, stroke, or myocardial infarction in these groups [39] . this is a 3-by-2 partial factorial rct, as those that were not on a ppi were also randomized to pantoprazole 40 mg or placebo [89••] . a total of 17,598 patients were randomized to the ppi or placebo, and there was no statistically significant difference in the primary outcome of the trial, which was clinically significant upper gi events [89••] . there was a 50% reduction in gastroduodenal bleeding in the ppi arm, but event numbers were low and the nnt = 1770 after 3 years of ppi use. the definitions of pud and pu bleeding were very stringent, and this may have resulted in the nnt being so high.
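the dependence of the nnt on the underlying risk of the population follows directly from nnt = 1 / absolute risk reduction = 1 / (baseline risk × relative risk reduction). a minimal python sketch of this relationship (the baseline bleed risks are illustrative assumptions, not trial data):

```python
def nnt(baseline_risk: float, relative_risk_reduction: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (baseline_risk * relative_risk_reduction)

rrr = 0.8  # ~80% relative reduction in PUD bleeds with PPI, as quoted above

# The NNT falls as the underlying risk of the population rises
# (these baseline bleed risks are illustrative assumptions):
for baseline in (0.00125, 0.0125, 0.05):
    print(f"baseline {baseline:.2%}: NNT ~ {nnt(baseline, rrr):.0f}")
```

with a baseline bleed risk of 1.25%, this reproduces the nnt of around 100 quoted above; the compass nnt of 1770 over 3 years similarly implies a very low event rate in that trial population.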
a post hoc analysis was therefore conducted relaxing these definitions, and this did result in a 50% reduction in pud bleeds, a similar reduction in uncomplicated pud, as well as a 66% reduction in gastric erosions in the ppi group. even when these outcomes were combined, the nnt was still around 500 [89••] . furthermore, the main benefit of ppi therapy was seen in the asa-alone group, emphasizing that ppis have little impact in patients taking anticoagulants alone. evidence therefore suggests that any benefit of ppis relates to patients taking nsaids or asa. the final group to consider are patients admitted to the icu, as these patients are at increased risk of bleeding from upper gastrointestinal stress ulceration [58, 59] . systematic reviews [90, 91] of 19 rcts involving over 2000 icu patients found that ppi therapy reduced overt gi bleeding by 50% with no impact on length of stay, pneumonia, or mortality. ppis were superior to h2ra in these reviews, although this is disputed by a network meta-analysis [92••] of 43 rcts involving over 10,000 patients evaluating clinically important upper gi bleeding as an outcome. this review concluded both ppis and h2ras reduced gi bleeding, and ppis were possibly superior, but the 95% ci were wide (or = 0.58; 95% ci 0.29 to 1.17). this review highlighted that either ppi or h2ra were probably not beneficial in low-risk patients and this intervention should be reserved for those at high risk. the benefits of ppi therapy in preventing pu bleeding should be weighed against the harms of this approach. patients need to take ppi therapy for the duration of risk, which may be life-long in the case of asa users. previously this would have been a significant expense, but as most ppis are now available generically in most countries, the costs have reduced significantly. ppi therapy is also very safe in the short term [93] , but concerns have been raised around the long-term adverse effects associated with these drugs [94] .
ppis have been associated with pneumonia [95] , bone fracture [96] , enteric infections [97] , cardiovascular events [98] , chronic kidney disease [99] , dementia [100] , gastric cancer [101] , and even all-cause mortality [102] . the list of concerns increases with each passing year, and the latest harms that have been highlighted are an increased risk of renal calculi [103, 104] and risk of covid-19 [105] . the problem with all of these associations is that they are based on observational data, usually related to administrative databases. all of these studies have shown that sicker patients tend to be prescribed ppi therapy, and comorbidities are a strong risk factor for developing other diseases [106] . it is possible that being prescribed ppi therapy is a good marker for comorbidity and all of these harms relate to residual confounding [106] . to evaluate this possibility, the rct evaluating ppi in patients taking anticoagulation and/or asa [89••] described above also prospectively collected information on adverse events [107•] . in over 53,000 patient-years of follow up, there was no difference in risk of pneumonia, fracture, chronic kidney disease, dementia, myocardial infarction, gastrointestinal cancers, and all-cause mortality between the ppi and placebo groups [107•] . the ppi group had slightly more enteric infections than those taking placebo, but the number needed to harm was over 900 for each year of ppi therapy. this trial followed patients for 3 years, and it is possible that adverse events may take longer to accrue, but there was no divergence in the curves over time in the rct [107•] . furthermore, an rct also found no adverse events in the ppi arm compared to surgery in reflux patients over 12 years [108] , although this trial was underpowered.
finally, there was actually a reduction in mortality in the high-dose ppi arm of a barrett's esophagus trial comparing esomeprazole 20 mg versus 40 mg bid given over a mean of 9 years in over 2500 patients [109] . there are also concerns that ppis may interact with clopidogrel, reducing its efficacy [110•] , and this could not be addressed in the compass trial as patients had to discontinue this drug. a systematic review of rcts [87, 88] did not find any difference in cardiovascular events in the ppi arm compared to the placebo/famotidine arms in patients taking clopidogrel, suggesting that the results of observational data probably relate to residual confounding. these data suggest that the benefits of ppi therapy outweigh any putative risk provided the appropriate patients are selected for gastroprotection. the gastrointestinal adverse effects of nsaids largely relate to the cox-1 activity of the drug, while the analgesic effects of nsaids relate to cox-2 inhibition. cox-2 selective inhibitors were therefore developed on the principle that these drugs could provide similar analgesic properties to traditional nsaids without the gastrointestinal events [111] . systematic reviews of rcts confirmed this hypothesis, with cox-2 inhibitors having a similar efficacy profile [112] but with a 70% reduction in endoscopic ulcers [113] and a 60% reduction in pu bleed and pu complications [113] . cox-2 inhibitors were initially used widely to protect against nsaid-related gi injury, but enthusiasm for this approach waned once it became apparent from rcts [114, 115] that the risk of cardiovascular events was increased by these drugs. a systematic review [116] of 280 rcts comparing nsaids/cox-2 inhibitor with placebo and 474 rcts comparing nsaids with another nsaid/cox-2 inhibitor confirmed that cox-2 inhibitors increased the risk of cardiovascular events by about 33% and this outweighed any benefits in terms of reduction of pu complications.
an increase in cardiovascular event risk was also seen with nsaids such as diclofenac and ibuprofen, and the impact seemed as great as with cox-2 inhibitors [116] . in contrast, naproxen was not associated with an increased risk of cardiovascular events, suggesting this was a safer nsaid to use [116] . these data raise the question of whether any nsaid other than naproxen is safe to use in the long term, as a 33% increase in cardiovascular disease will outweigh any improvement in quality of life for most patients. furthermore, another systematic review of rcts [117] suggested cox-2 inhibitors were associated with an increased risk of dementia, highlighting there may be other risks to taking these drugs long-term. another approach to gastroprotection is to replace the upper gastrointestinal deficiency in prostaglandin caused by nsaids with a prostaglandin analogue. misoprostol, a synthetic prostaglandin e1 analogue, dramatically reduces endoscopic ulcers in a systematic review of 22 rcts involving almost 6000 patients taking nsaids with an nnt of 10 [85] . there was early promise [65] that this would translate into a reduction in pu complications, but this was not supported by a systematic review of three rcts involving almost 9000 patients, where there was no statistically significant reduction in pu bleeding [85] . the use of misoprostol is also limited by adverse events such as diarrhea, with up to 20% of patients withdrawing because of adverse events [118] . this can be mitigated by lowering the dose [118] , but it remains a significant problem when used for long-term prophylaxis. prostaglandin analogues therefore cannot be recommended for gastroprotection routinely in patients taking nsaids. there may be selected patients where this might be the appropriate drug. for example, rct data suggest that misoprostol may reduce nsaid-related small bowel ulcers detected by video capsule endoscopy [119] .
Whether this translates into improvement in clinical outcomes remains to be determined, but this may be an option for patients with predominantly small bowel ulcer problems who cannot discontinue NSAIDs.

The above evidence provides a framework for selecting which patients should receive gastroprotection. In general, it should be reserved for patients over the age of 60 years taking NSAIDs or those being admitted to the ICU. Even in these groups, the risk is not sufficiently high to warrant gastroprotection for everyone [92••]. Ideally what is required is a validated risk calculator that gives the absolute risk of developing peptic ulcer disease over a given period of time, similar to those used to determine the risk of cardiovascular disease [120]. Patients starting long-term NSAID or low-dose ASA therapy should all routinely be screened for H. pylori, and those infected should receive eradication therapy with regimens that follow the latest guidelines [79, 121]. This will reduce PU complications and will also have added benefits in reducing future dyspepsia and gastric cancer risk; as this is a one-off treatment, it is likely to be cost-effective. For those over 60 years of age taking long-term NSAIDs, naproxen should be the drug of choice because of its favorable cardiovascular risk profile. Additional risk factors should be ascertained according to the scoring system outlined in Table 1. Those with 6 points or more for naproxen, or 8 points for low-dose ASA (as the underlying risk of developing a PU complication is lower than for naproxen), should be offered long-term PPI therapy with careful discussion of the risks and benefits. For patients admitted to the ICU, additional risk factors should also be ascertained, as determined by a systematic review [122] that identified 8 observational studies involving over 116,000 ICU patients.
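The decision rule described above can be sketched as a small function. The 6-point (naproxen) and 8-point (low-dose ASA) thresholds come from the text; the per-factor point values live in the article's Table 1, which is not reproduced here, so how a given patient's point total is assembled is left out of this sketch:

```python
# Thresholds taken from the text: long-term PPI gastroprotection is offered
# at >= 6 Table 1 points for naproxen users, >= 8 for low-dose ASA users.
PPI_THRESHOLDS = {"naproxen": 6, "low_dose_asa": 8}

def offer_long_term_ppi(drug, risk_points):
    """Return True when the summed risk points meet the drug-specific
    threshold for offering long-term PPI gastroprotection."""
    if drug not in PPI_THRESHOLDS:
        raise ValueError(f"no threshold defined for {drug!r}")
    return risk_points >= PPI_THRESHOLDS[drug]

# The same point total can cross the naproxen threshold but not the ASA one,
# reflecting the lower underlying complication risk with low-dose ASA.
print(offer_long_term_ppi("naproxen", 7))      # True
print(offer_long_term_ppi("low_dose_asa", 7))  # False
```

The asymmetric thresholds encode the point made in the text: because the baseline PU-complication risk on low-dose ASA is lower, more accumulated risk is needed before prophylaxis pays off.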
Patients with chronic liver disease and/or coagulopathy should be given prophylaxis with PPI therapy during their hospital admission [122]. Similarly, those who need mechanical ventilation and are also in shock may benefit from PPI therapy [122]. Those who are discharged should have their PPI discontinued if there is no indication for continued therapy [123].

There is a wealth of RCT evidence on the benefits of H. pylori eradication and PPI therapy to prevent PU complications in patients taking NSAIDs. There is also RCT evidence on the benefits of PPI therapy in patients taking anticoagulation and for ICU patients. It is clear from these trials that these interventions are effective, but high-risk groups need to be identified, and this should be the focus of future research.

Conflict of interest: Takeshi Kanno declares that he has no conflict of interest. Paul Moayyedi declares that he has no conflict of interest.

References (papers of particular interest, published recently, have been highlighted as •• Of major importance):
Trends for incidence of hospitalization and death due to GI complications in the United States from
Acute upper GI bleeding: did anything change?
Time trend analysis of incidence and outcome of acute upper GI bleeding between 1993
Trends and outcomes of hospitalizations for peptic ulcer disease in the United States
Systematic review and meta-analysis of proton pump inhibitor therapy in peptic ulcer bleeding
Epinephrine injection versus epinephrine injection and a second endoscopic method in high-risk bleeding ulcers
Systematic review: the global incidence and prevalence of peptic ulcer disease
Systematic review of the epidemiology of complicated peptic ulcer disease: incidence, recurrence, risk factors and mortality
Burden and cost of gastrointestinal, liver, and pancreatic diseases in the United States: update
Trends in opioid and nonsteroidal anti-inflammatory use and adverse events
Trends in ambulatory prescribing of antiplatelet therapy among US ischemic stroke patients
Trends in anticoagulant prescribing: a review of local policies in English primary care
Peptic ulcer disease
Mortality from peptic ulcer bleeding: the impact of comorbidity and the use of drugs that promote bleeding
Gastrointestinal bleeding with oral anticoagulation: understanding the scope of the problem
Management of nonvariceal upper gastrointestinal bleeding: guideline recommendations from the International Consensus Group
Asia-Pacific Working Group consensus on non-variceal upper gastrointestinal bleeding: an update
PPI versus histamine H2 receptor antagonists for prevention of upper gastrointestinal injury associated with low-dose aspirin: systematic review and meta-analysis
Canadian Association of Gastroenterology Consensus Group.
Canadian consensus guidelines on long-term nonsteroidal antiinflammatory drug therapy and the need for gastroprotection: benefits versus risks
The relative efficacies of gastroprotective strategies in chronic users of nonsteroidal anti-inflammatory drugs
Epidemiology of perforated peptic ulcer: age- and gender-adjusted analysis of incidence and mortality
Comparison of mortality from peptic ulcer bleed between patients with or without peptic ulcer antecedents
Peptic ulcer disease and non-steroidal antiinflammatory drugs
New insights into the use of currently available non-steroidal anti-inflammatory drugs
Risk of upper gastrointestinal ulcer bleeding associated with selective cyclo-oxygenase-2 inhibitors, traditional non-aspirin non-steroidal antiinflammatory drugs, aspirin and combinations
Individual NSAIDs and upper gastrointestinal complications
Low doses of acetylsalicylic acid increase risk of gastrointestinal bleeding in a meta-analysis
Quantitative estimation of rare adverse events which follow a biological progression: a new model applied to chronic NSAID use
Effects of clopidogrel in addition to aspirin in patients with acute coronary syndromes without ST-segment elevation
Platelets modulate gastric ulcer healing: role of endostatin and vascular endothelial growth factor release
Gastrointestinal events with clopidogrel: a nationwide population-based cohort study
Age-related risks of long term oral anticoagulant therapy
Bleeding risks of antithrombotic therapy
Population-based cohort study of warfarin-treated patients with atrial fibrillation: incidence of cardiovascular and bleeding outcomes
Systematic review of long term anticoagulation or antiplatelet treatment in patients with non-rheumatic atrial fibrillation
Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta-analysis of randomised trials
Trends and variation in oral anticoagulant choice in patients with atrial fibrillation
Gastrointestinal safety of direct oral anticoagulants: a large population-based study
Risks and benefits of direct oral anticoagulants versus warfarin in a real world setting: cohort study in primary care
Rivaroxaban with or without aspirin in stable cardiovascular disease
Risk of bleeding in patients with acute myocardial infarction treated with different combinations of aspirin, clopidogrel, and vitamin K antagonists in Denmark: a retrospective analysis of nationwide registry data
Corticosteroid mechanisms of action in health and disease
"A surviving myth" - corticosteroids are still considered ulcerogenic by a majority of physicians
Corticosteroids and risk of gastrointestinal bleeding: a systematic review and meta-analysis
Increased use of antidepressants in Canada
Effect of antidepressants and psychological therapies in irritable bowel syndrome: an updated systematic review and meta-analysis
Efficacy and tolerability of selective serotonin reuptake inhibitors compared with tricyclic antidepressants in depression treated in primary care: systematic review and meta-analysis
Selective serotonin reuptake inhibitors and increased bleeding risk: are we missing something?
Serotonin reuptake inhibitor antidepressants and abnormal bleeding: a review for clinicians and a reconsideration of mechanisms
Fluoxetine and sertraline stimulate gastric acid secretion via a vagal pathway in anaesthetised rats
Association between selective serotonin reuptake inhibitors and upper gastrointestinal bleeding: population based case-control study
Use of selective serotonin reuptake inhibitors and risk of upper gastrointestinal tract bleeding: a population-based cohort study
Risk of upper gastrointestinal bleeding with selective serotonin reuptake inhibitors with or without concurrent nonsteroidal anti-inflammatory use: a systematic review and meta-analysis
Risks of hospitalization for upper gastrointestinal bleeding in users of selective serotonin reuptake inhibitors after Helicobacter pylori eradication therapy: a propensity score matching analysis
Global prevalence of Helicobacter pylori infection: systematic review and meta-analysis
Role of Helicobacter pylori infection and non-steroidal anti-inflammatory drugs in peptic-ulcer disease: a meta-analysis
Helicobacter pylori infection and the risk of upper gastrointestinal bleeding in low dose aspirin users: systematic review and meta-analysis
Prevalence and outcome of gastrointestinal bleeding and use of acid suppressants in acutely ill adult intensive care patients
The attributable mortality and length of intensive care unit stay of clinically important gastrointestinal bleeding in critically ill patients
Risk assessment after acute upper gastrointestinal haemorrhage
A risk score to predict need for treatment for upper-gastrointestinal haemorrhage
Effect of comorbidity on mortality in patients with peptic ulcer bleeding: systematic review and meta-analysis
Eradication therapy for peptic ulcer disease in Helicobacter pylori-positive people
Risk for serious gastrointestinal complications related to use of nonsteroidal anti-inflammatory drugs.
A meta-analysis
Misoprostol reduces serious gastrointestinal complications in patients with rheumatoid arthritis receiving nonsteroidal anti-inflammatory drugs
Bleeding risk and major adverse events in patients with previous ulcer on oral anticoagulation therapy
Systematic review and meta-analysis: is 1-week proton pump inhibitor-based triple therapy sufficient to heal peptic ulcer?
Helicobacter pylori eradication therapy vs. antisecretory non-eradication therapy (with or without long-term maintenance antisecretory therapy) for the prevention of recurrent bleeding from peptic ulcer
Effect of population screening and treatment for Helicobacter pylori on dyspepsia and quality of life in the community: a randomised controlled trial
Impact of Helicobacter pylori eradication on dyspepsia, health resource use, and quality of life in the Bristol Helicobacter Project: randomised controlled trial
Helicobacter pylori eradication as a strategy for preventing gastric cancer
Second Asian-Pacific consensus guidelines for Helicobacter pylori eradication
Helicobacter pylori eradication for the prevention of gastric neoplasia
Helicobacter pylori eradication therapy to prevent gastric cancer: systematic review and meta-analysis
The global, regional, and national burden of stomach cancer in 195 countries, 1990-2017: a systematic analysis for the Global Burden of Disease Study
A community screening program for Helicobacter pylori saves money: 10-year follow-up of a randomised controlled trial
The cost-effectiveness of population Helicobacter pylori screening and treatment: a Markov model using economic data from a randomised controlled trial
Clinical trial: prolonged beneficial effect of Helicobacter pylori eradication on dyspepsia consultations - the Bristol Helicobacter Project
ACG clinical guideline: treatment of Helicobacter pylori infection
Effects of Helicobacter pylori infection on long-term risk of peptic ulcer bleeding in low-dose aspirin users
Meta-analysis: role of
Helicobacter pylori eradication in the prevention of peptic ulcer in NSAID users
Systematic reviews of the clinical effectiveness and cost-effectiveness of proton pump inhibitors in acute upper gastrointestinal bleeding
Guidelines for prevention of NSAID-related ulcer complications
Gastric mucosal defense and cytoprotection: bench to bedside
Effects of gastroprotectant drugs for the prevention and treatment of peptic ulcer disease and its complications: meta-analysis of randomised trials
Prevention of NSAID-induced gastroduodenal ulcers
Proton-pump inhibitors for the prevention of upper gastrointestinal bleeding in adults receiving antithrombotic therapy
Clopidogrel-based antithrombotic therapy for cardiovascular prevention: a systematic review and meta-analysis of randomized controlled trials
•• Safety of proton pump inhibitors based on a large, multi-year, randomized trial of patients receiving rivaroxaban or aspirin. This trial suggested that PPI therapy reduced the risk of peptic ulcer and peptic ulcer bleeding, but the event rate was too low for this to be cost-effective in all patients taking anticoagulation.
Proton pump inhibitors versus histamine 2 receptor antagonists for stress ulcer prophylaxis in critically ill patients: a systematic review and meta-analysis
Efficacy and safety of proton pump inhibitors for stress ulcer prophylaxis in critically ill patients: a systematic review and meta-analysis of randomized trials
Efficacy and safety of gastrointestinal bleeding prophylaxis in critically ill patients: systematic review and network meta-analysis
•• Comprehensive network meta-analysis of all treatment options to prevent stress ulcer bleeding in intensive care patients.
Acid suppressive therapy was effective but probably not beneficial in low risk patients.
Medical treatments in the short term management of reflux oesophagitis
Complications of proton pump inhibitor therapy
Risk of community-acquired pneumonia and use of gastric acid suppressive drugs
Long-term proton pump inhibitor therapy and risk of hip fracture
Omeprazole as a risk factor for Campylobacter gastroenteritis: case-control study
Proton pump inhibitor use and risk of adverse cardiovascular events in aspirin treated patients with first time myocardial infarction: a nationwide propensity score matched analysis
Proton pump inhibitor use and risk of chronic kidney disease
Association of proton pump inhibitors with risk of dementia: a pharmacoepidemiological claims data analysis
Long-term proton pump inhibitors and risk of gastric cancer development after treatment for Helicobacter pylori: a population-based study
Risk of death among users of proton pump inhibitors: a longitudinal observational cohort study of United States veterans
Use of proton pump inhibitors increases risk of incident kidney stones
Leaving no stone unturned in the search for adverse events associated with the use of proton pump inhibitors
Increased risk of COVID-19 among users of proton pump inhibitors
The risks of PPI therapy
•• Safety of proton pump inhibitors based on a large, multi-year, randomized trial of patients receiving rivaroxaban or aspirin. This trial evaluated the safety of PPIs with over 53,000 patient-years of follow-up.
There was no evidence of harm of PPIs apart from a risk of enteric infections, and the data suggested that the various harms of PPIs described in observational studies are likely to be overestimated.
Long-term safety of proton pump inhibitor therapy assessed under controlled, randomised clinical trial conditions: data from the SOPRAN and LOTUS studies
•• Esomeprazole and aspirin in Barrett's oesophagus (AspECT): a randomised factorial trial. The first randomized trial evaluating PPI and aspirin to prevent progression of Barrett's esophagus to neoplasia. If this trial started today there would be objection to such high doses of PPI being used for an average of 9 years, but mortality was reduced in the twice-daily PPI group. The GI bleeding rate was very low in this trial.
NSAID induced gastrointestinal damage and designing GI-sparing NSAIDs
Efficacy, tolerability, and upper gastrointestinal safety of celecoxib for treatment of osteoarthritis and rheumatoid arthritis: systematic review of randomised controlled trials
Gastrointestinal safety of cyclooxygenase-2 inhibitors: a Cochrane Collaboration systematic review
Cardiovascular events associated with rofecoxib in a colorectal adenoma chemoprevention trial
Cardiovascular risk associated with celecoxib in a clinical trial for colorectal adenoma prevention
(CNT) Collaboration.
Vascular and upper gastrointestinal effects of non-steroidal anti-inflammatory drugs: meta-analyses of individual participant data from randomised trials
Aspirin and other nonsteroidal anti-inflammatory drugs for the prevention of dementia
Misoprostol dosage in the prevention of nonsteroidal anti-inflammatory drug-induced gastric and duodenal ulcers: a comparison of three regimens
Misoprostol for small bowel ulcers in patients with obscure bleeding taking aspirin and non-steroidal anti-inflammatory drugs (MASTERS): a randomised, double-blind, placebo-controlled, phase 3 trial
Framingham-based tools to calculate the global risk of coronary heart disease
The Toronto Helicobacter pylori consensus in context. Reply
Predictors of gastrointestinal bleeding in adult ICU patients: a systematic review and meta-analysis
Prevalence and predictors of inappropriate prescribing according to the Screening Tool of Older People's Prescriptions and Screening Tool to Alert to Right Treatment version 2 criteria in older patients discharged from geriatric and internal medicine wards: a prospective observational multicenter study
Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

key: cord-010884-g4gesvzt
authors: heitzer, andrew m.; piercy, jamie c.; peters, brittany n.; mattes, allyssa m.; klarr, judith m.; batton, beau; ofen, noa; raz, sarah
title: cumulative antenatal risk and kindergarten readiness in preterm-born preschoolers
date: 2019-08-16
journal: j abnorm child psychol
doi: 10.1007/s10802-019-00577-8
doc_id: 10884 cord_uid: g4gesvzt

A suboptimal intrauterine environment is thought to increase the probability of deviation from the typical neurodevelopmental trajectory, potentially contributing to the etiology of learning disorders. Yet the cumulative influence of individual antenatal risk factors on emergent learning skills has not been sufficiently examined.
We sought to determine whether antenatal complications, in aggregate, are a source of variability in preschoolers' kindergarten readiness, and whether specific classes of antenatal risk play a prominent role. We recruited 160 preschoolers (85 girls; ages 3-4 years), born ≤33 6/7 weeks' gestation, and reviewed their hospitalization records. Kindergarten readiness skills were assessed with standardized intellectual, oral-language, prewriting, and prenumeracy tasks. Cumulative antenatal risk was operationalized as the sum of complications identified out of nine common risks. These were also grouped into four classes in follow-up analyses: complications associated with intra-amniotic infection, placental insufficiency, endocrine dysfunction, and uteroplacental bleeding. Linear mixed model analyses, adjusting for sociodemographic and medical background characteristics (socioeconomic status, sex, gestational age, and sum of perinatal complications), revealed an inverse relationship between the sum of antenatal complications and performance in three domains: intelligence, language, and prenumeracy (p = 0.003, 0.002, and 0.005, respectively). Each of the four classes of antenatal risk accounted for little variance, yet together they explained 10.5%, 9.8%, and 8.4% of the variance in the cognitive, literacy, and numeracy readiness domains, respectively. We conclude that an increase in the co-occurrence of antenatal complications is moderately linked to poorer kindergarten readiness skills even after statistical adjustment for perinatal risk. Electronic supplementary material: the online version of this article (10.1007/s10802-019-00577-8) contains supplementary material, which is available to authorized users.

born preschoolers and their term-born peers yielded poorer scores in the former group, regardless of gestational age.
Group differences have been documented in expressive and receptive language abilities, as well as in visuomotor or graphomotor (preprinting) skills, in both very preterm (Foster-Cohen et al. 2010; Caravale et al. 2005; Torrioli et al. 2000) and late-preterm (Baron et al. 2009; Baron et al. 2011) cohorts. Establishing the nature of early biomedical risk factors that forecast deficits in critical academic precursor skills is essential for identification of preschoolers at risk for deviation from the typical neurodevelopmental trajectory. Findings from a recent quantitative integration of the literature suggest that both preterm birth and adverse antenatal factors are important antecedents of intellectual disability diagnosed between 1 and 17 years of age (Huang et al. 2016). Yet few preschool outcome studies included within-group examination of the links between complications associated with preterm birth and performance on neuropsychological tasks that tap domain-specific (literacy or numeracy) precursor skills. Foster-Cohen et al. (2010), focusing on the impact of peri- and neonatal, but not antenatal, complications, reported no significant associations between several early risk factors (including the sum of perinatal complications) and language delay within a cohort of four-year-old preschoolers born very preterm. In a similar sample of preschoolers born <34 weeks' gestation, Torrioli et al. (2000) were able to document a significant relationship between a single major antenatal complication, intrauterine growth restriction, and intelligence (but not visual-motor integration). Consistent with the prevailing fetal programming framework (Barker 1998), conditions or risks originating in utero have the unique capacity to modify long-term physical health and behavioral outcome.
Though the exact mechanisms leading to disease or cognitive-behavioral deficits have yet to be specified, it has been suggested that fetal adaptation to environmental stress may involve vascular, metabolic, or endocrine changes that permanently alter bodily structure or function (Hocher et al. 2001). Antenatal perturbations are likely transmitted to the infant through their effects on placental function. The latter, in turn, adversely influences fetal and postnatal brain development and cognitive-behavioral outcome (Zeltser and Leibel 2011; Buss et al. 2012; Davis and Sandman 2012). Within the high-risk group of preterm-born children, both the variability in the base rates of various antenatal complications associated with prematurity and the sheer number of medical risk factors that require consideration often impede exploration of the developmental outcome effects of early biological adversity. Additionally, the developmental impact of discrete medical complications is probably cumulative (Shalev et al. 2014). Cumulative risk indices may show increased stability across developmental periods, accounting for more outcome variability between individuals than specific complications (Wade et al. 2015). Aggregate scores that reflect cumulative risk associated with a distinct early risk epoch may be more sensitive than discrete complications, which are likely linked to small effects that are difficult to detect (Whitehouse et al. 2014). Although the influence of cumulative perinatal risk on developmental outcome of preterm children has received some consideration (e.g., Carmody et al. 2006; Foster-Cohen et al. 2010), the effects of cumulative antenatal risk in this vulnerable group remain essentially unexplored. Hence, our chief objective was to examine the combined contribution of common antenatal complications to explaining individual differences in academic (kindergarten) readiness within a preterm-born cohort.
In addition to gauging intellectual abilities, we evaluated language skills as precursors of reading attainment, visual-motor integration skills as antecedents of writing-skill development, and early number concepts as the preschool forerunners of math achievement. We predicted that the degree of antenatal risk in preterm-born preschoolers would be inversely related to both general cognitive and domain-specific neuropsychological skills that provide the basis for scholastic achievement.

Preterm-born children (<34 weeks' gestation) were recruited for the study between 3 and 4 years of age and evaluated between May 2011 and July 2016. The children were born at William Beaumont Hospital (WBH), Royal Oak, MI, in 2007-2012, and treated in the neonatal intensive care unit (NICU). At the WBH NICU, resuscitation is attempted for all infants with an estimated gestational age ≥23 0/7 weeks (Batton et al. 2011). Children with congenital anomalies, or who required mechanical ventilation after discharge, were excluded from the retrospectively identified NICU cohort matching our inclusion criteria. The families of 41.63% of 860 cases could still be reached for a recruitment attempt based on contact information provided at the time of birth (see Online Resource 1). Of these, families of 38.27% of cases were not interested, for multiple reasons. In accord with WBH Human Investigation Committee guidelines, nonparticipating families were not specifically queried, yet among the common reasons spontaneously provided by families for nonparticipation were being "too busy," residing too far away, or not wanting to travel from suburban areas to the city. Families of 51 additional cases (14.24% of those successfully contacted) "no-showed" for the scheduled assessment twice and were not rescheduled. The 170 available participants constituted 51.20% of the pool of cases whose families could be contacted between 3 and 4 years of age.
Of the 170 evaluated cases, ten were excluded from data analyses: five with suspected antenatal substance exposure and five with missing background medical information pertinent to this investigation. Altogether, 160 cases were available for this study (see Table 1 for sociodemographic characteristics). As the table shows, our sample included participants from a very broad socioeconomic range. Nonetheless, the relatively high mean SES (48.6 of 66 possible points; SD = 10.00) reflects the composition of the catchment area of WBH. This region includes primarily middle social strata, thereby minimizing the possibility of confounding the effects of antenatal risk with other adverse environmental factors associated with socioeconomic status. Correspondingly, about 85% of NICU admissions were covered by private medical insurance and only 15% by Medicaid. Additionally, 78.1% of the mothers and 62.3% of the fathers had attained a college degree. The final sample included 75 (46.9%) boys and 28 (17.5%) participants of African-American descent. The proportion of males to females in our sample (46.9%:53.1%) was not significantly different from the proportion observed in the remaining portion of the total relevant NICU cohort from which we attempted to recruit (53.14%:46.86%; χ²[1] = 2.05, p = 0.15). The proportion of singletons to multiples in our sample (56.3%:43.8%) was also not significantly different from the proportion observed in the remainder of the total cohort (60.71%:39.29%; χ²[1] = 1.08, p = 0.30). Information about racial distribution was available for two thirds of the total relevant cohort. As noted above, our sample included 17.5% African Americans, a somewhat smaller proportion than that observed in the remainder of the relevant NICU cohort (24.16%; χ²[1] = 3.01, p = 0.08).
The mean gestational age (30.4 ± 2.6 weeks) and birth weight (1,424 ± 457 g) in our sample approximated the mean gestational age (29.8 ± 2.8 weeks) and birth weight (1,434.3 ± 513 g) available for the total relevant cohort. Likewise, the length of stay was similar for our sample (43.42 ± 34.17 days) and the total cohort (42.9 ± 34.5 days), and the proportion of children requiring ventilation in our sample was not statistically different from that observed in the remaining portion of the total cohort (36.5% vs. 38.57%; χ²[1] = 0.29, p = 0.585). According to parental report, none of the children had sustained a severe head injury with loss of consciousness. Postnatal seizure history was reported for 11 cases, yet only three required anticonvulsant medications. Whereas the intellectual abilities of two of these three children fell well within the average range, the remaining case manifested severe intellectual deficits. As Table 1 shows, ninety participants were singletons and seventy were products of multiple pregnancy. Sixty-four multiples were co-members of twinships or triplets and therefore shared antenatal risk. Descriptive statistics regarding pregnancy and perinatal risk in our sample were based on data obtained retrospectively from hospital records (Table 2). Additional information regarding intervention procedures is provided in Online Resource 2. Gestational age in our sample ranged from 23 4/7 to 33 6/7 weeks and was determined by maternal dates and confirmed by early prenatal ultrasound in >95% of cases. Three children with a CP diagnosis (spastic diplegia) were included in the sample. Additionally, there were three cases with perinatal diagnoses of grade III intraventricular hemorrhage, one with grade IV, one with periventricular leukomalacia (PVL), one with both PVL and subsequent cerebral palsy (spastic diplegia), and one with grade IV hemorrhage coupled with hydrocephalus (requiring reservoir placement).
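The χ²[1] comparisons reported above are ordinary Pearson chi-square tests on 2x2 count tables. A self-contained sketch using only the standard library; the counts in the usage example are invented for illustration, not the study's actual cohort counts:

```python
import math

def chisq_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_df1(stat):
    """Upper-tail p-value for chi-square with 1 df, using the identity
    P(X > stat) = erfc(sqrt(stat / 2))."""
    return math.erfc(math.sqrt(stat / 2.0))

# Hypothetical counts: 75 boys / 85 girls in a sample versus
# 180 boys / 160 girls in a comparison cohort.
stat = chisq_2x2(75, 85, 180, 160)
print(round(stat, 2), round(p_value_df1(stat), 3))
```

For df = 1 the familiar 0.05 cutoff sits at a statistic of about 3.84, which is why the reported values of 2.05, 1.08, 0.29, and 3.01 are all nonsignificant.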
Adding the aforementioned single seizure case with severe intellectual impairment, the "significant brain injury" subgroup included 11 cases. Importantly, statistical analyses were performed both with and without these cases.

General assessment considerations: Evaluations were conducted between 2011 and 2016 by clinical psychology graduate students who had been extensively trained in developmental neuropsychological assessment. They were kept uninformed about participants' medical background data, with the single exception of being aware that the children were NICU graduates. All testing and other data collection procedures were conducted in compliance with ethical standards of the Helsinki 1964 Declaration and its later amendments, as well

Table 2 footnotes (displaced from the table during extraction):
... Kramer et al. (2001)
i. Summary score for the nine above-listed antenatal complications with sample frequency >5%
j. Includes any atypical presentations (breech, transverse lie, footling, etc.)
k. Arterial cord pH examined for 130 participants; arterial pH below 7.1 was recorded (n = 3). When only venous cord pH was available (n = 21), venous pH below 7.2 was recorded (n = 1). For four cases, initial capillary blood below 7.2 was recorded (n = 1), whereas acid-base information was unavailable for five cases
l. As determined by obstetrician; >95% of cases were corroborated by antenatal ultrasound. Distribution: 33 cases ≤28 0/7 weeks (20.62%) + 31 cases ≤30 0/7 weeks (19.37%) + 62 cases <32 6/7 weeks (38.75%) + 34 cases ≤33 6/7 weeks (21.25%)
m. Initial hematocrit <40%
n. Established by positive blood culture
o. Chronic lung disease: supplemental oxygen required at 36 weeks' gestation or discharge for infants <32 weeks' gestation. No cases were observed in this sample with gestational age ≥32 weeks
p. Based on a chest roentgenogram and clinical evaluation
q. Peak bilirubin ≥12 mg/dl
r. Diagnosed at least once during NICU stay
s. Documented on the basis of cranial ultrasound.
(Mild = bleed grades 1 & 2; severe = grades 3 & 4, using grading criteria by Volpe, 2001.) Routine cranial ultrasound was given to all infants with gestational age ≤32 weeks, and when clinically indicated to infants with gestational age >32 weeks. There were twenty cases (12.5%) with mild brain bleeds (sixteen grade I and four grade II) and seven (4.4%) with severe intracranial pathology, including three with grade III intraventricular hemorrhage, two with grade IV (one requiring a shunt), and two cases with periventricular leukomalacia
t. Infants discharged on the ventilator were not included in the current study
u. Diagnosed by clinical manifestations and echocardiographic information
v. Summary score for the nine above-listed perinatal complications with sample frequency >5%
w. The rate of severe retinopathy of prematurity (> stage 2) in the total sample was 4.11%, below our inclusion cutoff (one case with stage III, four cases with stage III+, and two cases with stage IV), and there were no cases with grade IV ROP after exclusion of eleven cases with significant neurological background. All cases with stage III+ and stage IV had received laser treatment

(Wechsler 2002).

Prereading skills: Language skills were assessed using the Core Language (CL) score from the Clinical Evaluation of Language Fundamentals-Preschool (CELF-P2; Wiig et al. 2004). The CL stability coefficient for ages 3:0-3:11 is high (r12 = 0.87), whereas internal consistency and split-half reliability are excellent (rxx = 0.91 and 0.92, respectively). The CL scale comprises three subtests: Sentence Structure, Word Structure, and Expressive Vocabulary.

Prenumeracy skills: Quantitative knowledge and reasoning were measured using the Applied Problems (AP) subtest from the Woodcock-Johnson (WJ-III) Tests of Achievement (Woodcock et al. 2001). At the prekindergarten level, AP assesses emergent counting, addition, and subtraction skills.
the split-half reliability for ages 3 and 4 is excellent (r11 = 0.92 and 0.94, respectively), and test-retest correlations range from r12 = 0.85-0.90 for ages 2-7 (woodcock et al. 2001). quantitative concepts (qc), another wj-iii task that may be administered at three years of age, does not possess an adequate floor (alfonso and flanagan 2002) and was therefore not included.

preprinting skills

eye-hand coordination skills were assessed using the visual-motor integration (vmi) subtest from the peabody developmental motor scales (pdms-2; folio and fewell 2000). the vmi includes items that require reaching and grasping, building with blocks, or copying designs. internal consistency reliability for 36-47 months of age is excellent (r11 = 0.94), whereas the overall test-retest (r12 = 0.92) and inter-scorer (r11 = 0.98) reliability coefficients are also exceptionally high (folio and fewell 2000).

descriptive information for performance measures

the mean fsiq, cl, and vmi scores (±sd) of our participants fell well within the average range on three of the four readiness measures (107.83 ± 17.62, 106.43 ± 13.60, and 10.73 ± 2.91, respectively), whereas the mean ap score fell above average (116.71 ± 13.55). these favorable results are likely attributable to the preponderantly middle-class background of our sample. notably, participants' scores spanned a broad range (with similar ranges and sds for ap and cl), and it was our goal to explore the contribution of antenatal risk to explaining this variability. all standard scores were corrected for prematurity: recalculating the preterm preschooler's age at testing as time elapsed since the expected date of delivery allows one to derive standard scores based on age-reference norms of typical children who are similar in biological maturity. additional descriptive data for the total sample and the subsample without significant brain injury may be found in online resource 3.
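the prematurity correction described above (deriving age at testing from the expected date of delivery rather than the birth date) amounts to simple date arithmetic. the sketch below is illustrative only; the function names and dates are assumptions, not the authors' code:

```python
from datetime import date

def corrected_age_days(due_date: date, test_date: date) -> int:
    """Age corrected for prematurity: time elapsed since the
    expected date of delivery rather than the actual birth date."""
    return (test_date - due_date).days

def corrected_age_via_prematurity(birth_date: date, due_date: date,
                                  test_date: date) -> int:
    """Equivalent form: chronological age minus the degree of
    prematurity (days between birth and the due date)."""
    chronological = (test_date - birth_date).days
    prematurity = (due_date - birth_date).days
    return chronological - prematurity
```

either form yields the same corrected age, which is then used to look up age-reference norms for scoring.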
cumulative antenatal risk

consistent with the fetal programming hypothesis, our chief variable of interest was cumulative antenatal risk, operationalized as an index comprised of equally weighted complications. we included nine major complications with sample frequency > 5% (see table 2) and with well documented relationships to unfavorable neonatal outcome or neurobehavioral sequelae. hence, for each participant we summed the identified complications out of the nine relatively common antenatal risks, with the following sample distribution: 0 (15%), 1 (40.62%), 2 (27.50%), 3 (13.75%), and 4 (3.10%). no cases were observed with five or more complications. the correlations between pairs of antenatal complications were moderate at best, suggesting that the information provided by discrete complications was not redundant. the strongest relationship was observed between diagnosis of histological chorioamnionitis and pprom latency duration (time between membrane rupture and birth) > 12 h (r = 0.489; p < 0.001).

cumulative perinatal risk

a cumulative perinatal risk score was also computed and was included as a covariate. nine major perinatal complications with sample frequency > 5% provided the basis for the perinatal summary score (see table 2). abnormal presentation was not included because this complication is thought to exert little influence on long-term outcome (eide et al. 2005) and because its potentially unfavorable outcome effects are believed to be the result of confounding with the effects of prematurity or gestational age (ismail et al. 1999). the latter variable was taken into account as a separate risk factor in this study. background medical information was obtained retrospectively from the mother's labor and delivery hospitalization as well as the nicu records.
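the cumulative risk indices above (equally weighted 0/1 complications summed per participant, plus the sample distribution of scores) can be sketched as follows; the field names and helper functions are illustrative assumptions, not the study's actual coding scheme:

```python
from collections import Counter

# the nine antenatal complications named in the text (labels are illustrative)
ANTENATAL_COMPLICATIONS = [
    "placental_abruption", "placenta_previa", "chorioamnionitis",
    "diabetes", "hypertension_in_pregnancy", "hellp_syndrome",
    "hypothyroidism", "pprom_over_12h", "iugr",
]

def antenatal_risk_score(record):
    """Equally weighted count of complications present (0/1 each)."""
    return sum(1 for c in ANTENATAL_COMPLICATIONS if record.get(c))

def risk_distribution(records):
    """Sample distribution of the cumulative score, as reported in the text."""
    return Counter(antenatal_risk_score(r) for r in records)
```

the perinatal summary score would be computed identically over the nine perinatal complications.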
intra-class correlations (icc-2; shrout and fleiss 1979) for the antenatal and perinatal complications composite scores were excellent, owing in part to the ease and efficiency of searching electronic records. based on two independent and trained medical records reviewers, the icc equaled or approached unity for ten cases (icc[9] = 1.00 and 0.98 for the antenatal and perinatal composites, respectively). construct validity of the two summary scores was established in the singletons subsample (n = 90) by examination of the associations between the cumulative antenatal risk score and birth weight, an index of fetal growth and well-being, and between the cumulative perinatal risk score and both length of nicu stay and days on the ventilator, two indexes of need for medical intervention. in addition to the 90 singletons and seven single members of twin pairs, our sample included 27 sets of co-twins and three sets of co-triplets. to capture the correlation between participants within sets of multiples, we used spss 24.0 mixed (maximum likelihood) to fit separate linear mixed effects models for each outcome variable, with multiplicity (cases from the same birth mother) as a random effect. thus, the individual multiples nested within each set were conceptualized as replications and were given a random block number that was unique but identical for set members. in contrast, singleton children or multiples without an evaluated co-multiple had no replicates and were considered a random block of size 1. this model enabled both co-multiples and singletons to be used in the same analysis without either violating independence assumptions or, alternatively, discarding important information by including only a single member of each set. the variable of interest, antenatal risk (the sum of antenatal complications), was entered together with the sociodemographic covariates, sex and ses (hollingshead 1975), into the model.
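the interrater reliability check above used icc-2 (shrout and fleiss 1979). a minimal sketch of icc(2,1), the two-way random effects, absolute agreement, single-rater form, for a cases-by-reviewers table is shown below; the function name and toy data are assumptions, and the authors' exact computation (e.g., via spss) may differ:

```python
def icc2_1(ratings):
    """ICC(2,1) per Shrout & Fleiss (1979): two-way random effects,
    absolute agreement, single rater. `ratings` is a list of rows,
    one per case, each with one score per reviewer."""
    n = len(ratings)       # cases
    k = len(ratings[0])    # reviewers
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                          # between-case mean square
    ms_c = ss_cols / (k - 1)                          # between-reviewer mean square
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

with two reviewers in perfect agreement the statistic equals 1.0, matching the near-unity values reported in the text.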
the hollingshead index is a composite score reflecting family social status. the score is based on four factors: parental marital status, employment status, educational attainment, and occupational prestige. two additional covariates were entered to capture the influence of perinatal risk: gestational age and the perinatal complications summary score (see table 2). birth weight was highly correlated with gestational age (r = 0.82; p < 0.001) and was therefore not included. hence, the effect of antenatal risk on readiness was statistically adjusted for the influence of four covariates, with all five predictors entered simultaneously into the model. information about bivariate correlations among the five predictors may be found in online resource 4. one should note here that a fifth covariate, iq test edition, was used in analyses of cognitive outcome. the predicted variables were the cognitive (prorated fsiq) and pre-academic (celf-p2 cl score, wj-iii ap subtest score, and pdms-2 vmi score) measures of kindergarten readiness, corrected for degree of prematurity. information about the correlations between the five predictors and four outcome measures is presented in online resource 5. three (1.9%) of the 160 participants were unable to complete any of the tasks included in this evaluation (two cases with cp and a single case without significant neurological findings who required > 4.5 months on the respirator at birth). of the 148 children without significant brain injury who completed at least one task, a minority failed to complete the tasks needed to obtain a score on one or more of the four pre-academic performance indices (n = 1 [0.68%], 8 [5.4%], 14 [9.4%], and 6 [4.0%] cases for fsiq, cl, ap, and vmi, respectively). due to the children's young age, it was difficult, if not impossible, to ascertain the causes of failure to attempt or follow instructions for specific subtests.
examination of the correlates of the sum of incomplete subtests per participant revealed significant associations neither with sociodemographic variables (r[158] = −.102 and −.100 for ses and sex; p = .199 and .207, respectively) nor with preschool attendance (r[155] = 0.084; p = 0.296). in contrast, a significant inverse relationship between the number of incomplete subtests and the fsiq was observed (r[156] = −.48, p < 0.001). to avoid bias resulting from the potential influence of the missing subtest scores, we applied multiple imputation to replace these performance data. the results of data analyses are reported both before and after exclusion of cases with significant brain injury. prior to data analyses, interactions between sex and the remaining predictors were examined for all outcome measures. as none of the interactions were significant (all p values > .15), the reduced models were used. follow-up analyses were conducted to explore whether any observed associations between antenatal risk and kindergarten readiness were attributable to a particular class of complications. hence, we grouped the nine antenatal complications comprising the summary score (table 2) into four categories. these four groupings were based on shared etiological pathways, or shared antepartum symptoms coupled with at least some shared antecedents. specifically, the categories comprised complications associated with (1) intra-amniotic infection and inflammation (histologic chorioamnionitis and membranes ruptured > 12 h; e.g., stock et al. 2015); (2) placental insufficiency (hypertension in pregnancy, hellp syndrome, and iugr; e.g., stepan et al. 2005); (3) maternal endocrine dysfunction (hypothyroidism and diabetes; e.g., haddow et al. 2016); or (4) uteroplacental bleeding (abruption and previa; e.g., berhan 2014; getahun et al. 2006). category scores were then assigned to participants based on the sum of complications accumulated within each class of antenatal risk.
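the multiple imputation step above can be sketched as stochastic-regression imputation: fit a regression on the observed cases, then fill each missing score m times with a predicted value plus random noise, yielding m completed datasets. the single-predictor scheme and all names below are simplifying assumptions, since the text does not specify the imputation model:

```python
import random
import statistics

def multiple_impute(x, y, m=5, seed=0):
    """Stochastic-regression multiple imputation for one outcome `y`
    with missing entries (None), using predictor `x`.
    Returns m completed copies of y."""
    rng = random.Random(seed)
    pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    # ordinary least squares on the observed pairs
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((xi - mx) ** 2 for xi in xs)
    b = sum((xi - mx) * (yi - my) for xi, yi in pairs) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in pairs]
    sd = statistics.pstdev(resid)
    # draw m completed datasets; noise reflects residual uncertainty
    completed = []
    for _ in range(m):
        filled = [yi if yi is not None else a + b * xi + rng.gauss(0, sd)
                  for xi, yi in zip(x, y)]
        completed.append(filled)
    return completed
```

analyses would then be run on each completed dataset and the results pooled.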
the data were then reanalyzed using the same sociodemographic and medical variables as covariates; however, we substituted the four sums of risk scores for the single composite sum of nine antenatal complications. table 3 shows analyses of performance for the total sample and for the subsample without significant brain injury cases. as shown, the antenatal complications score was inversely related to the fsiq, both before (p = 0.002) and after (p = 0.003) exclusion of brain injury cases. analyses of pre-academic performance revealed similar associations between antenatal risk and both cl and ap scores, before (p = 0.001 and p = 0.001, respectively) and after (p = 0.002 and p = 0.005, respectively) excluding brain injury cases. no significant relationships were observed between cumulative antenatal risk and the vmi, yet the vmi was linked to gestational age (p = 0.042 and 0.012).

[figure: added variable plots depicting relationships between residualized antenatal risk scores and outcome]

[table 3 footnotes:]
e. of the remaining 157 cases, two failed to obtain a score on the fsiq (one with brain injury), nine on cl (3 with brain injury), seventeen on ap (three with brain injury), and six on the vmi
f. of the 148 "non-neurological" cases completing tasks required for a score on at least one of the four outcome measures, one (0.68%) could not obtain a score on the fsiq, six (4.05%) on cl, fourteen (9.45%) on ap, and six (4.05%) on the vmi
g. computation of δr2 based on snijders and bosker (1999, pp. 102-103)
h. antenatal complications score is a composite of presence (= 1) vs. absence (= 0) of nine complications: placental abruption, placenta previa, chorioamnionitis, diabetes, hypertension in pregnancy, hellp syndrome (hemolysis, elevated liver enzymes, low platelet count), hypothyroidism, preterm premature rupture of the membranes (pprom) > 12 h, and intrauterine growth restriction
i. due to the paucity of cases with four antenatal complications, analyses were repeated with three and four complications combined into a single category

table 4 shows the relative contribution of each of the four categories of antenatal risk to the four performance measures; the proportion of variance explained (δr2) is provided for statistical effects with p < 0.10. as the table reveals, intra-amniotic infection risk was significantly related to the fsiq (p = 0.016), cl (p = 0.043), and ap (p = 0.012), whereas conditions associated with placental insufficiency were significantly related to cl (p = 0.047), with trends for associations with the fsiq and ap (p = 0.071 and 0.064, respectively). maternal endocrine dysfunction was associated with cl (p = 0.040), and disorders associated with uteroplacental bleeding were significantly related to the fsiq (p = 0.048). following statistical adjustment for sociodemographic and perinatal confounds, the sum of nine relatively common antenatal complications remained a significant source of variability in preterm-born preschoolers' cognitive and academic performance. exploration of the relative outcome contribution of four classes of antenatal risk revealed that complications associated with intra-amniotic infection, placental insufficiency, and uteroplacental bleeding accounted for 4.8%, 2.3%, and 3.4% of iq variance, respectively, altogether 10.5% of variability in kindergarten cognitive readiness. similarly, complications associated with maternal endocrine dysfunction, intra-amniotic infection, and placental insufficiency accounted for 3.8%, 3.3%, and 2.7% of the variance in language skills, respectively, altogether 9.8% of variability in literacy readiness. complications associated with intra-amniotic infection risk and placental insufficiency contributed 5.9% and 2.5% of the variance in early number concepts, respectively, a combined share of 8.4% of variability in numeracy readiness.
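the proportion of variance explained (δr2) reported above can be illustrated as the increment in r2 when a risk-category score is added to a covariate-only model, in the spirit of snijders and bosker's formula. the plain ols sketch below is an illustrative assumption: it ignores the mixed-model random effect actually used in the analyses, and all names and toy data are hypothetical:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with intercept (X may be 1-D or 2-D)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

def delta_r2(covariates, risk_score, y):
    """Increment in explained variance when a risk-category score is
    added on top of the covariate-only model."""
    base = r_squared(covariates, y)
    full = r_squared(np.column_stack([covariates, risk_score]), y)
    return full - base
```

summing the δr2 values across categories gives the aggregate shares (e.g., 10.5% for cognitive readiness) quoted in the text.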
hence, when considered separately, each of the four risk categories accounts for a small slice of outcome variance in one or more pre-academic domains (except visual-motor integration). yet in aggregate they accounted for 8.4-10.5% of the variance in kindergarten readiness of preschoolers free of major handicaps, consistent with effect sizes of moderate magnitude (cohen 1992). interestingly, amongst the four categories of antenatal risk examined here, intra-amniotic infection was the most consistent contributor to kindergarten readiness. these findings are compatible with recent evidence that inflammation, including chorioamnionitis, contributes to preterm cns injury and is also an independent risk factor for brain injury in the term infant (yellowhair et al. 2018). these results are also consistent with reports of a marked increase in the probability of unexplained cerebral palsy in the presence of antenatal inflammation-infection (horvath et al. 2012). in this context, one should note that a sizeable proportion of the mothers in our sample received antibiotic prophylaxis for prevention of early onset neonatal infection (see online resource 2). in a recent integrative review of the literature, braye et al. (2018) highlighted the observed decrease in incidence of early onset infection since the introduction of intrapartum antibiotic prophylaxis, as well as findings of randomized controlled studies documenting effectiveness. at the same time, however, braye and colleagues emphasized that the longer-term health implications of prophylaxis for early onset infection are unknown. it is difficult, therefore, if not impossible, to draw conclusions regarding the potential implications of antibiotic prophylaxis on kindergarten readiness until such data become available. nonetheless, our findings revealed that each of the four classes of antenatal risk studied here contributed significantly to explaining performance variance on one or more kindergarten readiness domains.
the putative influence of each antenatal risk category on brain maturation trajectory and cognitive-behavioral development may be conceptualized within the broad framework of fetal programming of disease (buss et al. 2012; zeltser and leibel 2011; myatt 2006; andersen et al. 2014; miller et al. 2016; godfrey 2002). however, the biological mechanisms mediating the relationship between cumulative antenatal risk and kindergarten readiness require specification. antenatal stressors lead to placental adaptive responses to variations in the maternal-fetal environment (myatt 2006). these responses, in turn, are followed by fetal adaptations expressed via vascular, metabolic, or endocrine changes that permanently modify bodily structure or function (hocher et al. 2001). the precise nature and sequence of the biological changes mediating deficits in cognitive-behavioral functioning have yet to be elucidated. to accomplish this goal, future investigations should incorporate measures of potential sequential mediators, including indexes of placental size or function, intrauterine cerebral development, and postnatal brain structure or function. our findings are consistent with zeltser and leibel's (2011) observations that dissimilar intrauterine stress factors may nonetheless lead to similar fetal outcomes because they activate related mechanisms of placental adaptation (fowden et al. 2009), which, in turn, shape the trajectory of fetal brain development (buss et al. 2012). the thesis that diverse insults converge on similar unfavorable outcomes (a relationship mediated by fetal brain programming) is compatible with the notion that the amalgamation of antenatal complications into a single index or several classes of risk may offer an important tool for the study of individual differences among preterm-born children in the severity of neurocognitive deficits.
antenatal complications contributed less than ses to explaining outcome variance, although effect magnitude was typically moderate for both variables. girls outperformed boys on all measures, consistent with other reports of female outcome advantage following preterm birth (lauterbach et al. 2001). similar to foster-cohen et al. (2010), we did not find significant associations between the sum of perinatal complications and kindergarten readiness in this young age group. however, gestational age, often considered a proxy for perinatal risk, was found to be linked to development of visual-motor integration skills even after we excluded cases with evidence of significant brain injury from the analyses. additionally, there was a weak trend (p < 0.15; table 3) for a relationship between gestational age and global intellectual skills. the absence of the oft-reported significant relationships between the degree of prematurity and the remaining outcome measures (e.g., heuvelman et al. 2018; a recent epidemiological investigation) was somewhat unexpected. it is possible that within a restricted gestational age range, where values ≥ 34 weeks were truncated, the relationships between gestational age and preschool performance are more difficult to demonstrate, whereas the adverse influence of other risk factors (e.g., antenatal complications) becomes more apparent.

[table 4 footnotes:]
f. perinatal complications score is a composite score reflecting presence (= 1) vs. absence (= 0) of nine complications: anemia, bronchopulmonary dysplasia, bacterial infection, hyaline membrane disease, hyperbilirubinemia, hypoglycemia, intracranial pathology, patent ductus arteriosus, and supplemental oxygen requirement following discharge
g. intra-amniotic infection risk score is a composite of two complications believed to share etiological pathways: preterm premature rupture of membranes (pprom) > 12 h and histological chorioamnionitis
h. placental insufficiency score is a composite of three complications thought to share etiological pathways: maternal hypertension, hellp syndrome (hemolysis, elevated liver enzymes, low platelet count), and intrauterine growth restriction
i. maternal endocrine dysfunction score is a composite of two complications found to share some etiological pathways: maternal diabetes and hypothyroidism
j. uteroplacental bleeding risk score is a composite of two complications sharing antenatal symptomatology: placental abruption and placenta previa

interestingly, the absence of gestational age effects on several performance measures employed here seems consistent with findings from a recent meta-analysis of the long-term cognitive and academic performance of children born with various degrees of prematurity (allotey et al. 2018). the quantitative integration revealed a minuscule, if any, effect of lower gestational age: children born between 28 and 34 weeks performed almost as poorly as those born < 28 weeks. as noted earlier, our findings of an association between cumulative antenatal risk and kindergarten readiness are consistent with the fetal programming hypothesis. researchers of fetal programming have typically studied the role of the prenatal environment without taking into consideration the perinatal or postnatal environment (grant et al. 2015). in the current study, however, we statistically adjusted cumulative antenatal risk for perinatal complications and for socioeconomic status in an effort to account for confounding influences occurring after birth. limitations of this study include the retrospective nature of the participant identification and recruitment component. medical data were also collected retrospectively, yet interrater reliability for information obtained from medical records was excellent.
the difficulty of a small number of participants to complete various tasks is not surprising, given the combination of young age and increased biomedical risk in our sample. the percentage of missing outcome data for the 148 children without significant neurological injury completing at least one task ranged from negligible (0.68%) for cognitive readiness (fsiq) to moderate (9.45%) for numerical readiness (ap). the amount of missing data for three of the four outcome measures was nonetheless small (< 5.0%). although our sample included children with a wide range of socioeconomic circumstances, the mean ses and educational characteristics of the sample revealed a greater representation of middle-class strata, thereby reducing the potentially confounding effects of socioeconomic adversity on kindergarten readiness. nonetheless, because generalizability to lower strata was to some extent traded off for improved internal validity, our findings may have underestimated statistical effects in samples with greater representation of the lower end of the socioeconomic scale. the cross-sectional nature of this investigation precluded examination of the generalizability of the findings to elementary school readiness and beyond. in the current investigation, antenatal risk accounted for up to 10.5% of the variance in kindergarten readiness. antenatal risk estimation was based on a simple frequency count of common complications. a more sensitive measure of risk may be developed to take into account the severity of discrete complications within each of the four antenatal risk categories examined here. increased sensitivity, in turn, may serve to enhance the magnitude of the statistical effects observed here between antenatal risk and emergent academic skills.
as noted above, we statistically adjusted for degree of gestational maturity and for the presence of nine common perinatal risk factors (table 2) in order to estimate the unique portion of outcome variance that is attributable to cumulative antenatal risk. we further examined the data with and without 'neurological' cases, based on both ultrasound evidence obtained in the nicu and subsequent evidence of cerebral palsy. nonetheless, one may argue that the full effect of confounding perinatal risk factors such as chronic lung disease or germinal matrix hemorrhage may become evident only in the longer term, when academic performance demands are increased. a longitudinal study of educational attainment is best suited to address this issue. the significance of early academic skills as the foundation of scholastic achievement has been established in multiple follow-up investigations of typical kindergartners into the early school years (e.g., pagani et al. 2010; romano et al. 2010; grissmer et al. 2010). moreover, developmental continuity has been demonstrated between preschool quantitative and oral-language skills and elementary school attainment in math and reading, in a variety of student populations (e.g., davison et al. 2011; manfra et al. 2014; manfra et al. 2017; nguyen et al. 2016). the existence of robust developmental continuity between pre-academic and academic skill levels supports the notion that the same factors that explain variability in emergent academic skills also account for variability in mastery of both the reading process and basic arithmetic in the early elementary school years. based on our findings, it is therefore likely that differences in scholastic achievement among graduates of the nicu are partly explained by cumulative antenatal risk. a follow-up study of preterm-born preschoolers, extending to early school age and beyond, will be needed to support this thesis.
references:
- development of preschool and academic skills in children born very preterm
- comparative features of comprehensive achievement batteries
- cognitive, motor, behavioural and academic performances of children born preterm: a meta-analysis and systematic review involving 64,061 children
- psychiatric disease in late adolescence and young adulthood: foetal programming by maternal hypothyroidism
- maternal thyroid function in early pregnancy and child neurodevelopmental disorders: a danish nationwide case-cohort study
- in utero programming of chronic disease
- hellp syndrome and the effects on the neonate
- visuospatial and verbal fluency relative deficits in "complicated" late-preterm preschool children
- cognitive deficit in preschoolers born late-preterm
- one hundred consecutive infants born at 23 weeks and resuscitated
- predictors of perinatal mortality associated with placenta previa and placental abruption: an experience from a low income country
- effectiveness of intrapartum antibiotic prophylaxis for early-onset group b streptococcal infection: an integrative review
- fetal programming of brain development: intrauterine stress and susceptibility to psychopathology
- cognitive development in low risk preterm infants at 3-4 years of life
- early risk, attention, and brain activation in adolescents born preterm
- a power primer
- prenatal psychobiological predictors
- associations between preschool language and first grade reading outcomes in bilingual children
- neonatal outcomes associated with placental abruption
- breech delivery and intelligence: a population-based study of 8,738 breech infants
- peabody developmental motor scales
- high prevalence/low severity language delay in preschool children born very preterm
- placental efficiency and adaptation: endocrine regulation
- associations of existing diabetes, gestational diabetes, and glycosuria with offspring iq and educational attainment: the avon longitudinal study of parents and children
- previous cesarean delivery and risks of placenta previa and placental abruption
- the role of the placenta in fetal programming: a review
- prenatal programming of postnatal susceptibility to memory impairments: a developmental double jeopardy
- fine motor skills and early comprehension of the world: two new readiness indicators
- free thyroxine during early pregnancy and risk for gestational diabetes
- four-factor index of social status
- prenatal, perinatal and neonatal risk factors for intellectual disability: a systemic review and meta-analysis
- comparison of vaginal and cesarean section delivery for fetuses in breech presentation
- meta-analysis of the association between preterm delivery and intelligence
- a new and improved population-based canadian reference for birth weight for gestational age
- neonatal hypoxic risk in preterm birth infants: the influence of sex and severity of respiratory distress on cognitive recovery
- the association between maternal subclinical hypothyroidism and growth, development, and childhood intelligence: a meta-analysis
- hypertensive disorders of pregnancy and risk of neurodevelopmental disorders: a systematic review and meta-analysis protocol
- associations between counting ability in preschool and mathematic performance in first grade among a sample of ethnically diverse, low-income children
- preschool writing and premathematics predict grade 3 achievement for low-income, ethnically diverse children
- the consequences of fetal growth restriction on brain structure and neurodevelopmental outcome
- cognitive function after intrauterine growth restriction and very preterm birth
- placental adaptive responses and fetal programming
- which preschool mathematics competencies are most predictive of fifth grade achievement? early child research quarterly
- school readiness and later achievement: a french canadian replication and extension
- neonatal outcome in preterm deliveries before 34-week gestation: the influence of the mechanism of labor onset
- school readiness and later achievement: replication and extension using a nationwide canadian survey
- perinatal complications and aging indicators by midlife
- intraclass correlations: uses in assessing rater reliability
- multilevel analysis: an introduction to basic and advanced multilevel modeling
- chorioamnionitis occurring in women with preterm rupture of the fetal membranes is associated with a dynamic increase in mrnas coding cytokines in the maternal circulation
- perceptual-motor, visual and cognitive ability in very low birthweight preschool children without neonatal ultrasound abnormalities
- intracranial hemorrhage: germinal matrix-intraventricular hemorrhage of the premature infant
- cumulative biomedical risk and social cognition in the second year of life: prediction and moderation by responsive parenting
- wechsler primary and preschool scale of intelligence. united states of america: the psychological corporation
- the effect of placenta previa on fetal growth and pregnancy outcome, in correlation with placental pathology
- prenatal, perinatal, and neonatal risk factors for specific language impairment: a prospective pregnancy cohort study
- clinical evaluation of language fundamentals: preschool-2
- woodcock-johnson tests of achievement
- preclinical chorioamnionitis dysregulates cxcl1/cxcr2 signaling throughout the placental-fetal-brain axis
- roles of the placenta in fetal brain development

acknowledgements: the authors thank beth kring and tammy swails for their help in data collection. evaluations and testing materials were funded in part by the merrill-palmer skillman institute.
none of the authors has a known conflict of interest concerning this manuscript. funding: this work was supported in part by funding from the merrill palmer skillman institute, wayne state university, 71 east ferry, detroit, mi 48202.

key: cord-025366-haf542y0
title: vaccine safety
authors: offit, paul a.; destefano, frank
date: 2012-11-07
journal: vaccines
doi: 10.1016/b978-1-4557-0090-5.00076-8
sha:
doc_id: 25366
cord_uid: 25366

during the past 100 years, pharmaceutical companies have made vaccines against pertussis, polio, measles, rubella, and haemophilus influenzae type b (hib), among others (table 76-1). as a consequence, the number of children in the united states killed by pertussis decreased from 8,000 each year in the early 20th century to fewer than 20; the number paralyzed by polio from 15,000 to 0; the number killed by measles from 3,000 to 0; the number with severe birth defects caused by rubella from 20,000 to 0; and the number with meningitis and bloodstream infections caused by hib from 20,000 to fewer than 300. vaccines have been among the most powerful forces in determining how long we live. 1 but the landscape of vaccines is also littered with tragedy: in the late 1800s, starting with louis pasteur, scientists made rabies vaccines using cells from nervous tissue (such as animal brains and spinal cords); the vaccine prevented a uniformly fatal infection, but the rabies vaccine also caused seizures, paralysis, and coma in as many as 1 of every 230 people who used it. 2-5 in 1942, the military injected hundreds of thousands of american servicemen with a yellow fever vaccine. to stabilize the vaccine virus, scientists added human serum. unfortunately, some of the serum came from people unknowingly infected with hepatitis b virus. as a consequence, 330,000 soldiers were infected, severe hepatitis developed in 50,000, and 62 died. 6-9 in 1955, five companies made jonas salk's new formaldehyde-inactivated polio vaccine.
however, one company, cutter laboratories of berkeley, california, failed to completely inactivate poliovirus with formaldehyde. because of this problem, 120,000 children were inadvertently injected with live, dangerous poliovirus; in 40,000, mild polio developed, 200 were permanently paralyzed, and 10 were killed. it was one of the worst biological disasters in american history. 10 vaccines have also caused uncommon but severe adverse events not associated with production errors. for example, acute encephalopathy after whole-cell pertussis vaccine, 11, 12 acute arthropathy following rubella vaccine, 13-17 thrombocytopenia following measles-containing vaccine, 18, 19 guillain-barré syndrome (gbs) after swine flu vaccine, 20 paralytic polio following live attenuated oral polio vaccine, 21 anaphylaxis following receipt of vaccines containing egg proteins (ie, influenza and yellow fever vaccines), 22, 23 severe or fatal viscerotropic disease following yellow fever vaccine, 24 possible narcolepsy following a squalene-adjuvanted influenza vaccine, 25 and severe allergic reactions associated with gelatin contained in the measles-mumps-rubella vaccine 26 are problems associated with the use of vaccines, albeit rarely. as vaccine use increases and the incidence of vaccine-preventable diseases is reduced, vaccine-related adverse events become more prominent in vaccination decisions (figure 76-1). even unfounded safety concerns can lead to decreased vaccine acceptance and resurgence of vaccine-preventable diseases, as occurred in the 1970s and 1980s as a public reaction to allegations that the whole-cell pertussis vaccine caused encephalopathy and brain damage (figure 76-1). recent outbreaks of measles, mumps, and pertussis in the united states are important reminders of how immunization delays and refusals can result in resurgences of vaccine-preventable diseases.
27-30 because vaccines are given to healthy children and adults, a higher standard of safety is generally expected of immunizations compared with other medical interventions. tolerance of adverse reactions to pharmaceutical products (eg, vaccines, contraceptives) given to healthy people-especially healthy infants and toddlers-to prevent certain conditions is substantially lower than to products (eg, antibiotics, insulin) used to treat people who are sick. 31 this lower tolerance for risks from vaccines translates into a need to investigate the possible causes of much rarer adverse events after vaccinations than would be acceptable for other pharmaceutical products. for example, side effects are essentially universal for cancer chemotherapy, and 10% to 30% of people receiving high-dose aspirin therapy experience gastrointestinal symptoms. 32 safety monitoring can be done before and after vaccine licensure, with slightly different goals based on the methodological strengths and weaknesses of each step. 33-36 although the general principles are similar irrespective of country, the specific approaches may differ because of factors such as how immunization services are organized and the level of resources available. 37 vaccines, similar to other pharmaceutical products, undergo extensive safety and efficacy evaluations in the laboratory, in animals, and in phased human clinical trials before licensure. 38, 39 phase 1 trials usually include fewer than 20 participants and can detect only extremely common adverse events. phase 2 trials generally enroll 50 to several hundred people. 
when carefully coordinated, as in the comparative infant diphtheria and tetanus toxoids and acellular pertussis (dtap) vaccine trials, 40 important insight into the relationship between concentration of antigen, number of vaccine components, formulation, effect of successive doses, and profile of common reactions can be drawn and can affect the choice of the candidate vaccines for phase 3 trials. 41, 42 sample sizes for phase 3 vaccine trials are based principally on efficacy considerations, with safety inferences drawn to the extent possible based on the sample size (approximately 10^3 to 10^5) and the duration of observation (often < 30 days). 41 typically only observations of common local and systemic reactions (eg, injection site swelling, fever, fussiness) have been feasible. the experimental design of most phase 1 to 3 clinical trials includes a control group (a placebo or an alternative vaccine) and detection of adverse events by researchers in a consistent manner "blinded" to which vaccine the patient received. this allows relatively straightforward inferences on the causal relationship between most adverse events and vaccination. 43 several ways of enhancing prelicensure safety assessment of vaccines have been developed. one of these ways includes the brighton collaboration (www.brightoncollaboration.org), established to develop and implement globally accepted standard case definitions for assessing adverse events following immunizations in prelicensure and postlicensure settings. 44 without such standards, it was difficult if not impossible to compare and collate safety data across trials in a valid manner. for example, in the large multisite phase 3 infant dtap trials, definitions of high fever across trials varied by temperature (39.5°c vs 40.5°c), measurement (oral vs rectal), and time (measured at 48 vs 72 hours).
45 this was unfortunate because standardized case definitions had been developed in these trials for efficacy but not for safety, even though the safety concerns provided the original impetus for the development of dtap. 46, 47 the brighton case definitions for each adverse event are further arrayed by the level of evidence provided (insufficient, low, intermediate, and highest); therefore, they also can be used in settings with fewer resources (eg, studies in less developed settings or postlicensure surveillance). another of the recent advances to prelicensure safety assessments of vaccines has stemmed from the recognition of the need for much larger safety and efficacy trials before licensure. because of pragmatic limits on the sample sizes of prelicensure studies, there are inherent limitations to the extent to which they can detect very rare, yet real, adverse events related to vaccination. even if no adverse event has been observed in a trial of 10,000 vaccinees, one can only be reasonably certain that the real incidence of the adverse event is no higher than 1 in 3,333 vaccinees. 48 thus, to be able to detect an attributable risk of 1 per 10,000 vaccinees (eg, such as the approximate risk found for intussusception in the postlicensure evaluation of rotashield vaccine), a prelicensure trial of at least 30,000 vaccinees and 30,000 control subjects is needed. both second-generation rotavirus vaccines (rotateq and rotarix) were subjected to phase 3 trials that included at least 60,000 infants. 49, 50 while these trials were adequately powered to detect the problem with intussusception found following rotashield, in general, the cost of such large trials might limit the number of vaccine candidates that go through this process in the future. 51 because rare reactions, reactions with delayed onset, or reactions in subpopulations may not be detected before vaccines are licensed, postlicensure evaluation of vaccine safety is critical. 
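the "1 in 3,333" bound and the 30,000-subject trial size quoted above both follow from the statistical rule of three: if zero events are observed among n subjects, the approximate 95% upper confidence limit on the true event rate is 3/n. a minimal sketch (function names are ours, not from the chapter):

```python
import math

def rule_of_three_upper_bound(n):
    """Approximate 95% upper confidence bound on an event rate
    when 0 events were observed among n subjects."""
    return 3.0 / n

def sample_size_for_zero_events(max_rate):
    """Smallest trial size n such that observing 0 events bounds
    the true rate below max_rate (with ~95% confidence)."""
    return math.ceil(3.0 / max_rate)

# 0 events in a 10,000-person trial: the true rate could still be
# as high as about 1 in 3,333
print(round(1 / rule_of_three_upper_bound(10_000)))  # 3333

# to rule out an attributable risk of 1 per 10,000 (as with
# intussusception after RotaShield), a trial needs at least:
print(sample_size_for_zero_events(1 / 10_000))  # 30000
```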
historically, this evaluation has relied on passive surveillance and ad hoc epidemiologic studies, but, more recently, phase 4 trials and preestablished large linked databases have improved the methodological capabilities to study rare risks of specific immunizations. 43 such systems may detect variation in rates of adverse events by manufacturer 52,53 or specific lot. 54 more recently, clinical centers for the study of immunization safety have emerged as another useful infrastructure to advance our knowledge about safety. 55 in contrast with the methodological strengths of prelicensure randomized trials, however, postlicensure observational studies of vaccine safety pose a formidable set of methodological difficulties. 56 confounding by contraindication is especially problematic for nonexperimental designs. specifically, persons who do not receive vaccine (eg, because of a chronic or transient medical contraindication or low socioeconomic group) may have a different risk for an adverse event than vaccinated persons (eg, background rates of seizures or sudden infant death syndrome (sids) may be higher in unvaccinated people). therefore, direct comparisons of vaccinated and unvaccinated children are often inherently confounded, and teasing this issue out requires understanding of the complex interactions of multiple, poorly quantified factors. informal or formal passive surveillance or spontaneous reporting systems (srss) have been the cornerstone of most postlicensure safety monitoring systems because of their relative low cost of operations. 57-59 the national reporting of adverse events following immunizations can be done through the same reporting channels as those used for other adverse drug reactions, 59 as is the practice in france, 60 76 vaccine manufacturers also maintain srss for their products, which are usually forwarded subsequently to appropriate national regulatory authorities. 
38, 73 in the united states, the national childhood vaccine injury act of 1986 mandated that health care providers report certain adverse events after immunizations. 77 the vaccine adverse events reporting system (vaers) was implemented jointly by the centers for disease control and prevention (cdc) and the us food and drug administration (fda) in 1990 to provide a unified national focus for collection of all reports of clinically significant adverse events, including, but not limited to, those mandated for reporting. 76 the vaers form permits narrative descriptions of adverse events. patients and their parents-not just health care professionals-are permitted to report to vaers, and there is no restriction on the interval between vaccination and symptoms that can be reported. report forms, assistance in completing the form, and answers to other questions about vaers are available on the vaers web site (vaers.hhs.gov). web-based reporting and simple data analyses are also available. a contractor, under cdc and fda supervision, distributes, collects, codes (currently using the medical dictionary for regulatory activities (www.meddramsso.com/index.asp)), and enters vaers reports in a database. reporters of selected serious events are contacted by trained clinical staff on report receipt and are sent letters at 1 year after report receipt to provide additional information about the vaers report, including the patient's recovery. approximately 30,000 vaers reports are now received annually, and these data (without personal identifiers) are also available to the public (at vaers.hhs.gov and at wonder.cdc.gov/vaers.html). several other countries also have substantial experience with passive surveillance for immunization safety.
in 1987, canada developed the vaccine associated adverse event (vaae) reporting system, 67,78 which is supplemented by an active, pediatric hospital-based surveillance system that searches all admissions for possible relationships to immunizations (immunization monitoring program-active, or impact). 79 serious vaae reports are reviewed by the advisory committee on causality assessment consisting of a panel of experts. 80 the netherlands also convenes an annual panel to categorize reports, which are then published. 74 the united kingdom and most members of the former commonwealth use the yellow card system, whereby a reporting form is attached to officially issued prescription pads. 58,63 data on adverse drug (including vaccine) events from several countries are compiled by the world health organization (who) collaborating center for international drug monitoring in uppsala. 81 with so many different passive surveillance systems that collect information on various medical events following vaccination, standardized definitions of vaccine-related adverse events are necessary. in the past, different definitions were developed in brazil, 75 canada, 67 india, 70 and the netherlands. 74 however, implementation of similar standards across national boundaries has been advanced by the international conference on harmonization 82 and the brighton collaboration. 44 vaers often first identifies potential new vaccine safety problems because of clusters of cases in time or space, often with unusual clinical features. for example, in 1999, passive reports to vaers of intussusception among children vaccinated with rotashield was the first postlicensure signal of a problem, 83 leading to epidemiologic studies that verified these findings. 84, 85 similarly, initial reports to vaers of a previously unrecognized serious yellow fever vaccine-associated neurotropic disease 86 and viscerotropic disease 87,88 have since been confirmed elsewhere. 
89 because of the success in detecting these signals, there have been various attempts to automate screening for signals using srss reports. new tools developed for pattern recognition in extremely large databases are beginning to be applied. 90 these include empirical bayesian data mining to identify unexpectedly frequent vaccine-event combinations. 91 vaers has provided some of the first safety data after the introduction of a number of vaccines. 92-95 vaers has also successfully served as a source of cases for further investigation of idiopathic thrombocytopenic purpura after measles-mumpsrubella (mmr) vaccine, 96 encephalopathy after mmr, 67,97 and syncope after immunization. 98 when denominator data on vaccine doses distributed or administered are available from other sources, vaers can be used to evaluate changes in reporting rates over time or when new vaccines replace old vaccines. for example, vaers showed that after millions of doses had been distributed, reporting rates for serious events such as hospitalization and seizures after dtap in toddlers were one third of those after diphtheria and tetanus toxoids and whole-cell pertussis (dtp). 99 because vaers is the only surveillance system covering the entire us population with data available on a relatively timely basis, it is the major means available currently to detect possible new, unusual, or extremely rare adverse events. despite the aforementioned uses, srss for drug and vaccine safety have a number of major methodological weaknesses. underreporting, biased reporting, and incomplete reporting are inherent to all such systems, and potential safety concerns may be missed. 100-102 aseptic meningitis associated with the urabe mumps vaccine strain, for example, was not detected by srss in most countries. 103,104 some increases in adverse events detected by vaers may not be true increases, but instead may be due to increases in reporting efficiency or vaccine coverage. 
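the signal-screening idea can be illustrated with the simplest disproportionality statistic, the proportional reporting ratio (prr); the empirical bayesian data mining cited in the text additionally shrinks such ratios toward 1 when counts are sparse, but the core comparison is the same. all counts below are hypothetical:

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR for one vaccine-event pair in a spontaneous-report database.
    a: reports of the event for the vaccine of interest
    b: reports of all other events for that vaccine
    c: reports of the event for all other vaccines
    d: reports of all other events for all other vaccines
    A PRR well above 1 flags the pair as unexpectedly frequent."""
    return (a / (a + b)) / (c / (c + d))

# hypothetical counts: the event makes up 5% of reports for the
# vaccine of interest but only 1% of reports for all other vaccines
prr = proportional_reporting_ratio(50, 950, 100, 9900)
print(round(prr, 2))  # 5.0
```

a flagged pair is only a signal for clinical review, not evidence of causation, for exactly the reporting-bias reasons the text goes on to describe.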
for example, an increase in gbs reports after influenza vaccination during the 1993 to 1994 season was found to be largely due to improvements in vaccine coverage and increases in gbs independent of vaccination. 105 an increased reporting rate of an adverse event after one hepatitis b vaccine compared with a second brand was likely due to differential distribution of brands in the public vs private sectors, which have differential vaers reporting rates (higher in the public sector). 106 finally, pending litigation resulted in the filing of a large number of vaers reports claiming that vaccines caused autism. 107 perhaps the most important methodological weakness of vaers, however, is that it does not contain the information necessary for formal epidemiologic analyses. such analyses require calculation of the rate of the adverse event after vaccination and a comparison rate among unvaccinated persons. the vaers database, however, provides data only for the number of persons who may have experienced an adverse event following immunization and, even then, only in a biased and underreported manner. vaers lacks data on the denominator of total number of people vaccinated and the corresponding data on number of cases and denominator population of unvaccinated people. sometimes reporting rates can be calculated by using vaers case reports for the numerator and, if available, doses of vaccines administered (or, if unavailable, data on vaccine doses distributed or vaccine coverage survey data) for the denominator. these rates can then be compared with the background rate of the same adverse event in the absence of vaccination, if available. because of underreporting, however, vaers reporting rates will usually be lower than the actual rates of adverse events following immunization. 
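the reporting-rate calculation described above can be sketched as follows (the counts are hypothetical, and the function name is ours):

```python
def reporting_rate_per_100k(reports, doses_administered):
    """Reports per 100,000 doses; because of underreporting this is
    only a lower bound on the true post-vaccination event rate."""
    return 1e5 * reports / doses_administered

# hypothetical: 45 VAERS reports of an event after 3 million doses
rate = reporting_rate_per_100k(45, 3_000_000)
print(rate)  # 1.5 (per 100,000 doses)
# next step, as in the text: compare against the background rate of
# the same event in the absence of vaccination, if one is available
```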
a higher proportion of serious events, such as seizures, that follow vaccinations are likely to be reported to vaers than milder events, such as rash, or delayed events requiring laboratory assessment, such as thrombocytopenic purpura after mmr vaccination. 100 the reporting efficiency or sensitivity of srss can sometimes be estimated if an independent source of cases of specific adverse events following immunization is available to conduct capture-recapture analyses. such an analysis was conducted to estimate that vaers reporting completeness for intussusception following rotashield vaccine was 47%. 108 formal evaluation has been limited by the quality of diagnostic information on vaers reports, especially the probability that a serious event reported to vaers has been diagnosed accurately. of 26 cases reported to vaers in which gbs developed after influenza vaccination during the 1990 to 1991 season, and for which hospital charts were reviewed by an independent panel of neurologists blinded to immunization status, the diagnosis of gbs was confirmed in 22 (85%). 109 intussusception was verified in 88% of vaers reports filed after rotashield vaccination. 83 clinical reviews of vaers reports submitted following 2009 h1n1 influenza vaccine were able to verify 56% of possible gbs reports and 42% of reports of possible anaphylaxis. 110 clinical review verification rates were similar for vaers reports following human papillomavirus vaccination: 57% for gbs and 38% for anaphylaxis. 95 these studies highlight the often crude nature of signals generated by vaers and the difficulty in ascertaining which potential vaccine safety concerns warrant further investigation. the problems with reporting efficiency and potentially biased reporting and the inherent lack of an adequate control group limit the certainty with which conclusions can be drawn. 
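the capture-recapture estimate of reporting completeness mentioned above can be sketched with the two-source lincoln-petersen estimator; the counts below are invented for illustration and are not the intussusception data:

```python
def lincoln_petersen(n1, n2, overlap):
    """Two-source capture-recapture estimate of the total number of
    cases, assuming the two sources find cases independently.
    n1: cases found by source 1 (eg, VAERS reports)
    n2: cases found by source 2 (eg, a hospital record search)
    overlap: cases found by both sources"""
    return n1 * n2 / overlap

# invented counts for illustration only
n_vaers, n_hospital, in_both = 60, 80, 40
estimated_total = lincoln_petersen(n_vaers, n_hospital, in_both)
print(estimated_total)                      # 120.0 estimated true cases
print(f"{n_vaers / estimated_total:.0%}")   # 50% estimated VAERS completeness
```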
recognition of these limitations in large part has helped stimulate the creation of more population-based methods of assessing vaccine safety. vaccines may undergo clinical trials after licensure to assess the effects of changes in vaccine formulation, 111 vaccine strain, 112 age at vaccination, 113 number and timing of vaccine doses, 114 simultaneous administration, 115 and interchangeability of vaccines from different manufacturers on vaccine safety and immunogenicity. 116 unanticipated differential mortality among recipients of high-and regular-titered measles vaccine in developing countries (albeit lower than among unvaccinated children) 117 led to a change in recommendations by the who for the use of such vaccines. 118 to improve the ability to detect adverse events that are not detected during prelicensure trials, some recently licensed vaccines in developed countries have undergone formal phase 4 surveillance studies on populations with sample sizes that have included as many as 100,000 people. these studies usually have used cohorts in managed care organizations (mcos) supplemented by diary or phone interviews. these methods were first used extensively after the licensure of polysaccharide and conjugated hib vaccines. 119-121 large postlicensure studies on safety and efficacy have also been conducted for several other vaccines, including those for dtap, 46 varicella, and herpes zoster. 122, 123 requirements for phase 4 evaluation have even been extended to less frequently used vaccines, such as japanese encephalitis vaccine. 124 historically, ad hoc epidemiologic studies have been used to assess signals of potential adverse events detected by srss, the medical literature, or other mechanisms. 
some examples of such studies include the investigations of poliomyelitis after inactivated 10, 125 and oral 126 polio vaccines, sids after dtp vaccination, 127-130 encephalopathy after dtp vaccination, 131, 132 meningoencephalitis after mumps vaccination, 133 injection site abscesses after vaccination, 134 and gbs after influenza vaccination. 20, 105, 109 the institute of medicine (iom) has compiled and reviewed many of these studies. 11, 135 unfortunately, such ad hoc studies are often costly, timeconsuming, and limited to assessment of a single event or a few events or outcomes. given these drawbacks and the methodological limitations of passive surveillance systems (such as described for vaers), pharmacoepidemiologists began to turn to large databases linking computerized pharmacy prescription (and later immunization records) and medical outcome records. 102 these databases derive from defined populations such as members of mcos, single-provider health care systems, and medicaid programs. such databases cover enrollee populations numbering from thousands to millions, and, because the data are generated from the routine administration of the full range of medical care, underreporting and recall bias are reduced. with denominator data on doses administered and the ready availability of appropriate comparison (ie, unvaccinated) groups, these large databases provide an economical and rapid means of conducting postlicensure studies of safety of drugs and vaccines. 103, [136] [137] [138] [139] the cdc initiated the vaccine safety datalink (vsd) project in 1990 136 to conduct postmarketing evaluations of vaccine safety and to establish an infrastructure allowing for highquality research and surveillance. selection of staff-model prepaid health plans minimized potential biases for more severe outcomes resulting from data generated from fee-for-service claims. currently, eight mcos in the united states participate in the vsd. 
the eight participating mcos comprise a population of more than 9 million members. each mco prepares computerized data files using a standardized data dictionary containing demographic and medical information on their members, such as age and sex, health plan enrollment, vaccinations, hospitalizations, outpatient clinic visits, emergency department visits, urgent care visits, and mortality data, as well as additional birth information (eg, birth weight) when available. other information sources, such as medical chart review; member surveys; and pharmacy, laboratory and radiology data are often used in vsd studies to validate outcomes and vaccination data. there is rigorous attention to the maintenance of patient confidentiality, and each study undergoes institutional review board review. the vsd project's main priorities include evaluating new vaccine safety concerns that may arise from the medical literature, 11,135 from vaers, 85,106 from changes in immunization schedules, 140 or from introduction of new vaccines. 120,121 the creation of near real-time data files has enabled the development of near real-time postmarketing surveillance for newly licensed vaccines and changes in vaccine recommendations. the size of the vsd population also permits separation of the risks associated with individual vaccines from those associated with vaccine combinations, whether given in the same syringe or simultaneously at different body sites. for example, vsd safety monitoring found that the combined mmrv vaccine carried an increased risk of febrile seizures compared with giving mmr and varicella vaccines simultaneously as separate injections. 141 such studies are especially valuable in view of combined pediatric vaccines. 142 more than 130 studies have been or are being performed within the vsd project, 139 including general screening studies of the safety of inactivated influenza vaccines among children and of thimerosal-containing vaccines. 
disease- or syndrome-specific investigations have been or are being performed, including studies investigating autism, multiple sclerosis, thyroid disease, acute ataxia, alopecia, rheumatoid arthritis, asthma, diabetes, and idiopathic thrombocytopenic purpura following vaccination. amid these promises, a few caveats are appropriate. although diverse, the population in the mcos currently in the vsd project is not wholly representative of the united states in terms of geography or socioeconomic status. more important, because of the high coverage attained in the mcos for most vaccines, few nonvaccinated control subjects are available. therefore, vsd studies often rely on risk-interval analyses (eg, to study the question of whether outcome "x" is more common in period "y" following vaccination compared with other periods) (table 76-2). 143 this approach, although powerful for evaluating acute adverse events, has limited ability to assess associations between vaccination and adverse events with delayed or insidious onset (eg, autism). the vsd project also cannot easily assess mild adverse events (such as fever) that do not always come to medical attention. 136 finally, because vaccines are not delivered in the context of randomized, controlled trials, the vsd project may not be able to successfully control for confounding and bias in each analysis, 144 and inferences on causality may be limited. 145 despite these potential shortcomings, the vsd project provides an essential, powerful, and cost-effective complement to ongoing evaluations of vaccine safety in the united states. 139 in view of the methodological and logistic advantages offered by large linked databases, the united kingdom and canada also have developed systems linking immunization registries with medical files.
79,103 because of the relatively limited number of vaccines used worldwide and the costs associated with establishing and operating these large databases, it is unlikely that all countries will be able to or need to establish their own. these countries should be able to draw on the scientific base established by the existing large linked databases for vaccine safety and, if the need arises, conduct ad hoc epidemiologic studies. more recently, there has been an increasing awareness that the usefulness of srss as potential disease registries and the immunization safety infrastructure can be usefully augmented by tertiary clinical centers. well-organized, well-identified clinical infrastructures for the study of rare vaccine safety outcomes were first developed in certain regions in italy 146 and australia. 147, 148 in the united states, the cdc established the clinical immunization safety assessment (cisa) network in 2001 with the following primary goals: (1) to develop research protocols for clinical evaluation, diagnosis, and management of adverse events following immunization (aefi); (2) to improve the understanding of aefi at the individual level, including determining possible genetic and other risk factors for predisposed persons and high-risk subpopulations; (3) to develop evidence-based algorithms for vaccination of persons at risk of serious adverse events following immunization; and (4) to provide a resource of subject matter experts for clinical vaccine safety inquiries. 36 the cisa investigators bring in-depth clinical, pathophysiologic, and epidemiologic expertise to assessing causal relationships between vaccines and adverse events and to understanding the pathogenesis of adverse events following vaccinations. [table 76-2. steps of a risk-interval analysis: 1. define a biologically plausible risk interval for the adverse event after vaccination (eg, 30 days after each dose). 2. partition observation time for each child in the study into periods within and outside of risk intervals, and sum respectively (eg, for a child observed for 365 days during which three doses of vaccine were received, total risk-interval time = 3 × 30 person-days = 90 person-days; total nonrisk-interval time = 365 − 90 = 275 person-days; timeline: birth, dose 1, dose 2, dose 3, day 365). 3. add (a) total risk-interval and nonrisk-interval observation times for each child in the study (person-time observed; for mathematical convenience, the following example uses 100 and 1,000 person-months of observation) and (b) adverse events occurring in each period to complete a 2 × 2 table (for illustration, the example uses 3 and 10 cases).] the cisa investigators have published a standardized algorithm for evaluating and managing persons who have suspected or definite immediate hypersensitivity reactions such as urticaria, angioedema, and anaphylaxis following vaccines. 149 some of the studies undertaken by cisa include an assessment of extensive limb swelling after dtap, 150 a study of the usefulness of irritant skin test reactions for managing hypersensitivity to vaccines, 151 the clinical evaluation of patients with serious adverse events following yellow fever vaccine administration, 152 and evaluation of vaccine safety among children with inborn errors of metabolism. 153 new understanding of the human genome, pharmacogenomics, and immunology holds promise for future cisa studies and may make it possible to elucidate the biological mechanisms of vaccine adverse reactions, which in turn could lead to the development of safer vaccines and safer vaccination practices, including revaccination when indicated.
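the risk-interval bookkeeping and the resulting 2 × 2 comparison can be sketched as follows, using the worked numbers from the text (30-day intervals, three doses, 3 cases in 100 person-months vs 10 cases in 1,000 person-months); the function name is ours:

```python
def incidence_rate_ratio(cases_risk, pt_risk, cases_nonrisk, pt_nonrisk):
    """Event rate inside post-vaccination risk intervals divided by
    the rate in all remaining (nonrisk) person-time."""
    return (cases_risk / pt_risk) / (cases_nonrisk / pt_nonrisk)

# step 2 of the table: one child followed 365 days, three doses,
# a 30-day risk interval after each dose
risk_days = 3 * 30               # 90 person-days in risk intervals
nonrisk_days = 365 - risk_days   # 275 person-days outside them

# step 3: the 2 x 2 comparison
# (3 cases in 100 person-months vs 10 cases in 1,000 person-months)
irr = incidence_rate_ratio(3, 100, 10, 1000)
print(round(irr, 2))  # 3.0 -> the event is 3x more frequent after vaccination
```

in practice a confidence interval or exact test would accompany the point estimate, but the person-time accounting above is the heart of the design.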
154 in mass immunization campaigns during which many people are vaccinated in a short time, it is critical to have a vaccine safety monitoring system in place that can detect potential safety problems early so that corrective actions can be taken as soon as possible. mass immunization campaigns pose specific safety challenges precisely because large populations are vaccinated during a short time and often they are conducted outside the usual health care setting. 155 mass immunization campaigns are often conducted in developing countries, which poses a particular challenge of ensuring injection safety. 156 in any setting in which large numbers of immunizations are being administered, more adverse events will coincidentally occur following immunization. thus, it is important to have background rates available of expected adverse events to allow rapid evaluation of whether reported adverse events are occurring at a rate following immunization that is higher than would be expected by chance alone. the resources devoted to mass vaccination campaigns also provide opportunities to enhance existing immunization safety monitoring systems or to establish a system if none exists, and these may lead to long-term improvements in immunization safety monitoring beyond the specific mass immunization campaign. the response to the 2009 h1n1 influenza pandemic involved probably the largest and most intense immunization safety monitoring effort ever undertaken in the united states and internationally. the emergence of a novel influenza a (h1n1) virus prompted the development of 2009 influenza a (h1n1) monovalent vaccines. the fda licensed the first 2009-h1n1 vaccines in september 2009. with potentially hundreds of millions of people expected to be vaccinated, adverse events were anticipated to occur in some recently vaccinated people. to address the question of whether the vaccine could be causing the adverse events, background rates for several adverse events were developed.
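the role of background rates can be sketched as a simple expected-count calculation; the population size, incidence, and risk window below are hypothetical, not the figures used in the h1n1 response:

```python
def expected_coincidental_cases(n_vaccinated, annual_rate, window_days):
    """Cases of a condition expected by chance alone within
    window_days after vaccination, given its background incidence."""
    return n_vaccinated * annual_rate * (window_days / 365.0)

# hypothetical inputs: 10 million people vaccinated, a condition with
# a background incidence of 2 per 100,000 person-years, and a 42-day
# post-vaccination observation window
expected = expected_coincidental_cases(10_000_000, 2 / 100_000, 42)
print(round(expected))  # 23 cases expected even if the vaccine has no effect
```

observed counts well above such an expectation are what trigger the rapid evaluations described in the text; counts at or below it are consistent with coincidence.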
157 to rapidly detect any unforeseen safety problems, the federal government implemented enhanced postlicensure 2009-h1n1 vaccine safety monitoring. 158 first, vaers undertook special outreach efforts to encourage providers to report, and daily reviews and followup of submitted reports were conducted by medical personnel to rapidly evaluate the reports and obtain any needed additional clinical or other information. second, a new web-based active surveillance system was implemented to prospectively follow tens of thousands of vaccinees for medically attended adverse events. third, large population-based systems that link computerized vaccination data with health care encounter codes were used to conduct rapid ongoing analyses to evaluate possible associations of h1n1 vaccination with selected adverse events, including potential associations suggested by vaers or other sources. such systems included the existing vsd project; a new collaboration involving additional large health plans covering several million people that also performed rapid ongoing analyses similar to vsd; and the databases of the department of defense, medicare, and the veterans administration. fourth, active case finding for gbs was conducted in 10 areas of the united states with a combined population of about 50 million. the findings from the various safety monitoring activities were regularly reviewed by government and other scientists and an independent vaccine safety review panel convened by the department of health and human services. 
initial safety data were provided by vaers, which found that the adverse event profile after 2009-h1n1 vaccine in vaers (>10,000 reports) was consistent with that of seasonal influenza vaccines, although the reporting rate was higher after 2009-h1n1 than after seasonal influenza vaccines, which may be, at least in part, a reflection of stimulated reporting; death, gbs, and anaphylaxis reports after 2009-h1n1 vaccination were rare (each <2 per million doses administered). 110, 158 preliminary results from the large special study of gbs found 0.8 excess cases of gbs per 1 million vaccinations, which is similar to the increased risk found with some seasonal influenza vaccines. 159 similar efforts to intensely monitor the safety of influenza a (h1n1) 2009 vaccines occurred in other countries, primarily in north america, europe, and australia, but also included the development of new immunization safety monitoring systems in countries such as taiwan. 160 these countries collaborated in their activities and routinely shared information among themselves and with other countries that have limited vaccine safety monitoring capabilities. these extensive international safety monitoring activities and collaborations represented an unprecedented commitment to ensuring the safety of influenza a (h1n1) 2009 vaccines, as well as a model for how we might improve tracking of safety for all vaccines going forward. unfortunately, vaccine safety issues have increasingly taken on a life of their own outside of the scientific arena, arguably to society's overall detriment. liability concerns, for example, have severely limited development of maternal immunizations to protect newborn infants against diseases such as group b streptococcus. 161 more worrisome, however, are various chronic diseases (and their advocates) in search of a simple cause, for which immunizations, as a relatively universal exposure, make all too convenient a hypothesized link.
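the "0.8 excess cases of gbs per 1 million vaccinations" figure is an attributable (excess) risk: the observed rate in the post-vaccination window minus the expected background rate. a minimal sketch of that arithmetic, with a hypothetical surveillance tally chosen only to reproduce the same excess:

```python
# sketch of the excess-risk (attributable risk) arithmetic behind figures
# like "0.8 excess cases per 1 million vaccinations". the observed and
# expected counts below are hypothetical, chosen only to yield 0.8.

def excess_cases_per_million(observed_cases, expected_background_cases, doses):
    """excess (attributable) cases per million doses administered."""
    return (observed_cases - expected_background_cases) * 1_000_000 / doses

# hypothetical tally: 54 observed vs 50 expected cases among 5 million doses
print(excess_cases_per_million(54, 50, 5_000_000))  # 0.8
```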
case studies of some of these fears are discussed in the following sections. in 1974, kulenkampff and coworkers 162 reported a series of 22 cases of children with mental retardation and epilepsy following receipt of the whole-cell pertussis vaccine. during the next several years, fear of the pertussis vaccine generated by media coverage of this report caused a decrease in pertussis immunization rates in british children from 81% to 31% and resulted in more than 100,000 cases and 36 deaths due to pertussis. 163 media coverage of the kulenkampff report also caused decreased immunization rates and increased pertussis deaths in japan, sweden, and wales. 163 however, many subsequent well-controlled studies found that the incidence of mental retardation and epilepsy following whole-cell pertussis vaccine was similar in vaccinated children and in children who did not receive the vaccine, and that many of these children actually suffered from dravet's syndrome (a neuronal sodium channel transport defect caused by an scn1a mutation). 166-171,171a in the mid-1980s, the antivaccine group called dissatisfied parents together raised the notion that the whole-cell pertussis vaccine could cause sids. subsequent study of children who did or did not receive dtp vaccine showed that the incidence of sids was not greater in the vaccinated group. 143 in the early 1990s, when the hepatitis b vaccine was recommended for routine use in newborns, a program on abc's 20/20 raised the question of whether vaccines could cause sids. again, studies failed to find any association between hepatitis b vaccine and sids. 130,170,171 two recent reviews have confirmed that vaccines do not cause sids.
172, 173

vaccines cause mad-cow disease

by july 2000, at least 73 people in the united kingdom developed a progressive neurological disease termed variant creutzfeldt-jakob disease that likely resulted from eating meat prepared from cows with "mad-cow" disease, a disease caused by proteinaceous infectious particles (prions). some vaccines were made with serum or gelatin obtained from cows in england or from countries at risk for mad-cow disease. two products obtained from cows may be present in vaccines: trace quantities of fetal bovine serum used to provide growth factors for cell culture and gelatin used to stabilize vaccines. however, the bovine-derived products used in vaccines are not likely to contain prions for several reasons. 174 first, fetal bovine serum and gelatin are obtained from blood and connective tissue, respectively; neither is a source that has been found to contain prions. second, fetal bovine serum is highly diluted and eventually removed from cells during the growth of vaccine viruses. third, prions are not propagated in cell cultures used to make vaccines. fourth, transmission of prions occurs from eating meat contaminated with nervous tissue obtained from infected animals or, in experimental studies, from directly inoculating preparations of brains from infected animals into the brains of experimental animals. transmission of prions has not been documented after inoculation into the muscles or under the skin (routes used to vaccinate). taken together, the chance that currently licensed vaccines contain prions is essentially zero. the notion that the origin of aids could be traced to poliovirus vaccines that were administered in the belgian congo between 1957 and 1960 was the subject of a popular magazine article 175 and book. 176 the logic behind this assertion was as follows: (1) the polio vaccine used in the belgian congo was grown in chimpanzee kidney cells.
(2) the chimpanzee kidney cells used at that time contained simian immunodeficiency virus (siv). (3) siv is very closely related to human immunodeficiency virus (hiv). (4) people were inadvertently inoculated with siv that then mutated to hiv and caused the aids epidemic. this reasoning is problematic and based on several false assumptions. 177-180 first, the siv most closely related to hiv has been demonstrated in chimps in the cameroon, far from the chimps near stanleyville that were used to make the vaccine. second, siv and hiv are not very close genetically; mutation to hiv from siv would likely require decades, not years. third, polymerase chain reaction (pcr) analysis showed that the cell substrate used to make the vaccine was monkey, not chimp. fourth, siv and hiv are enveloped viruses that are easily disrupted by extremes in ph. if given by mouth (in a manner similar to the oral polio vaccine), both of these viruses would likely be destroyed in the acid environment of the stomach. last, and most important, original lots of the polio vaccine (including those used in africa for the polio vaccine trials) did not contain hiv or siv genomes as determined by the very sensitive reverse-transcription pcr assay. unfortunately, the notion that live attenuated polio vaccine could cause aids remains an obstacle to eliminating polio in some countries in africa. simian virus 40 (sv40) was present in monkey kidney cells used to make the inactivated polio vaccine, live attenuated polio vaccine, and inactivated adenovirus vaccines in the late 1950s and early 1960s. recently, investigators found sv40 dna in biopsy specimens obtained from patients with certain unusual cancers (ie, mesothelioma, osteosarcoma, and non-hodgkin lymphoma), leading some to hypothesize a link between vaccination and the subsequent development of cancer.
181 however, genetic remnants of sv40 were present in cancers of people who had or had not received contaminated polio vaccines; people with cancers who never received sv40-contaminated vaccines were found to have evidence for sv40 in their cancerous cells; and epidemiologic studies did not show an increased risk of cancers in people who received polio vaccine between 1955 and 1963 compared with people who did not receive these vaccines. 181 taken together, these findings do not support the hypothesis that the sv40 contained in polio vaccines administered before 1963 caused cancers. one hundred years ago, children received one vaccine: smallpox. today, young children receive 14 vaccines routinely. although some vaccines are given in combination, infants and young children could receive more than 20 shots and three oral doses by 2 years of age, including as many as five shots at one time. the increase in the number of vaccines, and the consequent decline in vaccine-preventable illnesses, has focused attention by parents and health care professionals on vaccine safety. specific concerns include whether vaccines weaken, overwhelm, 182,183 or in some way alter the normal balance of the immune system, paving the way for chronic diseases such as diabetes, asthma, multiple sclerosis, and allergies. although we have witnessed a dramatic increase in the number of vaccines routinely recommended for infants and young children, the number of immunogenic proteins and polysaccharides contained in vaccines has declined (table 76-3). the decrease in the number of immunogenic proteins and polysaccharides contained in vaccines is attributable to discontinuation of the smallpox vaccine and advances in the field of protein purification that allowed for a switch from whole-cell to acellular pertussis vaccine.
a practical way to determine the capacity of the immune system to respond to vaccines would be to consider the number of b and t cells required to generate adequate levels of binding antibodies per milliliter of blood. 184 calculations are based on the following assumptions:

- approximately 10 ng/ml is likely to be an effective concentration of antibody directed against a specific epitope.
- approximately 10³ b cells/ml are required to generate 10 ng of antibody/ml.
- given a doubling time of about 0.75 days for b cells, it would take about 7 days to generate 10³ b cells/ml from a single b-cell clone.
- because vaccine-specific humoral immune responses are first detected about 7 days after immunization, those responses could initially be generated from a single b-cell clone per milliliter.
- one vaccine contains about 10 immunogenic proteins or polysaccharides (table 76-3).
- each immunogenic protein or polysaccharide contains about 10 epitopes (ie, 10² epitopes per vaccine).
- approximately 10⁷ b cells are present per milliliter of blood.

given these assumptions, the number of vaccines to which a person could respond would be determined by dividing the number of circulating b cells (approximately 10⁷/ml) by the average number of epitopes per vaccine (10²). therefore, a person could theoretically respond to about 10⁵ vaccines at one time. the analysis used to determine the theoretical capacity of a person to respond to as many as 10⁵ vaccines at one time, although consistent with the biology and kinetics of vaccine-specific immune responses, is limited by lack of consideration of several factors. first, only vaccine-specific b-cell responses are considered. however, protection against disease by vaccines may also be mediated by vaccine-specific cytotoxic t lymphocytes (ctls). for example, virus-specific ctls are important in the regulation and control of varicella infections.
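the back-of-the-envelope estimate above can be written out directly; all constants come from the listed assumptions (one naïve b-cell clone per milliliter suffices to initiate the response to each epitope):

```python
import math

# constants taken from the chapter's stated assumptions
B_CELLS_PER_ML = 10**7        # circulating b cells per milliliter of blood
PROTEINS_PER_VACCINE = 10     # immunogenic proteins/polysaccharides per vaccine
EPITOPES_PER_PROTEIN = 10     # epitopes per protein
epitopes_per_vaccine = PROTEINS_PER_VACCINE * EPITOPES_PER_PROTEIN  # 10**2

# one naive b-cell clone per epitope is enough to initiate a response,
# so capacity = circulating b cells / epitopes per vaccine
vaccines_at_once = B_CELLS_PER_ML // epitopes_per_vaccine
print(vaccines_at_once)  # 100000, i.e. about 10**5 vaccines at one time

# sanity check on the kinetics: growing 10**3 cells from a single clone
# with a 0.75-day doubling time takes about a week
doublings = math.log2(10**3)          # ~10 doublings
days_to_detectable = doublings * 0.75
print(round(days_to_detectable, 1))   # 7.5, consistent with "about 7 days"
```

note that the 10³ b cells per epitope do not enter the division: they are generated by clonal expansion from the single initiating clone, which is what the doubling-time check confirms.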
185 second, in part because of differences in the capacity of various class i or class ii glycoproteins (encoded by the mhc) to present viral or bacterial peptides to the immune system, some people are not capable of responding to certain virus-specific proteins (eg, hepatitis b surface antigen). 186 third, some proteins are more likely to evoke an immune response than others (ie, immunodominance). fourth, although most circulating b cells in a neonate are naïve, the child very quickly develops memory b cells that are not available for response to new antigens and, therefore, should not be considered as part of the circulating naïve b-cell pool. fifth, the immune system is not static. a study of t-cell population dynamics in hiv-infected persons found that adults have the capacity to generate about 2 × 10⁹ new t lymphocytes each day. 187 although the quantity of new b and t cells generated each day in healthy people is unknown, studies of hiv-infected persons demonstrate the enormous capacity of the immune system to generate lymphocytes when needed. primarily because of this fifth reason, the assessment that people can respond to at least 10⁵ vaccines at one time might be low. within hours of birth, cells of the innate and adaptive immune systems are actively engaged in responding to challenges in the environment (eg, colonizing bacterial flora). 188, 189 similarly, newborn and young infants are quite capable of generating protective immune responses to single and multiple vaccines. for example, children born to mothers infected with hepatitis b virus are protected against infection after inoculation with hepatitis b vaccine (given at birth and 1 month of age). [190] [191] [192] similarly, newborns inoculated with bacille calmette-guérin (bcg) vaccine are protected against severe forms of tuberculosis presumably by activation of bacteria-specific t cells.
[193] [194] [195] in addition, about 90% to 95% of infants inoculated in the first 6 months of life with multiple vaccines, including diphtheria-tetanus-pertussis, pneumococcus, hib, hepatitis b, and polio, develop protective, vaccine-specific immune responses. 196 conjugation of bacterial polysaccharides (such as streptococcus pneumoniae and hib) to carrier molecules that elicit helper t cells circumvents the poor immunogenicity of unconjugated polysaccharide vaccines in infants and young children. 197, 198

vaccines weaken the immune system

infection with wild-type viruses can cause a suppression of specific immunologic functions. for example, infection with wild-type measles virus causes a reduction in the number of circulating b and t cells during the viremic phase of infection and a delay in the development of cell-mediated immunity. 199, 200 downregulation of cell-mediated immunity by wild-type measles virus probably results from downregulation of the production of interleukin-12 by measles-infected macrophages and dendritic cells. 199 taken together, the immunosuppressive effects of wild-type measles virus account, in part, for the increase in morbidity and mortality from measles infection. similarly, the immunosuppressive effects of infections with wild-type varicella virus 201 or wild-type influenza virus 202 cause an increase in the incidence of severe invasive bacterial infections. live viral vaccines replicate (albeit far less efficiently than wild-type viruses) in the host and, therefore, can weakly mimic events that occur after natural infection. for example, measles, mumps, or rubella vaccines can significantly depress reactivity to the tuberculin skin test, 203 can cause a decrease in protective immune responses to varicella vaccine, 210 and high-titered measles vaccine (edmonston-zagreb strain) can cause an excess of cases of invasive bacterial infections in developing countries.
211 all of these phenomena are explained by the likely immunosuppressive effects of measles vaccine viruses. however, current vaccines (including the highly attenuated moraten strain of measles vaccine) do not seem to cause clinically relevant immunosuppression in healthy children. studies have found that the incidence of invasive bacterial infections following immunization with diphtheria, pertussis, tetanus, bcg, measles, mumps, rubella, or live attenuated poliovirus vaccines was not greater than that found in unimmunized children. [212] [213] [214] [215] [216]

vaccines cause autoimmunity

mechanisms are present at birth to prevent the development of immune responses directed against self-antigens (autoimmunity). t- and b-cell receptors of the fetus and newborn develop with a random repertoire of specificities. in the thymus, t cells that bind strongly to self-peptide-mhc complexes die, while those that bind with a lesser affinity survive to populate the body. this central selection process eliminates strongly self-reactive t cells, while selecting for t cells that recognize antigens in the context of self-mhc. in the fetal liver, and later in the bone marrow, b-cell receptors (ie, immunoglobulins) that bind self-antigens strongly are also eliminated. therefore, the thymus and bone marrow, by expressing antigens from many tissues of the body, enable the removal of the majority of potentially dangerous autoreactive t and b cells before they mature, a process termed central tolerance. 217 however, it is not simply the presence of autoreactive t and b cells that results in autoimmune disease. autoreactive t and b cells are present in all people because it is not possible for every antigen from every tissue of the body to participate in the elimination of all potentially autoreactive cells. a process termed peripheral tolerance further limits the activation of autoreactive cells.
218,219 mechanisms of peripheral tolerance include the following: (1) antigen sequestration (antigens of the central nervous system, eyes, and testes are not regularly exposed to the immune system unless injury or infection occurs); (2) anergy (lymphocytes partially triggered by antigen but without costimulatory signals are unable to respond to subsequent antigen exposure); (3) activation-induced cell death (a self-limiting mechanism involved in terminating immune responses after antigen is cleared); and (4) inhibition of immune responses by specific regulatory cells. [220] [221] [222] [223] therefore, the immune system anticipates that self-reactive t cells will be present and has mechanisms to control them. any theory of vaccine causation of autoimmune diseases must take into account how these controls are circumvented. as discussed subsequently, epidemiologic studies have not supported the hypothesis that vaccines cause autoimmune diseases. this is consistent with the fact that no mechanisms have been advanced to explain how vaccines could account for all of the prerequisites that would be required for the development of autoimmune disease. at least four key conditions must be met for development of autoimmune disease. first, self-antigen-specific t cells or self-antigen-specific b cells must be present. second, self-antigens must be presented in sufficient amounts to trigger autoreactive cells. third, costimulatory signals, cytokines, and other activation signals produced by antigen-presenting cells (such as dendritic cells) must be present during activation of self-reactive t cells. fourth, peripheral tolerance mechanisms must fail to control destructive autoimmune responses. unless all of these conditions are met, the activation of self-reactive lymphocytes and progression to autoimmune disease are not likely.
224

evidence that vaccines do not cause autoimmunity

rigorous epidemiologic studies of infant vaccines and type 1 diabetes found that measles vaccine was not associated with an increased risk for diabetes; other investigations found no association between bcg, smallpox, tetanus, pertussis, rubella, or mumps vaccine and diabetes. 225 a study in canada found no increase in risk for diabetes as a result of receipt of bcg vaccine. 226 in a large 10-year follow-up study among finnish children enrolled in an hib vaccination trial, no differences in risk for diabetes were found among children vaccinated at 3 months of age (followed later with a booster vaccine), children vaccinated at 2 years of age only, and children born before the vaccine trial. the weight of currently available epidemiologic evidence does not support a causal association between currently recommended vaccines and type 1 diabetes in humans. [227] [228] [229] the hypothesis that vaccines might cause multiple sclerosis was fueled by anecdotal reports of multiple sclerosis following hepatitis b immunization and two case-control studies showing a small increase in the incidence of multiple sclerosis in vaccinated persons that was not statistically significant. 230-232 however, the capacity of vaccines to cause or exacerbate multiple sclerosis has been evaluated in several excellent epidemiologic studies. 233-237 two large case-control studies showed no association between hepatitis b vaccine and multiple sclerosis 234 and found no evidence that hepatitis b, tetanus, or influenza vaccines exacerbated symptoms of multiple sclerosis. 235 other well-controlled studies also found that influenza vaccine did not exacerbate symptoms of multiple sclerosis. 236-238 indeed, in a retrospective study of 180 patients with relapsing multiple sclerosis, infection with influenza virus was more likely than immunization with influenza vaccine to cause an exacerbation of symptoms.
238 a recent review also showed that the novel h1n1 2009 vaccine had an attributable risk for guillain-barré syndrome of 1-2 cases per million doses administered, not higher than that found following the 2009-2010 seasonal influenza vaccine. 239 allergic symptoms are caused by soluble factors (eg, ige) that mediate immediate-type hypersensitivity; production of ige by b cells is dependent on release of cytokines such as interleukin-4 by th2 cells. two theories have been advanced to explain how vaccines could enhance ige-mediated, th2-dependent allergic responses. first, vaccines could shift immune responses to potential allergens from th1-like to th2-like. 240 second, by preventing common prevalent infections (the "hygiene hypothesis"), vaccines could prolong the length or increase the frequency of th2-type responses. 241, 242 although all factors that cause changes in the balance of th1 and th2 responses are not fully known, 243 it is clear that dendritic cells have a critical role. for example, adjuvants (eg, aluminum hydroxide or aluminum phosphate ["alum"] contained in some vaccines) promote dendritic cells to stimulate th2-type responses. 244,245 adjuvants could cause allergies or asthma by stimulating bystander, allergen-specific th2 cells. however, vaccine surveillance data show no evidence for environmental allergen priming by vaccination. 246 furthermore, local inoculation of adjuvant does not cause a global shift of immune responses to th1 or th2 type. 247, 248 the other hypothesis advanced to explain how vaccines could promote allergies is that by preventing several childhood infections (the hygiene hypothesis), stimuli that evolution has relied on to cause a shift from the neonatal th2-type immune response to the balanced th1-th2 response patterns of adults have been eliminated.
241,242 however, the diseases that are prevented by vaccines constitute only a small fraction of the total number of illnesses to which a child is exposed, and it is unlikely that the immune system would rely on only a few infections for the development of a normal balance between th1 and th2 responses. for example, a study of 25,000 illnesses performed in cleveland, ohio, in the 1960s found that children experienced six to eight infections per year in the first 6 years of life; most of these infections were caused by viruses such as coronaviruses, rhinoviruses, paramyxoviruses, and myxoviruses, diseases for which children are not routinely immunized. 249 also at variance with the hygiene hypothesis is the fact that children in developing countries have lower rates of allergies and asthma than children in developed countries despite the fact that they are commonly infected with helminths and worms, organisms that induce strong th2-type responses. 250 finally, the incidence of diseases that are mediated by th1-type responses, such as multiple sclerosis and type 1 diabetes, has increased in the same populations as those that experienced an increase in allergies and asthma. although some relatively small early observational studies supported the association between whole-cell pertussis vaccine and development of asthma, 251 more recent studies have suggested otherwise. a large clinical trial performed in sweden found no increased risk, 252 and a very large longitudinal study in the united kingdom found no association between pertussis vaccination and early- or late-onset wheezing or recurrent or intermittent wheezing. 253 two studies from the vsd project have also lent data to this controversy.
in one study of 1,366 infants with wheezing during infancy, vaccination with dtp and other vaccines was not related to the risk of wheezing in full-term infants, 254 and, in another study of more than 165,000 children, childhood vaccinations were not associated with an increased risk for developing asthma. 255 finally, a study from finland also suggested that children with a history of natural measles were at increased risk for atopic illness. such findings would run contrary to the hypothesis that the increase in atopic illnesses seen in several countries is due to the reduction in wild measles resulting from immunizations. 256 another separate concern is whether inactivated influenza vaccination may induce asthma exacerbations in children with preexisting asthma. results of studies examining the potential associations between administration of inactivated influenza vaccine and various surrogate measures of asthma exacerbation, including decreased peak expiratory flow rate, increased use of bronchodilating drugs, and increase in asthma symptoms, have yielded mixed results. most studies, however, have not supported such an association. 257 in fact, after controlling for asthma severity, acute asthma exacerbations were less common after inactivated influenza vaccination than before, 258 and inactivated influenza vaccination seems to be associated with a decreased risk for asthma exacerbations throughout influenza seasons. 259 several more recent studies have also shown a lack of correlation between receipt of vaccines and the development of asthma. 260-263 autism is a chronic developmental disorder characterized by problems in social interaction, communication, and responsiveness and by repetitive interests and activities. although the causes of autism are largely unknown, family and twin studies suggest that genetics has a fundamental role. 
264 in addition, overexpression of neuropeptides and neurotrophins has been found in the immediate perinatal period among children later diagnosed with autism, suggesting that prenatal or perinatal influences or both have a more important role than postnatal insults. 265 however, because autistic symptoms generally first become apparent in the second year of life, some scientists and parents have focused on the role of mmr vaccine because it is first administered around this time. concern about the role of mmr vaccine was heightened in 1998 when a study based on 12 children proposed an association between the vaccine and the development of ileonodular hyperplasia, nonspecific colitis, and regressive developmental disorders (later termed by some as "autistic enterocolitis"). 266 among the proposed mechanisms was that mmr vaccine caused bowel problems, leading to the malabsorption of essential vitamins and other nutrients and eventually to autism or other developmental disorders. concern about this issue led to a decline in measles vaccine coverage in the united kingdom and elsewhere. 267 significant concerns about the validity of the study included the lack of an adequate control or comparison group, inconsistent timing to support causality (several of the children had autistic symptoms preceding bowel symptoms), and the lack of an accepted definition of the syndrome. 268 subsequently, population-based studies of autistic children in the united kingdom found no association between receipt of mmr vaccine and autism onset or developmental regression. 269, 270 a study in the united states in the vsd project investigated whether measles-containing vaccine was associated with inflammatory bowel disease and found no relationship between receiving mmr vaccine and inflammatory bowel disease or between the timing of the vaccine and risk for disease.
271 soon after the lancet published the article that ignited the controversy, 266 two ecologic analyses found no evidence that mmr vaccination was the cause of apparent increased trends in autism over time, 272, 273 while two other studies found no evidence of a new variant form of autism associated with bowel disorders secondary to vaccination. 274, 275 several more recent studies have also refuted the notion that mmr vaccine caused autism. [276] [277] [278] [279] [280] [281] in february 2010, the lancet retracted the original article claiming an association. because of the level of concern surrounding this issue, the cdc and the national institutes of health requested an independent review by the iom. 282 the immunization safety review committee appointed by the iom to review this issue was unable to find evidence supporting a causal relationship at the population level between autistic spectrum disorders and mmr vaccination, nor did the committee find any good evidence of biological mechanisms that would support or explain such a link. the fda modernization act of 1997 called for the fda to review and assess the risk of all mercury-containing food and drugs. this led to an examination of mercury content in vaccines. public health officials found that infants up to 6 months old could receive as much as 187.5 µg of ethylmercury (thimerosal) from vaccines, a level that exceeded recommended safety guidelines for methylmercury from the environmental protection agency, but not levels recommended by the fda or the agency for toxic substance disease registry. 283 consequently, the routine neonatal dose of hepatitis b vaccine in infants born to hepatitis b surface antigen-negative mothers was suspended in the united states until preservative-free vaccines became available, and transitioning to a vaccine schedule free of thimerosal began as a precautionary measure.
284 currently, some multidose influenza vaccines contain preservative quantities (ie, 25 µg per dose) of thimerosal although thimerosal-free vaccines are available. mercury is a naturally occurring element found in the earth's crust, air, soil, and water. since the earth's formation, volcanic eruptions, weathering of rocks, and burning of coal have caused mercury to be released into the environment. once released, certain types of bacteria in the environment can change inorganic mercury to organic (methylmercury). methylmercury makes its way through the food chain in fish, animals, and humans. at high levels, it can be neurotoxic. thimerosal contains ethylmercury, not methylmercury. studies comparing ethylmercury and methylmercury suggest that they are processed differently; ethylmercury is broken down and excreted much more rapidly than methylmercury. therefore, ethylmercury is much less likely than methylmercury to accumulate in the body and cause harm. 284a several pieces of biological and epidemiologic evidence support the notion that thimerosal does not cause autism. first, in 1971 iraq imported grain that had been fumigated with methylmercury. 285 farmers ate bread made from this grain. the result was one of the worst single-source mercury poisonings in history. methylmercury in the grain caused the hospitalization of 6,500 iraqis and killed 450. pregnant women also ate the bread and delivered infants with epilepsy and mental retardation. however, there was no evidence that these infants had an increased incidence of autism. second, several large studies have now compared the risk of autism in children who received vaccines containing thimerosal with children who received vaccines without thimerosal or vaccines with lesser quantities of thimerosal; the incidence of autism was similar in all groups.
286-291 the iom has reviewed these studies and concluded that evidence favored rejection of a causal association between vaccines and autism and that autism research should shift away from vaccines. 292 denmark, a country that abandoned thimerosal as a preservative in 1991, actually saw an increase in autism beginning several years later. third, studies of the head size, speech patterns, vision, coordination, and sensation of children poisoned by mercury show that the symptoms of mercury poisoning are distinguishable from the symptoms of autism. 293 fourth, methylmercury is found in low levels in water, infant formula, and breast milk. 294 although it is clear that large quantities of mercury can damage the nervous system, there is no evidence that the small quantities contained in water, infant formula, and breast milk do. an infant who is exclusively breastfed for 6 months will ingest more than twice the quantity of mercury that was ever contained in vaccines and 15 times the quantity of mercury contained in the influenza vaccine. one known and unfortunate sequela from the uncertainty surrounding the safety of thimerosal was confusion surrounding administration of the birth dose of hepatitis b vaccine. following the suspension of the routine use of hepatitis b vaccine for low-risk newborns in 1999, there was a marked increase in the number of hospitals that no longer routinely vaccinated all infants at high risk of hepatitis b. 295 as a result, there have been cases of neonatal hepatitis b that could have been prevented but were not because of many hospitals suspending their routine neonatal hepatitis b vaccination program. 296 the hypothesis for why vaccines might cause autism has continued to shift. in 1998, the concern was that the mmr vaccine caused autism. the following year, the concern shifted to include the fear that thimerosal in vaccines caused autism.
as data continued to be generated showing that both of these concerns were ill-founded, the hypothesis shifted again, this time to include the fear that too many vaccines given too soon caused autism. to address this concern, michael smith and charles woods mined data from a previous study that had been performed by cdc researchers to determine whether thimerosal in vaccines was associated with an increased risk of autism or neurodevelopmental delays. 297 smith and woods compared children who had received vaccines according to the cdc/american academy of pediatrics schedule with children for whom a decision was made to delay, withhold, separate, or space out vaccines, noting no difference between the two groups in neurodevelopmental outcomes. 298

aluminum salts have safely been used to adjuvant vaccines since the 1930s. however, by the mid-2000s, parents became concerned that aluminum in vaccines might be harmful. indeed, high levels of aluminum can cause local inflammatory reactions, osteomalacia, anemia, or encephalopathy, typically in preterm infants or infants with absent or severely compromised renal function who are also receiving high doses of aluminum from other sources (eg, antacids). 299 studies have shown that children who receive aluminum-containing vaccines have serum levels of aluminum that are well below the toxic range. 300-302

formaldehyde in vaccines is harmful
formaldehyde has been used in vaccines to detoxify bacterial toxins (ie, diphtheria toxin, tetanus toxin, pertussis toxins) and to inactivate viruses (ie, poliovirus). because formaldehyde at high concentrations can cause mutational changes in cellular dna in vitro, 303 some parents have become concerned that formaldehyde in vaccines might be dangerous. however, because formaldehyde is a product of single-carbon metabolism, everyone has formaldehyde detectable in serum.
304 indeed, the level of formaldehyde in the circulation is about 10-fold more than would be contained in any vaccine. 305 also, people exposed to high levels of formaldehyde in the workplace (eg, morticians) are not at greater risk of cancer than people who are not exposed to formaldehyde. 306 finally, the quantity of formaldehyde present in vaccines is at least 600-fold lower than that necessary to induce toxicity in experimental animals. 307

two cell lines, mrc-5 and wi-38, both derived from elective abortions performed in europe in the early 1960s, have been used as cell substrates in vaccine manufacture. four vaccines continue to require the use of these cell lines: varicella, rubella, hepatitis a, and one of the rabies vaccines. human fetal cells were valuable in vaccine research because they support the growth of many human viruses and are sterile; they were first used at around the time that researchers found that primary monkey kidney cells were contaminated with sv40 virus. some religious groups have become concerned about the use of cells originally obtained from elective abortions. however, the pontifical academy of life of the catholic church has deemed vaccines made using these cells worthy of continued use, despite their origins. 308

disease prevention, especially if it requires continuous near-universal compliance, is a formidable task. in the preimmunization era, vaccine-preventable diseases such as measles and pertussis were so prevalent that the risks and benefits of disease vs vaccination were readily evident. as immunization programs successfully reduced the incidence of vaccine-preventable diseases, however, an increasing proportion of health care providers and parents have little or no personal experience with vaccine-preventable diseases. for their risk-benefit analysis, they are forced to rely on historical and other more distant descriptions of vaccine-preventable diseases in textbooks or educational brochures.
in contrast, some degree of personal discomfort, pain, and worry is generally associated with each immunization. in addition, parents searching for information about vaccines on the world wide web are likely to encounter web sites that encourage vaccine refusal or emphasize the dangers of vaccines. 309, 310 similarly, the media may sensationalize vaccine safety issues or, in an effort to present "both sides" of an argument, fail to provide perspective. 311, 312 for reasons discussed earlier, there may be uncertainty about whether vaccines are associated with rare or delayed adverse reactions, if only because the scientific method does not allow for acceptance of the null hypothesis. therefore, one cannot prove that a vaccine never causes a particular adverse event, only that an adverse event is unlikely to occur, to within a certain statistical probability. the combination of these factors may have an impact on parental beliefs about immunizations. a national survey found that although the majority of parents support immunizations, 20% to 25% have misconceptions that could erode their confidence in vaccines. 182

within this context, the art of addressing vaccine safety concerns through effective risk communication has emerged as an increasingly important skill for managers of mature immunization programs and health care providers who administer vaccines. the science of risk perceptions and risk communications, developed initially for technology and environmental arenas, 313 has only recently been formally applied to immunizations. 314 for scientists and other experts, risk tends to be synonymous with the objective probability of morbidity and mortality resulting from exposure to a particular hazard. 315 in contrast, research has shown that laypersons may have subjective, multidimensional, and value-laden conceptualizations of risk.
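the statistical point made earlier, that one can bound but never rule out a rare adverse event after many uneventful vaccinations, is often illustrated with the "rule of three." the following is a minimal sketch, assuming an exact binomial model; the function names are ours, not from any cited study:

```python
def zero_event_upper_bound(n, confidence=0.95):
    """exact upper confidence bound on the true event rate when zero
    events are observed among n recipients: solve (1 - p)**n = 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

def rule_of_three(n):
    """well-known approximation to the same bound: 3 / n."""
    return 3.0 / n
```

with n = 10,000 uneventful vaccinations, the exact bound is about 3 per 10,000; no finite n drives the bound to zero, which is why "never causes" cannot be proven, only bounded.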
316 among the key principles and lessons learned about public perceptions of risk are the following:
- individual people differ in their perceptions of risk depending on their personality, education, life experience, and personal values; 317, 318 educational materials tiered for different needs are therefore likely to be more effective than a single tier.
- perceptions of risk may differ dramatically among various stakeholders, such as members of government agencies, industry, or activist groups. 319 the level of trust between stakeholders has an impact on all other aspects of risk communication. 320 trust is generally reinforced by open communication about what is known and unknown about risks and by providing candid accounts of the evidence and how it was used in the decision-making process. 321
- certain hazard characteristics, including involuntariness, uncertainty, lack of control, high level of dread, and low level of equity, lead to higher perceived risk; 316 only risks with similar characteristics should be compared in risk communication efforts. 322
- for quantitatively equivalent risk that is due to action (eg, vaccination reaction) vs inaction (eg, vaccine-preventable disease caused by nonvaccination), many people prefer the consequences of inaction to action. 323
- when there is uncertainty about risks, patients frequently rely on the advice of their physician or other health care professionals; continuing education of health care professionals on vaccine risk issues is key. 182
- finally, different ways of presenting, or framing, the same risk information (eg, using survival rates vs mortality rates) can lead to different risk perceptions, decisions, and behaviors. 324, 325

risk communication can be used for the purposes of advocacy, public education, or decision-making partnership.
313 people care not only about the magnitude of risks, but also about how risks are managed and whether they participate in the risk-management process, especially in a democratic society. 326 in medical decision making, this has resulted in a transition from more paternalistic models to increasing degrees of informed consent. 327 some have argued that a similar transition to informed consent also should occur with immunizations. 328 however, immunization is unlike most other medical procedures (eg, surgery) in that the consequences of the decision affect not only the individual person, but also others in the society. because of this important distinction, many countries have enacted public health (eg, immunization) laws that severely limit an individual person's right to infect others. without such mandates, persons may attempt to avoid the risks of vaccination while being protected by the herd immunity resulting from others being vaccinated. 329 unfortunately, the protection provided by herd immunity may disappear if too many people avoid vaccination, resulting in outbreaks of vaccine-preventable diseases. 330, 331 debates in the united states have focused on whether philosophical (in addition to medical and religious) exemptions to mandatory immunizations should be allowed more universally and, if so, what standards for claims of exemption are needed. 328, 332, 333 thus, vaccine risk communications should not only describe the risks and benefits of vaccines for individual people, but also should include discussion of the impact of individual immunization decisions on the larger community.

empathy, patience, scientific curiosity, and substantial resources are needed to address concerns about vaccine safety. although each evaluation of a vaccine safety concern is in some ways unique, some general principles may apply to most cases. as with all investigations, the first step is objective and comprehensive data gathering.
51 it is also important to gather and weigh evidence for causes other than vaccination. for individual cases or clusters of cases, a field investigation to gather data firsthand may be necessary. 134, 334 advice and review from a panel of independent experts also may be needed. 109, 335, 336 causality assessment at the individual level is difficult at best; further evaluation via epidemiologic or laboratory studies may be required. 337 even if the investigation is inconclusive, such studies can often help to maintain public trust in immunization programs. 338 scientific investigations are only the beginning of addressing vaccine safety concerns. in many countries, people who believe they or their children have been injured by vaccines have organized and produced information highlighting the risks of and alternatives to immunizations. from the consumer activist perspective, even if vaccine risks are rare, this low risk does not reassure the person who experiences the reaction. 339 such groups have been increasingly successful in airing their views in electronic and print media, frequently with poignant individual stories. 309, 310 because the media frequently raise controversies without resolution and choose "balance" over perspective, one challenge is to establish credibility and trust with the audience. 340, 341 factors that aid in enhancing credibility include demonstrating scientific expertise, establishing relationships with members of the media, expressing empathy, and distilling scientific facts and figures down to simple lay concepts. however, statistics and facts compete poorly with dramatic pictures and stories of disabled children. emotional reactions to messages are often dominant, influencing subsequent cognitive processing. 342 therefore, equally compelling firsthand accounts of people with vaccine-preventable diseases may be needed to communicate the risks associated with not vaccinating. 
clarifying the distinction between perceived and real risk for the concerned public is critical. if further research is needed, the degree of uncertainty (eg, whether such rare vaccine reactions exist at all) should be acknowledged, but what is certain also should be noted (eg, millions of people have received vaccine x and have not developed syndrome y; even if the vaccine causes y, it is likely to be of magnitude z, compared with the magnitude of known risks associated with vaccine-preventable diseases). in the united states, written information about the risks and benefits of immunizations developed by the cdc has been required to be provided to all people vaccinated in the public sector since 1978. 343 the national childhood vaccine injury act requires every health care provider, public or private, who administers a vaccine that is covered by the act to provide a copy of the most current cdc vaccine information statement (vis) to the adult vaccinee or, in the case of a minor, to the parent or legal representative each time a dose of vaccine is administered. 344 health care providers must note in each patient's permanent medical record the date printed on the vis and the date the vis was given to the vaccine recipient or his or her legal representative. viss are the cornerstone of provider-patient vaccine risk-benefit communication. each vis contains information on the disease(s) that the vaccine prevents, who should receive the vaccine and when, contraindications, vaccine risks, what to do if a side effect occurs, and where to go for more information. current viss can be obtained from the cdc's national center for immunization and respiratory diseases at www.cdc.gov/vaccines and are available in more than 20 languages from the immunization action coalition at www.immunize.org. 
an increasing number of resources that address vaccine safety misconceptions and allegations also have become available, including web sites, brochures, resource kits, and videos (table 76-4). some studies have been conducted to assess the use and effectiveness of such materials; 345-349 however, more research in this area is needed. immunization programs and health care providers should anticipate that some members of the public may have deep concerns about the need for and safety of vaccines. a few may refuse certain vaccines or even reject all vaccinations. an understanding of vaccine risk perceptions and effective vaccine risk communication are essential in responding to misinformation and concerns. toward this end, cdc's vaccine safety website (http://www.cdc.gov/vaccinesafety/index.html) provides basic information on the safety of routinely administered vaccines, as well as responses to frequently asked questions. the website also provides more detailed information on how vaccines are tested and monitored for safety; cdc's specific projects for monitoring, evaluation, and research on vaccine safety (vaers, vsd, and cisa); detailed sections addressing common concerns (eg, autism, thimerosal); and a resource library with articles, fact sheets, and other related materials on immunization safety.

parental vaccine acceptance in a new era: the role of health care providers and public health professionals
one consequence of the success of vaccines is that an increasing number of parents and clinicians have little or no personal experience with or knowledge of many of the diseases that vaccines prevent. thus, vaccine-preventable diseases often are not perceived as a real threat by parents. 350, 351 moreover, increasingly parents want to be fully informed about their children's medical care, 352 thus merely recommending vaccination may not be sufficient.
also in this new era, stories in the media highlighting adverse events (real or perceived) may cause some parents to question the safety of vaccines. apart from the media attention on vaccine safety issues, a confluence of factors has an influence on parents' vaccine attitudes in the present environment of a low incidence of vaccinepreventable diseases. these factors would be relatively unimportant in an environment where diseases such as polio and measles were common and people lived in fear of their children contracting disease; however, they have become predominant in the current climate for some parents. some of these factors are: (1) lack of appropriately tailored information about the benefits of vaccines and contrary information from alternative health practitioners, (2) mistrust of the source of the information, (3) perceived serious side effects, (4) not perceiving the risks of vaccines accurately, and (5) insufficient biomedical literacy. addressing these issues is a challenge for medical and public health professionals because the typical arrangement for providing medical care does not allow full reimbursement of health care providers for educating patients and parents. 353 nevertheless, it is important for us to try to meet the challenge because an understanding of the aforementioned factors and a proactive approach to vaccine education may prevent future concerns from escalating into widespread refusal of vaccines, with a consequent increased incidence of vaccine-preventable diseases. most people today want to be thoroughly informed about their health care. 352 the desire for more information also applies to parents with regard to medical issues for their children. parents want to be part of the decision-making process when it comes to immunizations for their children. 
354 providing the appropriate information at the appropriate time is especially important now, with the increased questioning of vaccines and with 20 states allowing philosophical exemptions in 2011. there is an association between information and vaccine acceptance. a recent study found that while 67% of parents agreed that they had access to enough information to make a good decision about immunizing their children, 33% of parents disagreed or were neutral. 355 parents who disagreed that they had enough vaccine information had negative attitudes about immunizations, health care providers, immunization requirements and exemptions, and trust in people responsible for immunization policy. moreover, a larger percentage of parents who reported they did not have access to enough information about vaccines also had several specific vaccine concerns compared with parents who were neutral or agreed they had access to enough information. 355 it may be that when there is a void of accurate, trusted information, doubts about vaccines arise and misinformation is more readily accepted. other studies have demonstrated the effect of providing information on the well-being of patients. for example, information is one factor that has been shown to positively influence a sense of control in patients with rheumatoid arthritis, 356 and perceived lack of information among mothers was one reason contributing to nonimmunization of children in india. 357 by using the principle of audience segmentation (partitioning a population into segments with shared characteristics), a survey study identified five parent groups that varied on health and immunization attitudes and beliefs. 358 the two audience segments identified as most concerned about immunizations ("worrieds" and "fence-sitters") were chosen as the focus of a follow-up study to obtain the input of mothers in these segments in the development of evidence-based, tailored educational materials.
the purpose of these materials would be to assist health care providers in busy office settings to address questions from these two groups of parents. presentation of these tailored brochures by children's health care providers to parents in an empathetic and respectful manner could aid in improving the health care provider-parent relationship, increasing vaccine acceptance, and ultimately preventing vaccine-preventable diseases.

the viss are typically given to parents the day the child is scheduled for immunization. 345, 348, 359 this often places the parent in a conflict situation of attending to the vis or attending to a frightened or upset child. not surprisingly, studies have shown that parents would rather receive the information in advance of the first vaccination visit. 345, 359-361 suggested earlier times for vaccine education include prenatal clinic visits and just after delivery in a hospital. 362 a national survey indicated that 80% of providers said that a preimmunization booklet for parents would be useful for communicating risks and benefits to parents. 348

the use of complementary and alternative medicine (cam) has been increasing during the past 50 years in the united states. 363 part of this increase is due to mcos providing coverage for some cam therapies. 364 chiropractic care is among the top 10 most commonly used cam therapies. 365 it is of note that some chiropractic colleges teach a negative view of immunizations. 366 in one study, one third of chiropractors agreed that there is no scientific proof that immunizations prevent disease. 366 the basis for the negative views of vaccine effectiveness may lie in the chiropractic doctrine that disease is the result of spinal nerve dysfunction caused by subluxation coupled with the rejection of the germ theory of disease. 366, 367 it may be that some chiropractors who adhere to this belief influence parents against immunizing their children.
in one study, parents who requested immunization exemptions for their children were more likely to report cam use in their families than parents who did not request these exemptions. 368 this emphasizes the importance of a trusting physician-patient relationship and of providing parents with tailored information in advance of their child's immunizations; in this manner their questions are answered and they are prepared with the facts when they encounter contrary information from other sources. reaching out to chiropractic organizations to foster a better understanding of the benefits of immunizations may be advantageous to medical and public health professionals.

parental concern about immunizations has been associated with a lack of trust. for example, one of the factors influencing parents who choose not to vaccinate their children for pertussis is doubt about the reliability of the vaccine information. 369 in another study, compared with parents of vaccinated children, parents of children with an immunization exemption were more likely to express a low level of trust in the government, in addition to other factors such as low perceived susceptibility to and severity of vaccine-preventable diseases and low perceived vaccine efficacy and safety. these parents were less likely to believe that medical and public health professionals are good or excellent sources of immunization information. 368 the majority of parents (84%), however, report receiving immunization information from a physician. 182 thus, having a physician who engenders trust providing immunization information and who is available to listen and answer questions is the optimal situation from the public health perspective. if trust in a child's physician is low, parents may be drawn to other, less credible sources of information. when a child experiences an adverse event following receipt of a vaccine, it often raises the question "was this vaccine necessary?"
to the parent, it may seem that the risks of the vaccine are greater than the risks of not getting the vaccine. parents who sought medical attention for any of their children owing to an apparent adverse event following immunization (6.9%) not only expressed more concern about immunizations, but also were more likely to have a child who lacked one or more doses of three high-profile vaccines compared with parents who reported that none of their children had experienced an adverse event following immunization. 370 two scenarios were seen as plausible. it may be that parents who were already concerned about vaccines before their child began the vaccination schedule were more reactive and thus sought medical attention for minor side effects (eg, fever) or nonrelated problems. it is also possible that an apparent adverse event following immunization that resulted in parents seeking medical attention for their child caused the parents' perception of vaccines to become more negative. both possibilities may result in parents declining future vaccines for their children. negative attitudes could be addressed by improving communication between clinician and parent. benefit-cost analysis research has shown that physician advice can produce benefits for health issues (eg, problem drinking). 371 moreover, positive communication behaviors such as humor and soliciting questions are associated with a lowered risk of a malpractice suit for the physician. 372 it may be that in this era of low vaccine-preventable disease incidence and increased public questioning of immunizations, improved provider communication can produce a positive net benefit for parents (reduced anxiety), a cost benefit to the health care system (reduced calls and medical visits for nonserious adverse events following immunization), and an improved physician-patient relationship (more trust and fewer malpractice suits).

individual people can vary in their perception of the magnitude of vaccine risks.
studies have shown that various factors such as sex, race, political worldviews, emotional affect, and trust are associated with risk perception. 373 in addition, risk perception factors such as involuntariness, uncertainty, lack of control, and high level of dread can lead to a heightened perception of risks. 374 all of these can be seen as associated with childhood immunizations. moreover, these factors have been referred to as "outrage" factors in the risk communication literature. outrage can lead to a person responding emotionally and can increase further the level of perceived risk. 374 it can be difficult to communicate the risk of many vaccine-preventable diseases given their low prevalence in the united states, and difficult to communicate the risks of serious vaccine adverse events because they affect such a small proportion of vaccine recipients. 375, 376 several factors have been studied that might help people to better understand risk. the first is comparisons: comparisons that are similar (apples to apples) are reported to be better accepted, 377 and, thus, comparisons for vaccines should focus on things that generally prevent harm in children but could pose a small risk (such as bicycle helmets, car seats). the second is visual presentations that help people understand numerical risk, including risk ladders, 378 stick figures, line graphs, dots, pie charts, and histograms. 379 unfortunately, there has been little research done in either of these areas. trust in the source of the risk information is an important factor in its ability to influence people 380 and, as discussed, is developed through listening and ongoing communications. 381

in 1999, american adults had an average score of 51.2 on an index of biomedical literacy designed to measure understanding of biomedical terms and constructs.
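visual formats like the stick-figure displays mentioned above can be generated programmatically. the following is a minimal text-based sketch of our own (the function names, grid size, and denominator are arbitrary illustrative choices, not a validated risk-communication instrument):

```python
def per_denominator(risk, denom=100_000):
    """restate a probability as events per `denom` people (eg, 1 per 100,000)."""
    return risk * denom

def icon_array(risk, rows=5, cols=20):
    """plain-text icon array: one '#' per affected person in a grid of
    rows*cols people, with unaffected people shown as '.'."""
    total = rows * cols
    affected = round(risk * total)
    cells = ["#"] * affected + ["."] * (total - affected)
    return "\n".join("".join(cells[r * cols:(r + 1) * cols]) for r in range(rows))
```

for example, `icon_array(0.05)` marks 5 of 100 figures, making a "5 in 100" risk visible at a glance rather than stated as a decimal.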
people with scores less than 50 would likely find it difficult to understand medical stories about why antibiotics are not effective in combating the common cold and the relationship between certain genes and health. 382 the main factors associated with biomedical literacy are the following: (1) level of formal education, (2) number of college-level science courses, and (3) age. some characteristics of scientific literacy include the following abilities: (1) distinguishing experts from the uninformed; (2) recognizing gaps, risks, limits, and probabilities in making decisions involving a knowledge of science or technology; (3) recognizing when a cause-and-effect relationship cannot be drawn; and (4) distinguishing evidence from propaganda, fact from fiction, sense from nonsense, and knowledge from opinion. unfortunately, the parents least motivated to obtain timely immunizations for their children are often characterized by a low educational level of either parent. 383 there is a wide gap in the level of biomedical understanding across the us population, and this gap emphasizes the need for tailored information. the need for tailored information applies to all areas of health, including childhood immunizations. immunization educational materials aimed at a middle level or a "one size fits all" approach are not likely to satisfy all parents' needs. 382

the importance of educating parents concerned about vaccines
why should we care about a small number of parents who are worried about vaccines for their children? we should care because it is not only ethically the right thing to do, it is also the right thing to do from a practical viewpoint. vaccine acceptability refers to the factors that go into parents' decisions to have their children immunized. it is important not to assume that just because most parents are having their children immunized that they will continue to do so.
384 while the host of factors contributing to parents' decisions to have their children immunized (eg, need for information, experience with adverse events) might remain stable for some time, it is possible that one or more of the factors may change so that some parents perceive the risks of vaccines to be greater than the risk of disease. this would then push the parents above a theoretical "unacceptability threshold" at which they would choose not to have their children immunized with one or more vaccines. this is especially possible as more vaccines are added to the immunization schedule. an increasing number of parents have a choice, through religious or philosophical immunization exemption laws or schooling their children at home. 385 averting the future possibility of outbreaks of vaccine-preventable diseases will take a concerted effort by health care and public health professionals to educate and better communicate with parents concerned about immunizations. in guidance for clinicians, the american academy of pediatrics suggests that pediatricians should listen carefully and respectfully to parents' immunization concerns, factually communicate the risks and benefits of vaccines, and work with parents who may be concerned about a specific vaccine or about having their child receive multiple vaccines in one visit. 386 providers can make a huge impact on vaccine acceptance, resulting in a cascading effect in which providing information can increase trust and increasing trust can lead to greater acceptance of and confidence in vaccines. for health care providers to be able to optimally fill this important role, however, two related issues should be addressed. the first is the need for quality communication courses and training in medical schools, residencies, and training programs for medical and public health professionals. 387, 388 the second is for mcos and medical insurance companies to adequately reimburse physicians for health education.
lack of reimbursement to physicians has been noted as a barrier to implementation of behavioral treatments for health issues such as heart disease 353 and smoking. 389 it is important to note that studies have shown education programs can be a cost savings to health care systems. 390, 391 we live in a world already benefiting from vaccines that exist, and there is the promise of more vaccines to come. the challenge we have now is to make sure that the promise is not lost because we did not present the benefits and risks of vaccines in a meaningful way acceptable to the public.

an optimal immunization safety system requires rigorous attention to safety during prelicensure research and development; active monitoring for potential safety problems after licensure; and clinical research and risk management activities, including risk communication, focused on minimizing potential vaccine adverse reactions. prelicensure activities form the foundation of vaccine safety. rapid advances in biotechnology are leading to the development of new vaccines, 392 and novel delivery technologies, such as dna vaccines and new adjuvants, are being developed to permit more antigens to be combined, reducing the number of injections. 142, 393 new technologies can also be expected to be used to detect potential safety problems throughout the research and development process (eg, adventitious agents). a challenge will be determining the proper role and interpretation of new technologies. for example, a recent study used powerful new metagenomics and panmicrobial microarray technologies to screen for adventitious viral nucleic acid sequences in a number of vaccines. 394 the study identified the presence of dna from porcine circovirus type 1 (pcv1) in rotarix. this finding led to a temporary suspension of the use of the vaccine while the fda evaluated the study and its implications.
ultimately, it was determined that the presence of the pcv1 nucleic acid sequences did not represent a health concern, and use of the vaccine was allowed to resume. 395 in the prelicensure evaluation of new vaccines, the trend toward conducting larger phase 3 trials enrolling tens of thousands of participants is likely to continue. while such larger trials are helpful in identifying more rare adverse events, even these larger trials may not be large enough to detect increased risks of rare events. for example, the rotarix prelicensure trial identified no increased risk of intussusception in a study that enrolled more than 60,000 infants. 49 the manufacturer nevertheless committed to conduct a large postlicensure safety monitoring study. a preliminary analysis of postlicensure monitoring data from mexico identified a statistically significant increased risk within 30 days of vaccination with an attributable risk of approximately 1 per 100,000. 396 the attributable risk was much less than that found for rotashield (approximately 1 per 10,000), and no changes were made to the vaccine recommendations. although technological advances and more thorough evaluation of safety before vaccines are licensed should lead to the development of safer vaccines, there will continue to be a need for comprehensive postlicensure safety monitoring systems. combined with the difficulties associated with identifying rare, delayed, or insidious vaccine safety problems in prelicensure studies, 43 the well-organized consumer activist organizations, 339 internet information of questionable accuracy, 309, 310 media eagerness for controversy, 311, 340 and relatively rare individual encounters with vaccine-preventable diseases virtually ensure that vaccine safety concerns are unlikely to go away.
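the attributable risks discussed above (approximately 1 per 100,000 vaccinees for rotarix versus 1 per 10,000 for rotashield) can be made concrete with a back-of-the-envelope calculation. a minimal sketch, in which the cohort size is a hypothetical assumption for illustration and not a figure from the monitoring studies:

```python
# Back-of-the-envelope excess-case estimate from an attributable risk.
# The birth-cohort size below is a hypothetical illustration, not data
# from the Rotarix or RotaShield monitoring studies.

def excess_cases(attributable_risk: float, vaccinated: int) -> float:
    """Expected vaccine-attributable cases = risk per vaccinee x number vaccinated."""
    return attributable_risk * vaccinated

cohort = 1_000_000  # hypothetical number of vaccinated infants

rotarix_like = excess_cases(1 / 100_000, cohort)    # ~1 per 100,000
rotashield_like = excess_cases(1 / 10_000, cohort)  # ~1 per 10,000

print(f"~{rotarix_like:.0f} vs ~{rotashield_like:.0f} excess cases per {cohort:,} vaccinees")
# → ~10 vs ~100 excess cases per 1,000,000 vaccinees
```

the tenfold gap in expected excess cases is one way to see why rotashield was withdrawn while the rotarix recommendations were left unchanged.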
the existence of a robust vaccine safety monitoring system is essential for providing assurance of the safety of currently marketed vaccines and for rapidly identifying and responding to potential safety problems. currently, srss, such as vaers, serve as the frontline systems for the early identification of vaccine safety problems. such systems could be improved if reporting were more complete. application of web-based and text messaging technologies could make reporting easier and more accurate and also enable more active follow-up of vaccinated persons. alerts built into electronic medical record systems could also improve reporting to vaers, as could linkages with immunization registries. some of these advances will be particularly important to enable monitoring vaccine safety in mass vaccination campaigns during which vaccinations may be administered primarily outside of the traditional health care system. an optimal vaccine safety monitoring system must also include a mechanism or infrastructure to rapidly conduct formal epidemiologic evaluations of potential safety problems identified from srss or other sources. in the united states, this function is primarily served by the vsd project. the diffusion of electronic health records and the capability to link records across data systems (such as large health insurance claims databases and immunization registries) may allow the expansion of the population that could be included in postlicensure epidemiologic evaluations of vaccine safety. for example, the fda sentinel initiative has a goal to develop a national electronic system covering 100 million people for monitoring the postmarket safety of drugs and other medical products, including vaccines. 397 for adverse reactions that are established to be caused by vaccines, clinical and laboratory research is essential for determining the biological mechanisms of the adverse reaction, which in turn could lead to the development of safer vaccines. 
clinical research is also essential for the development of protocols for safer vaccination, including revaccination of persons who have previously experienced an adverse reaction. advances in genomics and immunology hold particular promise for elucidating biological mechanisms of vaccine adverse reactions and the development of possible screening strategies for persons who may be at high risk for an adverse reaction. a challenge for such research will be identifying sufficient numbers of people who may have rare vaccine adverse reactions and enrolling them into studies in which appropriate biological samples can be collected, stored, and analyzed under a standardized protocol. scientific data are essential in the monitoring and evaluation of vaccine safety, but scientific evidence alone often is not sufficient for providing reassurance about the safety of a vaccine. although immunization levels of us children are high, a sizable fraction of parents do not have their children fully immunized, and concern about vaccine safety is the leading reason for underimmunization. these concerns persist despite the scientific evidence that vaccines do not cause autism or a host of other conditions that have been alleged to be caused by vaccines, such as asthma, diabetes, and autoimmune diseases. thus, it is critically important that public health agencies, medical organizations, and other influential authorities continue to focus on the safety of vaccines and assure public confidence by providing clear, consistent messages on vaccine safety concerns; supporting effective and transparent vaccine safety monitoring systems and research activities; providing review and recommendations by respected independent expert groups on vaccine safety controversies; and engaging advocacy groups in constructive and open dialogue about their vaccine safety concerns. 
although the efforts of government, medical, and other authorities are important, it is health care providers who have the greatest influence in determining the acceptance of vaccines by individual people. even among parents who believe that vaccines may not be safe, most will have their children vaccinated if they have a trusting relationship with an influential health care provider. thus, development of tools and strategies that can assist health care providers in effectively communicating with their patients on the risks and benefits of vaccines will continue to be important. vaccine safety has also become an important concern in developing countries. 398 the high-titer measles vaccine mortality experience highlighted the importance of improving the quality control and evaluating the safety of vaccines used in developing countries. 112 plans to eliminate neonatal tetanus and measles via national immunization days, during which millions of people receive parenteral immunizations over a period of days, 399 pose substantial challenges to ensuring injection safety, 400 especially given concerns about inadequate sterilization of reusable syringes and needles, recycling of disposable syringes and needles, and cross-contamination resulting from the current generation of jet injectors. 401 the who has promoted the use of safer auto-disposable syringes and disposal boxes. 402 these and other new, safer administration technologies are urgently needed. 403 in addition, there is a need to establish minimal vaccine safety monitoring capabilities, such as srss, and the capability to rapidly investigate vaccine safety problems and effectively communicate the findings of the investigations. vaccines are among the most successful and cost-effective public health tools for preventing disease and death. vaccines, however, are not completely without risk of side effects or other adverse outcomes. 
a timely, credible, and effective monitoring system, coupled with prompt action in response to identified safety problems, is essential to preventing adverse effects of vaccination and to maintaining public confidence in immunizations. since immunizations are typically administered to healthy people and are often recommended or mandated to provide societal and individual protection, vaccines must be held to a very high standard of safety. vaccine safety monitoring and research should optimally be able to detect potentially very small levels of increased risk, especially for adverse events that can result in death or permanent disability from vaccines that are universally recommended or mandated. the ultimate goal of such research, including the application of new developments in biotechnology, is to develop safer vaccines and vaccination practices.

references (titles as recovered):
deadly choices: how the anti-vaccine movement threatens us all
ensuring the optimal safety of licensed vaccines: a prospective of the vaccine research, development, and manufacturing companies
active surveillance for adverse events: the experience of the vaccine safety datalink project
understanding the role of human variation in vaccine adverse events: the clinical immunization safety assessment network
addressing parents' concerns: do multiple vaccines overwhelm or weaken the infant's immune system?
addressing parents' concerns: do vaccines cause allergic or autoimmune diseases?
measles-mumps-rubella vaccine and autism
thimerosal in vaccines: a joint statement of the american academy of pediatrics and the public health service
autism's false prophets: bad science, risky medicine, and the search for a cure
addressing parents' concerns: do vaccines contain harmful preservatives, adjuvants, or residuals

acknowledgments: we are grateful to robert davis, deborah gust, robert chen, and charles hackett who contributed sections of this chapter in previous editions of this book and for the excellent assistance on this chapter rendered by the following persons: dan salmon, john iskander, susan scheinman, christine korhonen, allison kennedy, michele russell, tamara murphy, penina haber, and gina mootrey.

key: cord-030116-ucmzbezx
authors: hardell, lennart; carlberg, michael
title: health risks from radiofrequency radiation, including 5g, should be assessed by experts with no conflicts of interest
date: 2020-07-15
journal: oncol lett
doi: 10.3892/ol.2020.11876
sha:
doc_id: 30116
cord_uid: ucmzbezx

the fifth generation, 5g, of radiofrequency (rf) radiation is about to be implemented globally without investigating the risks to human health and the environment. this has created debate among concerned individuals in numerous countries. in an appeal to the european union (eu) in september 2017, currently endorsed by >390 scientists and medical doctors, a moratorium on 5g deployment was requested until proper scientific evaluation of potential negative consequences has been conducted. this request has not been acknowledged by the eu. the evaluation of rf radiation health risks from 5g technology is ignored in a report by a government expert group in switzerland and a recent publication from the international commission on non-ionizing radiation protection. conflicts of interest and ties to the industry seem to have contributed to the biased reports. the lack of proper unbiased risk evaluation of the 5g technology places populations at risk.
furthermore, there seems to be a cartel of individuals monopolizing evaluation committees, thus reinforcing the no-risk paradigm. we believe that this activity should qualify as scientific misconduct. most politicians and other decision-makers using guidelines for exposure to radiofrequency (rf) radiation seem to ignore the risks to human health and the environment. the fact that the international agency for research on cancer (iarc) at the world health organization (who) in may 2011 classified rf radiation in the frequency range of 30 khz to 300 ghz to be a 'possible' human carcinogen, group 2b (1, 2) , is being ignored. this has been recently exemplified in a hearing at the tallinn parliament in estonia (3) . an important factor may be the influence on politicians by individuals and organizations with inborn conflicts of interests (cois) and their own agenda in supporting the no-risk paradigm (4, 5) . the international commission on non-ionizing radiation protection (icnirp) has repeatedly ignored scientific evidence on adverse effects of rf radiation to humans and the environment. their guidelines for exposure are based solely on the thermal (heating) paradigm and were first published in icnirp 1998 (6) , updated in icnirp 2009 (7) and have now been newly published in icnirp 2020 (8) , with no change of concept, only relying on thermal effects from rf radiation on humans. the large amount of peer-reviewed science on non-thermal effects has been ignored in all icnirp evaluations (9, 10) . additionally, icnirp has successfully maintained their obsolete guidelines worldwide. cois can be detrimental, and it is necessary to be as unbiased as possible when assessing health risks. there are three points that should be emphasized. firstly, the evidence regarding health risks from environmental factors may not be unambiguous, and therefore informed judgements must be made. 
furthermore, there are gaps in knowledge that call for experienced evaluations, and no conclusion can be reached without value judgements. secondly, paradigms are defended against the evidence and against external assessments by social networks in the scientific community. thirdly, the stronger the impact of decisions about health risks on economic, military and political interests, the stronger will stakeholders try to influence these decision processes. since the iarc evaluation in 2011 (1, 2) , the evidence on human cancer risks from rf radiation has been strengthened based on human cancer epidemiology reports (9) (10) (11) , animal carcinogenicity studies (12) (13) (14) and experimental findings on oxidative mechanisms (15) and genotoxicity (16) . therefore, the iarc category should be upgraded from group 2b to group 1, a human carcinogen (17) . the deployment of the fifth generation, 5g, of rf radiation is a major concern in numerous countries, with groups of citizens trying to implement a moratorium until thorough research on adverse effects on human health and the environment has been performed. an appeal for a moratorium, currently signed by >390 international scientists and medical doctors, was sent to the european union (eu) in september 2017 (18) , currently with no eu response (19) . several regions have implemented a moratorium on the deployment of 5g motivated by the lack of studies on health effects, for instance geneva (20) . in the present article, the current situation in switzerland is discussed as an example (21) . additionally, the icnirp 2020 evaluation is discussed (8) . several swiss citizens have brought to our attention that associate professor martin röösli is the chair of two important government expert groups in switzerland (directeur), despite possible cois and a history of misrepresentation of science (22, 23) . 
these groups are beratende expertengruppe nis (berenis; the swiss advisory expert group on electromagnetic fields and non-ionizing radiation) (24) , and subgroup 3, the mobile communications and radiation working group of the department of the environment, transport, energy and communications/eidgenössisches departement für umwelt, verkehr, energie und kommunikation, evaluating rf-radiation health risks from 5g technology (25, 26) . the conclusions made in the recent swiss government 5g report (27, 28) are biased. this 5g report concluded that there is an absence of short-term health impacts and an absence or insufficient evidence of long-term effects [see table 17 (tableau 17) on page 69 in the french version (27) and table 17 (tabelle 17) on page 67 in the german version (28) ]. furthermore, it was reported that there is limited evidence for glioma, neurilemmoma (schwannoma) and co-carcinogenic effects, and insufficient evidence for effects on children from prenatal exposure or from their own mobile phone use. regarding cognitive effects, fetal development and fertility (sperm quality), the judgement was that the evidence on harmful effects is insufficient. these evaluations were strikingly similar to those of the icnirp (see appendix b in icnirp 2020; 8). other important endpoints, such as effects on the blood-brain barrier, cell proliferation, apoptosis (programmed cell death), oxidative stress (reactive oxygen species) and gene and protein expression, were not evaluated. accordingly, this swiss evaluation is scientifically inaccurate and is in opposition to the opinion of numerous scientists in this field (18) .
in addition, 252 electromagnetic field (emf) scientists from 43 countries, all with published peer-reviewed research on the biologic and health effects of nonionizing electromagnetic fields (rf-emf) have stated that: 'numerous recent scientific publications have shown that rf-emf affects living organisms at levels well below most international and national guidelines. effects include increased cancer risk, cellular stress, increase in harmful free radicals, genetic damages, structural and functional changes of the reproductive system, learning and memory deficits, neurological disorders, and negative impacts on general wellbeing in humans. damage goes well beyond the human race, as there is growing evidence of harmful effects to both plant and animal life' (30) . we are concerned that the swiss 5g report may be influenced by ties to mobile phone companies (cois) by one or several members of the evaluating group. funding from telecom companies is an obvious coi. martin röösli has been a member of the board of the telecom funded swiss research foundation for electricity and mobile communication (fsm) organization and he has received funding from the same organization (31) (32) (33) . it should be noted that the fsm is a foundation that serves formally as an intermediate between industry and researchers. according to their website, among the five founders of fsm who 'provided the initial capital of the foundation' four are telecommunications companies: swisscom, salt, sunrise, 3g mobile (liquidated in 2011). the fifth founder is eth zurich (technology and engineering university). there are only two sponsors, swisscom (telecommunications) and swissgrid (energy), who 'support the fsm with annual donations that allow for both the management of the foundation and research funding' (34) . the same situation applies to being a member of icnirp (table i) (35) . 
in 2008, the ethical council at karolinska institute in stockholm stated that being a member of icnirp is a potential coi. such membership should always be declared. this verdict was based on activities by anders ahlbom in sweden, at that time a member of icnirp, but is a general statement (2008-09-09; dnr, 3753-2008-609). in summary: 'it is required that all parties clearly declare ties and other circumstances that may influence statements, so that decision makers and the public may be able to make solid conclusions and interpretations. aa [anders ahlbom] should thus declare his tie to icnirp whenever he makes statements on behalf of authorities and in other circumstances' (translated into english). cois with links to industry are of great importance; these links may be direct or indirect funding for research, payment of travel expenses, participation in conferences and meetings, presentation of research, etc. such circumstances are not always declared as exemplified above. a detailed description was recently presented for icnirp members (22) . icnirp is a non-governmental organization (ngo) based in germany. members are selected via an internal process, and the organization lacks transparency and does not represent the opinion of the majority of the scientific community involved in research on health effects from rf radiation. independent international emf scientists in this research area have declared that: 'in 2009, the icnirp released a statement saying that it was reaffirming its 1998 guidelines, as in their opinion, the scientific literature published since that time has provided no evidence of any adverse effects below the basic restrictions and does not necessitate an immediate revision of its guidance on limiting exposure to high frequency electromagnetic fields. icnirp continues to the present day to make these assertions, in spite of growing scientific evidence to the contrary. 
it is our opinion that, because the icnirp guidelines do not cover long-term exposure and low-intensity effects, they are insufficient to protect public health' (30) . icnirp only acknowledges thermal effects from rf radiation. therefore, the large body of research on detrimental non-thermal effects is ignored. this was further discussed in a peer-reviewed scientific comment article (3) . in 2018, icnirp published 'icnirp note: critical evaluation of two radiofrequency electromagnetic field animal carcinogenicity studies published in 2018' (36). it is surprising that this note claims that the histopathological evaluation in the us national toxicology program (ntp) study on animals exposed to rf radiation was not blinded (12, 13) . in fact, unfounded critique of the ntp study had already been rebutted (37); however, this seems to have had little or no impact on this icnirp note casting doubt on the findings of the animal study: 'this commentary addresses several unfounded criticisms about the design and results of the ntp study that have been promoted to minimize the utility of the experimental data on rfr [radiofrequency radiation] for assessing human health risks. in contrast to those criticisms, an expert peerreview panel recently concluded that the ntp studies were well designed, and that the results demonstrated that both gsm-and cdma-modulated rfr were carcinogenic to the heart (schwannomas) and brain (gliomas) of male rats' (37) . in contrast to the opinion of the 13 icnirp commission members, the iarc advisory group of 29 scientists from 18 countries has recently stated that the cancer bioassay in experimental animals and mechanistic evidence warrant high-priority re-evaluation of rf radiation-induced carcinogenesis (38) . surprisingly, the iarc classification of rf-emf exposure as group 2b ('possibly' carcinogenic to humans) from 2011 was concealed in the background material to the new icnirp draft guidelines (39) .
notably, one of the icnirp commission members, martin röösli (40), was also one of the iarc experts evaluating the scientific rf carcinogenicity in may 2011 (41) . he should be well aware of the iarc classification. the iarc classification contradicts the scientific basis for the icnirp guidelines, making novel guidelines necessary and providing a basis to halt the rollout of 5g technology. therefore, the icnirp provides scientifically inaccurate reviews for various governments. one issue is that only thermal (heating) effects from rf radiation are considered, and all non-thermal effects are dismissed. an analysis from the uk demonstrates these inaccuracies (4), also discussed in another article (5) . all members of the icnirp commission are responsible for these biased statements that are not based on solid scientific evidence.

icnirp release of novel guidelines for rf radiation. on march 11, 2020, icnirp published their novel guidelines for exposure to emfs in the range of 100 khz to 300 ghz, thus including 5g (8). the experimental studies demonstrating a variety of non-thermal biological/health effects (9,10) are not considered, as in their previous guidelines (6, 7) . additionally, the icnirp increased the reference levels for the general public averaged over 6 min for rf frequencies >2-6 ghz (those that will be used for 5g in this frequency range), from 10 w/m² (tables 5 and 7 in ref. no. 6) to 40 w/m² (table 6 in ref. no. 8), which paves the way for even higher exposure levels to 5g than the already extremely high ones. background dosimetry is discussed in appendix a of the icnirp 2020 guidelines (8) . the discussion on 'relevant biophysical mechanisms' should be criticized. the only mechanism considered by icnirp is temperature rise, which may also occur with 5g exposure, apart from the established non-thermal biological/health effects (42, 43) .
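the practical meaning of a time-averaged reference level, such as the 6-min averages quoted above, can be sketched numerically. a minimal illustration in which the measurement samples are invented, and the 10 and 40 w/m² limits are the 1998 and 2020 general-public values cited in the text:

```python
# Sketch of a 6-minute time-averaged power-density check against the
# ICNIRP general-public reference levels quoted in the text
# (10 W/m^2 in the 1998 guidelines vs 40 W/m^2 for >2-6 GHz in 2020).
# The measurement samples are invented for illustration.

def six_min_average(samples_w_per_m2: list) -> float:
    """Arithmetic mean of equally spaced power-density samples over a 6-min window."""
    return sum(samples_w_per_m2) / len(samples_w_per_m2)

def compliant(avg: float, reference_level: float) -> bool:
    """A time-averaged exposure complies if it does not exceed the reference level."""
    return avg <= reference_level

# Hypothetical samples: one brief high peak with low exposure in between.
samples = [55.0, 2.0, 2.0, 2.0, 2.0, 2.0]  # W/m^2
avg = six_min_average(samples)

print(f"6-min average: {avg:.1f} W/m^2")            # → 6-min average: 10.8 W/m^2
print("within 1998 level (10 W/m^2):", compliant(avg, 10.0))  # → False
print("within 2020 level (40 W/m^2):", compliant(avg, 40.0))  # → True
```

the invented sample set shows the point the authors raise: because compliance is judged on the window average, a short high peak that would exceed the older 10 w/m² level can sit comfortably under the raised 40 w/m² level.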
it is well known among experts in the emf-bioeffects field that the recorded cellular effects, such as dna damage, protein damage, chromosome damage and reproductive declines, and the vast majority of biological/health effects are not accompanied by any significant temperature rise in tissues (44) (45) (46) (47) . the ion forced-oscillation mechanism (48) should be referred to as a plausible non-thermal mechanism of irregular gating of electrosensitive ion channels on cell membranes, resulting in disruption of the cell electrochemical balance and initiating free radical release and oxidative stress in the cells, which in turn causes genetic damage (15, 49) . the irregular gating of ion channels on cell membranes is associated with changes in permeability of the cell membranes, which icnirp admits in its summary (8) . health risks are discussed in appendix b of the icnirp 2020 guidelines (8) . again, only thermal effects are considered, whereas literature on non-thermal health consequences is disregarded (9, 10, 50) . in spite of public consultations on the draft, the final published version on health effects is virtually identical to the draft version, and comments seem to have been neglected (19) . in the following section, appendix b on health effects (8) is discussed. icnirp notes that the eu scientific committee on emerging and newly identified health risks (scenihr 2015) and the swedish radiation safety authority (ssm) 'have produced several international reports regarding this issue (ssm 2015, 2016, 2018). accordingly, the present guidelines have used these literature reviews as the basis for the health risk assessment associated with exposure to radiofrequency emfs rather than providing another review of the individual studies'. in the last 11 years since its previous icnirp 2009 statement (7), icnirp has not managed to conduct a novel evaluation of health effects from rf radiation.
however, as shown in table i , several of the present icnirp members are also members of other committees, such as the eu scientific committee on emerging and newly identified health risks (scenihr), the swedish radiation safety authority (ssm) and the who, thus creating a cartel of individuals known to propagate the icnirp paradigm on rf radiation (4, 5, 22, 51) . in fact, six of the seven expert members of the who, including emilie van deventer, were also included in icnirp (5, 7) . therefore, emilie van deventer, the team leader of the radiation programme at who (the international emf project), is an observer on the main icnirp commission, and ssm seems to be influenced by icnirp. among the current seven external experts (danker-hopfe, dasenbrock, huss, harbo poulsen, van rongen, röösli and scarfi), five are also members of icnirp, and van deventer used to be part of ssm. as discussed elsewhere (5), it is unlikely that a person's evaluation of health risks associated with exposure to rf radiation would differ depending on what group the person belongs to. therefore, by selecting group members, the final outcome of the evaluation may already be predicted (no-risk paradigm). additionally, we believe that this may compromise a sound scientific code of conduct. the scenihr report from 2015 (52) has been used to legitimate the further expansion of wireless technology and has been the basis for its deployment in a number of countries. one method, applied in the scenihr report, to dismiss cancer risks involves the selective inclusion of studies, excluding studies reporting cancer risks and including some investigations with inferior epidemiological quality. the report has been heavily criticized by researchers with no cois (53) . regarding the ssm, only yearly updates are available and no overall evaluations are made. therefore, no thorough review is presented. over the years, the icnirp has dominated this committee (table i) .
therefore, it is unlikely that the opinion of the ssm will differ from that of the icnirp. in 2014, the who launched a draft of a monograph on rf fields and health for public comments (54) . it should be noted that the who issued the following statement: 'this is a draft document for public consultation. please do not quote or cite'. icnirp completely ignored that request and used the aforementioned document. the public consultations on the draft document were dismissed and never published. in addition to van deventer, five of the six members (mann, feychting, oftedal, van rongen, and scarfi) of the core group in charge of the who draft were also affiliated with icnirp, which constitutes a coi (table i) . scarfi is a former member of icnirp (5) . several individuals and groups sent critical comments to the who on the numerous shortcomings in the draft of the monograph on rf radiation. in general, the who never responded to these comments and it is unclear to what extent, if any, they were even considered. nevertheless, the final version of the who 'in-depth review' has never been published. the authors of the present article were part of a team that applied to review sr1-human cancer. on december 20, 2019, the following reply was received from the who radiation programme: 'after careful review, we have decided to choose another team for this systematic review'. transparency is of importance for the whole process. therefore, a query was sent to the who requesting information regarding the following points: 'who did the evaluation of the groups that answered the call? what criteria were applied? how many groups had submitted and who were these? which groups were finally chosen for the different packages?'. in spite of sending the request four times, january 2, january 3, april 7 and april 30, 2020, there has been no reply from who. this appears to be a secret process behind closed doors. these circumstances have also been reported in microwave news (55) .
it is important to comment on the current icnirp evaluation. notably, on february 27, 2020, two weeks before the icnirp publication, the who team on public health, environmental and social determinants of health issued a statement on 5g mobile networks and health: 'to date, and after much research performed, no adverse health effect has been causally linked with exposure to wireless technologies' (56) . this statement is not correct based on current knowledge (4, 5, (9) (10) (11) 17, 19) and was without a personal signature. the lack of research on 5g safety has been previously discussed (19) . furthermore, there is no evidence that can 'causally link' an adverse effect to an exposure. causality is not an empirical fact; it is an interpretation. in the following section, only one (cancer) of the eight different end points in the icnirp publication (8) is discussed, since it deals with our main research area. viii) cancer. 'in summary, no effects of radiofrequency emfs on the induction or development of cancer have been substantiated. the only substantiated adverse health effects caused by exposure to radiofrequency emfs are nerve stimulation, changes in the permeability of cell membranes, and effects due to temperature elevation. there is no evidence of adverse health effects at exposure levels below the restriction levels in the icnirp (1998) guidelines and no evidence of an interaction mechanism that would predict that adverse health effects could occur due to radiofrequency emf exposure below those restriction levels'. the icnirp draft (39) has been previously described to some extent (19) . the published final version on health effects is virtually identical to the draft. it cannot be taken at face value as scientific evidence of no risk from rf radiation. one example is the following statement (p. 41): '…a set of case-control studies from the hardell group in sweden report significantly increased risks of both acoustic neuroma and malignant brain tumors already after less than five years since the start of mobile phone use, and at quite low levels of cumulative call time'. this allegation is not correct according to our publication on glioma (11) . in the shortest latency group >1-5 years, the risk of glioma was not increased (odds ratio (or), 1.1; 95% ci, 0.9-1.4) for use of wireless phones (mobile phone and/or cordless phone). there was a statistically significant increased risk of glioma per 100 h of cumulative use (or, 1.011; 95% ci, 1.008-1.014) and per year of latency (or, 1.032; 95% ci, 1.019-1.046) (11) . these published results are in contrast to the icnirp claims. regarding acoustic neuroma, the corresponding detailed results are reported in our previous study (57) . the shortest latency period >1-5 years yielded an or of 1.2 (95% ci, 0.8-1.6) for use of wireless phones; the risk increased per 100 h of cumulative use (or, 1.008; 95% ci, 1.002-1.014) and per year of latency (or, 1.056; 95% ci, 1.029-1.085) (57) . therefore, the allegation by icnirp is false. it is remarkable that icnirp is uninformed and that their writing is based on a misunderstanding of the peer-reviewed published articles, as exemplified above. additionally, our studies (11, 57) and another study by coureau et al (58) , as well as the iarc evaluation from 2011 (1,2), are not included among the references. several statements by icnirp are made without any scientific references. on the other hand, the danish cohort study on mobile phone use (59) is included, in spite of the fact that it was judged by iarc (1,2), as well as in our review (60) , to be uninformative. a biased article written by authors including icnirp members, used to 'prove' the no-risk paradigm for rf radiation carcinogenesis (23) , is cited by icnirp.
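odds ratios with 95% confidence intervals, like those quoted above, are computed from 2×2 case-control tables; the interval is commonly obtained with the woolf (log-normal) approximation. a minimal sketch with invented counts, not the hardell group's data:

```python
import math

# Odds ratio with a Woolf (log-normal) 95% CI from a 2x2 case-control table.
# The counts below are invented for illustration; they are not taken from
# the Hardell studies discussed in the text.

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)                        # cross-product odds ratio
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of ln(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical table: 150 exposed / 270 unexposed cases,
# 100 exposed / 330 unexposed controls.
or_, lo, hi = odds_ratio_ci(150, 270, 100, 330)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# → OR 1.83 (95% CI 1.36-2.47)
```

a confidence interval whose lower bound stays above 1.0, as in this hypothetical table, is what 'statistically significant increased risk' means in the results quoted above; an interval spanning 1.0, such as the quoted or of 1.1 (0.9-1.4), is not significant.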
notably, the article has not undergone relevant peer review and we believe that it should not have been published in its current version. the shortcomings in the aforementioned article are discussed in the following sections. as discussed below, another claim (23) regarding the increased risk of brain tumors associated with use of wireless phones is incorrect: 'however, they are not consistent with trends in brain cancer incidence rates from a large number of countries or regions, which have not found any increase in the incidence since mobile phones were introduced'. the criticism of the icnirp draft guidelines from 2018 by the emf call (61) can also be applied to the current icnirp publication. the call has been signed by 164 scientists and medical doctors, as well as 95 ngos: 'the international commission on non-ionizing radiation protection (icnirp) issued draft guidelines on 11th july 2018 for limiting exposure to electric, magnetic and electromagnetic fields (100 khz to 300 ghz). these guidelines are unscientific, obsolete and do not represent an objective evaluation of the available science on effects from this form of radiation. they ignore the vast amount of scientific findings that clearly and convincingly show harmful effects at intensities well below icnirp guidelines. we ask the united nations, the world health organization, and all governments to support the development and consideration of medical guidelines, that are independent of conflict of interests in terms of direct or indirect ties to industry, that represent the state of medical science, and that are truly protective'. in the recent report on icnirp published by two members of the european parliament it is concluded: 'that is the most important conclusion of this report: for really independent scientific advice we cannot rely on icnirp. the european commission and national governments, from countries like germany, should stop funding icnirp.
it is high time that the european commission creates a new, public and fully independent advisory council on non-ionizing radiation' (22). published article. this section discusses an article whose conclusions are not substantiated by scientific evidence, which represents a biased evaluation of cancer risks from mobile phone use and is an example of a lack of objectivity and impartiality (23). the aforementioned report was used by icnirp 2020 (8) to validate that no risks have been found for brain and head tumors; therefore, the article should be discussed in further detail. it has numerous severe scientific deficiencies. one is that the results on use of cordless phones as a risk factor for brain tumors are not discussed. in fact, detailed results on cordless phones in the studies by hardell et al (11,57) are omitted. when discussing glioma risk, all results on cumulative use of mobile phones, as well as on ipsilateral or contralateral use in relation to tumor localization in the brain, are omitted from the figures in the main text. some results in the article by röösli et al (23), such as cumulative use, can be found in the supplementary material, although the increased risk among heavy users is disregarded (11,57,58,62). in supplementary figure 4, all odds ratios regarding long-term (≥10 years) use of mobile phones are above unity (>1.0) for glioma and neuroma (23). no results are provided for ipsilateral mobile phone use (mobile phone use on the same side as the tumor localization), which is of great biological importance. results on cumulative use, latency and ipsilateral use are especially important for risk assessment and have shown a consistent pattern of increased risk for brain and head tumors (11,57). in the aforementioned article, recall bias is discussed as the reason for the increased risk (23). the studies by hardell et al (11,57) included all types of brain tumors.
in one analysis, meningioma cases in the same study were used as the 'control' entity (11), and still a statistically significant increased risk of glioma was identified for mobile phone use (ipsilateral or, 1.4; 95% ci, 1.1-1.8; contralateral or, 1.0; 95% ci, 0.7-1.4) and for cordless phone use (ipsilateral or, 1.4; 95% ci, 1.1-1.9; contralateral or, 1.1; 95% ci, 0.8-1.6). if the results were 'explained' by recall bias, similar results would have been obtained for both glioma and meningioma, and this type of analysis would not have yielded an increased glioma risk. likewise, for acoustic neuroma a statistically significant increased risk was found using meningioma cases as 'controls' (57). therefore, the results in the studies by hardell et al (11,57) cannot be explained by a systematic difference in the assessment of exposure between cases and controls. these important methodological findings were disregarded by röösli et al (23). in the analyses of long-term use of mobile phones, a danish cohort study on mobile phone use is included (59), which was concluded to be uninformative in the 2011 iarc evaluation (1,2). a methodological shortcoming of that study was that only private mobile phone subscribers in denmark between 1982 and 1995 were included in the exposure group (59). the most exposed group, comprising 200,507 corporate users of mobile phones, was excluded and instead included in the unexposed control group consisting of the rest of the danish population. users with a mobile phone subscription after 1995 were not included in the exposed group and were thus treated as unexposed at the cut-off of the follow-up. no analysis of laterality of mobile phone use in relation to tumor localization was performed. notably, this cohort study is now included in the risk calculations, although martin röösli was a member of the iarc evaluation group and should have been aware of the iarc decision.
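the case-case comparisons above rest on ordinary 2x2-table odds ratios with log-based (woolf) confidence intervals. as a reminder of the underlying arithmetic, this minimal sketch computes an odds ratio and its 95% ci; the cell counts used here are hypothetical and are not taken from the cited studies:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(a=120, b=80, c=90, d=110)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

an interval that excludes 1.0, as in several of the ipsilateral results quoted above, is what the text means by a 'statistically significant' increased risk.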
the numerous shortcomings in the danish cohort study, discussed in detail in a peer-reviewed article (60), are omitted in the article by röösli et al (23). regarding animal studies, the study by falcioni et al (14) at the ramazzini institute on rf radiation carcinogenesis is only mentioned as a reference; the results are not discussed. in fact, these findings (14) provide supportive evidence for the risk found in human epidemiology studies (3), as well as for the results of the ntp study (12,13). furthermore, for incidence studies on brain tumors, the results are not presented in an adequate way. there is considerable emphasis on the swedish cancer register data (63,64), but the numerous shortcomings in the reporting of brain tumor cases to the register are not discussed. these shortcomings have been presented in detail in a previous study (63), but are disregarded by röösli et al (23). there is clear evidence from several countries of increasing numbers of patients with brain tumors, such as sweden (63,64), england (65), denmark (66) and france (67). the article by röösli et al (23) does not represent an objective scientific evaluation of brain and head tumor risk associated with the use of wireless phones, and should thus be disregarded. by omitting results of biological relevance and including studies that have been judged to be uninformative, the authors come to the conclusion that there are no risks: 'in summary, current evidence from all available studies including in vitro, in vivo, and epidemiological studies does not indicate an association between mp [mobile phone] use and tumors developing from the most exposed organs and tissues'. röösli et al (23) disregard the concordance of increased cancer risk in human epidemiology studies (11,57,58,62), animal studies (12-14,68,69) and laboratory studies (15,16,37). it is unfortunate that the review process for the aforementioned article was not of adequate quality.
finally, the article contains no statement on specific funding for this particular work, which is not acceptable; only a limited number of comments on general funding are provided. it is not plausible that there was no funding for the study. we believe that, due to its numerous limitations, the aforementioned article should not have been published. cefalo. in 2011, a case-control study on mobile phone use and brain tumor risk among children and adolescents, termed cefalo, was published (70). the study appears to have been designed to misrepresent the true risk, since the following question regarding cordless phone use was asked: 'how often did [child] speak on the cordless phone in the first 3 years he/she used it regularly?'. there are no scientifically valid reasons to limit the investigation to the first 3 years. the result is a misrepresentation and an erroneous exposure classification, since aydin et al (70) willingly omitted any increase in the child's use of, and exposure to, cordless phone radiation after the first 3 years of use. this unscientific treatment of cordless phone exposure was not mentioned in the article other than in a footnote to a table and in the methods section (70), and no explanation was provided: 'specifically, we analyzed whether subjects ever used baby monitors near the head, ever used cordless phones, and the cumulative duration and number of calls with cordless phones in the first 3 years of use'. since previous studies have demonstrated that these phone types, in addition to mobile phones, increase brain tumor risk (11,57), we believe that the exclusion of a complete exposure history for the use of cordless phones represents scientific misconduct.
in a critical comment, the authors of the present study wrote: 'further support of a true association was found in the results based on operator-recorded use for 62 cases and 101 controls, which for time since first subscription >2.8 years yielded or 2.15 (95% ci 1.07-4.29) with a statistically significant trend (p = 0.001). the results based on such records would be judged to be more objective than face-to-face interviews, as in the study that clearly disclosed to the interviewer who was a case or a control. the authors disregarded these results on the grounds that there was no significant trend for operator data for the other variables - cumulative duration of subscriptions, cumulative duration of calls and cumulative number of calls. however, the statistical power in all the latter groups was lower since data was missing for about half of the cases and controls with operator-recorded use, which could very well explain the difference in the results' (71). our conclusion was that: 'we consider that the data contain several indications of increased risk, despite low exposure, short latency period, and limitations in the study design, analyses and interpretation. the information certainly cannot be used as reassuring evidence against an association, for reasons that we discuss in this commentary' (71). this is in contrast to the authors, who claimed that the study provided reassuring evidence of no risk in a press release from martin röösli, july (73). considering the results and the numerous scientific shortcomings of the study (70), the statements in these press releases are not correct. there is no doubt that several of the individuals included in table i are influential, being members of, and having consulting assignments with, several organizations, such as icnirp, berenis, the ssm, the program electromagnetic fields and health from zonmw in the netherlands, and the rapid response group for the japan emf information center (74).
in fact, there appears to be a cartel of individuals working on this issue (75). associate professor martin röösli was given the chance to provide his view on the content of the present article relating to him. the only message from him was in an e-mail dated january 16, 2020: 'just to be clear, all my research is funded by public money or not-for-profit fundations [foundations]. i think you will not help an important debate if you spread fake news'. obviously, as described in the present article, his comment is not correct considering his funding from the telecom industry (76,77). as shown in table i, a few individuals, mostly the same ones, are involved in different evaluations of health risks from rf radiation and will thus propagate the same views in agencies of different countries associated with the icnirp views (4,5). therefore, it is unlikely that they will change their opinions when participating in different organizations. furthermore, their competence in natural sciences, such as medicine, is often low or non-existent due to a lack of education in these disciplines (2); therefore, any chance of solid evaluations of medical issues is hampered. additionally, it must be concluded that if the 'thermal only' dogma is dismissed, this will have wide consequences for the whole wireless community, including permissions for base stations, regulation of wireless technology and marketing, and plans to roll out 5g, and it would therefore have a large impact on the industry. this may explain the resistance to acknowledging the risk by icnirp, the eu, the who, the ssm and other agencies. however, the most important aspects to consider are human wellbeing and a healthy environment. telecoms can make a profit in a variety of ways, and wireless is just one of them. they have the capacity to maintain profits by using different techniques, such as optical fiber, that will provide more data with less rf radiation exposure.
this is particularly relevant when considering the liability they are incurring through their misguided insistence on wireless expansion, which may ultimately catch up with them in the form of lawsuits, such as those previously experienced by asbestos and tobacco companies (78,79). a recent book describes how deception is used to capture agencies and hijack science (80). there are certain tools that can be used for this. one is to re-analyze existing data using methods that are biased towards predetermined results (23). for example, this can be performed by hiring 'independent experts' to question scientific results and create doubt (81,82). as clearly discussed in a number of chapters of these books (80-82), front groups may be created to gain access to politicians and to influence the public with biased opinions. other methods may involve intimidating and harassing independent scientists who report health risks based on sound science, or removing all funding from scientists who do not adhere to the no-risk, pro-industry paradigm. another tool is economic support and the courting of decision makers with special information sessions that mislead them on the science and mask bribery (3,5,19,80-82). an industry with precise marketing goals has a big advantage over a loose scientific community with little funding. furthermore, access to regulatory agencies and overwhelming them with comments on proposed regulations is crucial (3). counteracting all these actions is time-consuming and not always successful (19). nevertheless, it is important that these circumstances are explored and published in the peer-reviewed literature as historical notes for future use. based on the swiss and icnirp experiences, some recommendations can be made. one is to include only unbiased and experienced experts without cois in the evaluation of health risks from rf radiation.
all countries should declare a moratorium on 5g until independent research, performed by scientists without any ties to the industry, has confirmed whether or not it is safe. 2g, 3g, 4g and wifi are also considered not to be safe, but 5g will be worse regarding harmful biological effects (42,83,84). the authors of the present article recommend a campaign to educate the public about the health risks of rf radiation exposure and the safe use of the technology, such as the deployment of wired internet in schools (85), as previously recommended by the european council resolution 1815 in 2011 (86) and the emf scientist appeal (87). additionally, it is recommended that governments take steps to markedly decrease the current exposure of the public to rf radiation (88,89). notably, dna damage has been identified in peripheral blood lymphocytes using the comet assay technique, and in buccal cells using the micronucleus assay, in individuals exposed to rf radiation from base stations (90). finally, an alternative to the flawed icnirp safety standards may be the comprehensive work of the european academy for environmental medicine (europaem) emf working group, which has resulted in safety recommendations that are free from the icnirp shortcomings (50). recently, the international guidelines on non-ionising radiation (ignir) have accepted the europaem safety recommendations (91). the bioinitiative group has recommended similar safety standards based on non-thermal emf effects (92). the who and all nations should adopt the europaem/bioinitiative/ignir safety recommendations, supported by the majority of the scientific community, instead of the obsolete icnirp standards. in conclusion, it is important that all experts evaluating scientific evidence and assessing health risks from rf radiation do not have cois or bias. being a member of icnirp and being funded by the industry directly, or through an industry-funded foundation, constitute clear cois.
furthermore, it is recommended that the interpretation of results from studies on health effects of rf radiation takes sponsorship from the telecom or other industry into account. it is concluded that icnirp has failed to conduct a comprehensive evaluation of health risks associated with rf radiation. the latest icnirp publication cannot be used for guidelines on this exposure.

data sharing is not applicable to this article, as no datasets were generated or analyzed during the present study. lh and mc contributed to the conception, design and writing of the manuscript. both authors read and approved the final manuscript. not applicable. not applicable.

references:
- who international agency for research on cancer monograph working group: carcinogenicity of radiofrequency electromagnetic fields
- iarc monographs on the evaluation of carcinogenic risks to humans: non-ionizing radiation
- as regards the deployment of the fifth generation, 5g, of wireless communication
- inaccurate official assessment of radiofrequency safety by the advisory group on non-ionising radiation
- world health organization, radiofrequency radiation and health - a hard nut to crack (review)
- international commission on non-ionizing radiation protection: guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 ghz)
- international commission on non-ionizing radiation protection: icnirp statement on the 'guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields (up to 300 ghz)'
- international commission on non-ionizing radiation protection (icnirp): guidelines for limiting exposure to electromagnetic fields (100 khz to 300 ghz)
- thermal and non-thermal health effects of low intensity non-ionizing radiation: an international perspective
- cancer epidemiology update, following the 2011 iarc evaluation of radiofrequency electromagnetic fields (monograph 102)
- mobile phone and cordless phone use and the risk for glioma - analysis of pooled case-control studies in sweden
- national toxicology program: ntp technical report on the toxicology and carcinogenesis studies in hsd:sprague dawley sd rats exposed to whole-body radio frequency radiation at a frequency (900 mhz) and modulations (gsm and cdma) used by cell phones
- report of final results regarding brain and heart tumors in sprague-dawley rats exposed from prenatal life until natural death to mobile phone radiofrequency field representative of a 1.8 ghz gsm base station environmental emission
- oxidative mechanisms of biological activity of low-intensity radiofrequency radiation
- evaluation of the genotoxicity of cell phone radiofrequency radiation in male and female rats and mice following subchronic exposure
- evaluation of mobile phone and cordless phone use and glioma risk using the bradford hill viewpoints from 1965 on association or causation
- appeals that matter or not on a moratorium on the deployment of the fifth generation, 5g, for microwave radiation
- environmental health trust: three-year moratorium on 4g and 5g in
- head of swiss radiation protection committee accused of 5g-swindle. nordic countries deceived
- the international commission on non-ionizing radiation protection: conflicts of interest, corporate capture and the push for 5g
- brain and salivary gland tumors and mobile phone use: evaluating the evidence from various epidemiological study designs
- berenis - swiss expert group on electromagnetic fields and non-ionising radiation
- office fédéral de l'environnement: téléphonie mobile et 5g: le conseil fédéral décide de la suite de la procédure
- département fédéral de l'environnement, des transports, de l'énergie et de la communication: groupe de travail téléphonie mobile et rayonnement: présentation d'un rapport factuel global
- groupe de travail téléphonie mobile et rayonnement: rapport téléphonie mobile et rayonnement. publié par le groupe de travail téléphonie mobile et rayonnement sur mandat du detec
- herausgegeben von der arbeitsgruppe mobilfunk und strahlung im auftrag des uvek
- un groupe de travail fédéral temporise sur les risques de santé et ne fixe pas de limite aux rayonnements
- emfscientist: international appeal: scientists call for protection from non-ionizing electromagnetic field exposure
- swiss research foundation for electricity and mobile communication: organisation
- swiss research foundation for electricity and mobile communication: publications
- swiss research foundation for electricity and mobile communication: annual report
- swiss research foundation for electricity and mobile communication: sponsors and supporters
- international commission on non-ionizing radiation protection (icnirp): icnirp note: critical evaluation of two radiofrequency electromagnetic field animal carcinogenicity studies published
- commentary on the utility of the national toxicology program study on cell phone radiofrequency radiation data for assessing human health risks despite unfounded criticisms aimed at minimizing the findings of adverse health effects
- iarc monographs priorities group: advisory group recommendations on priorities for the iarc monographs
- international commission on non-ionizing radiation protection: guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields (100 khz to 300 ghz)
- international commission on non-ionizing radiation protection: commission
- iarc monographs on the evaluation of carcinogenic risks to humans
- systematic derivation of safety limits for time-varying 5g radiofrequency exposure based on analytical models and thermal dose
- exposure of insects to radio-frequency electromagnetic fields from 2 to 120 ghz
- effects of electromagnetic fields on molecules and cells
- the effects of radiofrequency fields on cell proliferation are non-thermal
- comparing dna damage induced by mobile telephony and other types of man-made electromagnetic fields
- chromosome damage in human cells induced by umts mobile telephony radiation
- mechanism for action of electromagnetic fields on cells
- electromagnetic fields act via activation of voltage-gated calcium channels to produce beneficial or adverse effects
- europaem emf guideline 2016 for the prevention, diagnosis and treatment of emf-related health problems and illnesses
- scientific committee on emerging and newly identified health risks (scenihr): opinion on potential health effects of exposure to electromagnetic fields (emf). european commission
- comments on scenihr: opinion on potential health effects of exposure to electromagnetic fields
- world health organization: radio frequency fields: environmental health criteria monograph
- consultation on the scientific review for the upcoming who environmental health criteria
- microwave news: will who kick its icnirp habit? non-thermal effects hang in the balance. repacholi's legacy of industry cronyism
- world health organization: 5g mobile networks and health
- pooled analysis of case-control studies on acoustic neuroma diagnosed 1997-2003 and 2007-2009 and use of mobile and cordless phones
- mobile phone use and brain tumours in the cerenat case-control study
- cellular telephones and cancer - a nationwide cohort study in denmark
- review of four publications on the danish cohort study on mobile phone subscribers and risk of brain tumors
- the emf call: scientists and ngos call for truly protective limits for exposure to electromagnetic fields (100 khz to 300 ghz)
- brain tumour risk in relation to mobile telephone use: results of the interphone international case-control study
- increasing rates of brain tumours in the swedish national inpatient register and the causes of death register
- mobile phones, cordless phones and rates of brain tumors in different age groups in the swedish national inpatient register and the swedish cancer register during 1998-2015
- brain tumours: rise in glioblastoma multiforme incidence in england 1995-2015 suggests an adverse environmental or lifestyle factor
- microwave news: spike in 'aggressive' brain cancer in denmark
- brain cancers: 4 times more new cases of glioblastoma in 2018 according to public health france
- indication of cocarcinogenic potential of chronic umts-modulated radiofrequency exposure in an ethylnitrosourea mouse model
- tumor promotion by exposure to radiofrequency electromagnetic fields below exposure limits for humans
- mobile phone use and brain tumors in children and adolescents: a multicenter case-control study
- childhood brain tumour risk and its association with wireless phones: a commentary
- kein erhöhtes hirntumorrisiko bei kindern und jugendlichen wegen handys
- reassuring results from first study on young mobile users and cancer risk
- swedish radiation safety authority: declaration of disqualification, conflicts of interest and other ties for experts and specialists of the swedish radiation safety authority
- electromagnetic radiation safety: icnirp's exposure guidelines for radio frequency fields
- swiss research foundation for electricity and mobile communication: list of funded research projects
- swiss research foundation for electricity and mobile communication: sponsors and supporters
- secret ties in asbestos - downplaying and effacing the risks of a toxic mineral
- greenwashing: the swedish experience
- the triumph of doubt: dark money and the science of deception
- doubt is their product. how industry's assault on science threatens your health
- corporate ties that bind. an examination of corporate manipulation and vested interest in public health
- towards 5g communication systems: are there health implications?
- 5g wireless telecommunications expansion: public health and environmental implications
- measurements of radiofrequency radiation with a body-borne exposimeter in swedish schools with wi-fi. front public health 5: 279
- radiofrequency radiation from nearby mobile phone base stations - a case comparison of one low and one high exposure apartment compared with results on brain and heart tumour risks in rats exposed to 1.8 ghz base station environmental emissions
- international guidelines on non-ionising radiation: guidelines. ignir's latest independent guidelines on emf exposure are available now to download and use
- a rationale for biologically-based exposure standards for low-intensity electromagnetic radiation
- swedish radiation safety authority: publications

this work is licensed under a creative commons attribution-noncommercial license. the authors would like to thank mr. reza ganjavi for valuable comments. no funding was received. the authors declare that they have no competing interests.

key: cord-017620-p65lijyu
authors: rodriguez-proteau, rosita; grant, roberta l.
title: toxicity evaluation and human health risk assessment of surface and ground water contaminated by recycled hazardous waste materials
date: 2005-07-07
journal: water pollution
doi: 10.1007/b11434
sha:
doc_id: 17620
cord_uid: p65lijyu

prior to the 1970s, principles involving the fate and transport of hazardous chemicals from either hazardous waste spills or landfills into ground water and/or surface water were not fully understood. in addition, national guidance on proper waste disposal techniques was not well developed. as a result, there were many instances where hazardous waste was not disposed of properly, such as the love canal environmental pollution incident. this incident led to the passage of the resource conservation and recovery act (rcra) of 1976. this act gave the united states environmental protection agency regulatory control of all stages of the hazardous waste management cycle. presently, numerous federal agencies provide guidance on methods and approaches used to evaluate potential health effects and assess risks from contaminated source media, i.e., soil, air, and water.
these agencies also establish exposure standards or health benchmark values for the different media at levels that are not expected to produce environmental or human health impacts. risk assessment methodology is applied by various regulatory agencies in the following steps: i) hazard identification; ii) dose-response (quantitative) assessment; iii) exposure assessment; and iv) risk characterization. the overall objectives of risk assessment are to balance risks and benefits; to set target levels; to set priorities for program activities at regulatory agencies, industrial or commercial facilities, or environmental and consumer organizations; and to estimate residual risks and the extent of risk reduction. this chapter will provide information on the concepts used in estimating risk and hazard due to exposure to ground and surface waters contaminated by the recycling of hazardous waste and/or hazardous waste materials for each of the steps in the risk assessment process. moreover, this chapter will provide examples of contaminated water exposure pathway calculations, as well as information on current guidelines, databases, and resources such as current drinking water standards, health advisories, and ambient water quality criteria. finally, specific examples of contaminants released from recycled hazardous waste materials and case studies evaluating the human health effects of contamination of ground and surface waters by recycled hazardous waste materials will be provided and discussed. after world war ii, industries began to produce a whole new generation of industrial and consumer goods made of synthetic organic chemicals such as plastics, solvents, detergents, and pesticides. industries profited enormously from the production and marketing of these products, and consumers became accustomed to the convenience of synthetic products as well as cheap, convenient, throwaway packaging materials.
as the industrial production of these products increased, so did the production, accumulation, and disposal of hazardous waste. prior to 1976, facilities that handled and/or disposed of hazardous waste were not provided with detailed regulations and/or guidance on proper waste handling and disposal techniques, and, as a result, there were many instances where hazardous waste was improperly disposed of. when chemicals are improperly disposed of in the environment, abandoned hazardous waste sites are created that potentially affect human health and cost our society billions of dollars, due to the high cost not only of evaluating human health and environmental impacts but also of performing site clean-ups. an example of one of the most well-known incidents of improper disposal of hazardous waste was the love canal environmental pollution incident [1]. this incident led to the passage of the resource conservation and recovery act (rcra) of 1976. this act gave the united states environmental protection agency (usepa) regulatory control of all stages of the hazardous waste management cycle, from 'cradle to grave'. beginning in the 1970s, congress passed several other acts designed to protect human health and the environment (table 1). based on the legislative directives in these acts, the usepa has issued numerous rules, regulations, and guidance documents to ensure that the use, disposal, processing, and handling of hazardous waste do not result in impacts to human health or the environment. state governments are authorized to implement the rules and regulations promulgated by the usepa, to permit facilities that handle hazardous waste in their states, and to create additional state rules and regulations that apply to the operations of facilities in their specific state. the emphasis in recent years has been on preventing pollution by recycling hazardous waste, followed by proper disposal practices.
rcra defines recyclable materials as "hazardous waste that are reclaimed to recover a usable product." recycling is a broad term that applies to those who use, reuse, or reclaim waste to use as an ingredient to make a product and to use as an effective substitute for a commercial product. a material is reclaimed if it is processed to recover a useful by-product. the systematic scientific approach of evaluating potential adverse health effects resulting from human exposure to hazardous agents or situations proceeds by the following steps: i) hazard identification; ii) dose-response (quantitative) assessment; iii) exposure assessment; iv) risk characterization [4]. the overall objectives of risk assessment are to balance risks and benefits, to set target levels, to set priorities for program activities at regulatory agencies, industrial or commercial facilities, or environmental and consumer organizations, and to estimate residual risks and the extent of risk reduction [5]. diversity of risk assessment methodology helps ensure that all possible risk models and outcomes have been considered and minimizes the potential for error [4]. this section will provide information on the concepts used in estimating risk and hazard due to exposure to ground water and surface water contaminated by the recycling of hazardous waste and/or hazardous waste materials, for each of the aforementioned steps. the first step in the risk assessment process is an evaluation of all human and animal data to determine what health effects occur after exposure to a chemical. well-conducted human studies are preferred, but occupational or accidental exposures to chemicals also provide useful information. however, in most cases, the results from animal studies are used as models to predict effects in humans, since animal studies allow for controlled dose-response investigations and detailed, thorough toxicological analysis.
some toxicants produce health effects immediately following exposure, such as air pollutants that can produce eye irritation in individuals after a few minutes of exposure. other effects, such as organ damage due to metals and solvents, may not become manifest for months or years after first exposure. the time from the first exposure to the observation of a health effect is called the latent period. the length of this period depends on various factors such as the type of pathology induced by the compound/chemical of potential concern (copc), dose, and dose rate, as well as host characteristics such as age at first exposure, gender, race, species, and strain. other host factors that influence susceptibility to environmental exposures include genetic traits; preexisting diseases; behavioral traits such as smoking; coexisting exposures; and medication and vitamin supplementation [5]. genetic studies include investigations of the effects of chemicals on the genes and chromosomes (genetic toxicology), and ecogenetics, a relatively new field, describes a host's genetic variation in predisposition and resistance to copc exposure [5]. ecogenetics involves studies of specific exposures ranging from pharmaceuticals (known as pharmacogenetics) to pesticides, inhaled pollutants, foods, food additives, and allergic and sensitizing agents [6]. moreover, induction of a health effect at the molecular level may occur after a single exposure, after repeated exposures, or after long-term continuous exposure. the length of the induction period may be a function of the same variables as the latent period. effective exposure time refers to the exposure time that occurred up to the point of induction [4]. ineffective exposure is readily observed in dose-response curves as a saturation of response in the high-dose range. an experimental study must follow the subjects beyond the length of the minimum latent period to observe all effects and cases associated with exposure.
under ideal circumstances, a study will follow subjects for their lifetime. lifetime follow-up is common for animal studies but uncommon for epidemiology studies [4]. qualitative assessment of hazard information should include a consideration of the consistency and concordance of the findings. such assessments should include a determination of the consistency of the toxicological findings across species and target organs, an evaluation of consistency across duplicate experimental conditions, and the adequacy of the experiments to detect the adverse endpoints of interest [5]. for consideration of whether a copc is a carcinogen, qualitative assessment of animal or human evidence is done by many agencies, including the usepa and the international agency for research on cancer (iarc). similar evidence classifications are used for both animal and human evidence categories by both agencies. these evidence classifications are used for overall weight-of-evidence (woe) carcinogenicity classification schemes; the alphanumeric classification levels recommended by usepa [7] are shown in table 2 (the lowest level, evidence of noncarcinogenicity for humans, requires no evidence of carcinogenicity in adequate studies in at least two species or in both epidemiological and animal studies). usepa's woe carcinogenicity classification schemes were first recommended in the guidelines for carcinogen risk assessment (usepa, 1986, hereafter "1986 cancer guidelines") [7]. however, the guidelines for carcinogen risk assessment, review draft (usepa, 1999, hereafter "1999 draft cancer guidelines") [8] recommend a woe narrative describing a summary of the key evidence for carcinogenicity. the 1999 draft cancer guidelines will serve as interim guidance until usepa issues final cancer guidelines [43]. for evaluating chemical mixtures of noncarcinogens, mumtaz and durkin [9] suggest that the interaction data (i.e., independent joint action, similar joint action, and synergistic action) and the qualitative and quantitative interaction matrix be taken into consideration when determining the hazard index. a qualitative woe scheme for evaluating chemical mixtures is shown in table 3 (weight-of-evidence classification scheme for qualitative assessment of chemical mixtures, from mumtaz and durkin [9]): mechanistic understanding is ranked i, ii, or iii (i. direct and unambiguous mechanistic data; ii. mechanistic data on related compounds; iii. inadequate or ambiguous mechanistic data); toxicologic significance is ranked a, b, or c (a. direct evidence of toxicologic significance of interaction; b. probable evidence of toxicologic significance based on related compounds; c. unclear evidence of toxicologic significance); exposure modifiers are ranked 1 or 2 (1. anticipated exposure duration and sequence; 2. different exposure duration or sequence, with 2.a. in vivo data or 2.b. in vitro data, and 2.b.i. anticipated route of exposure or 2.b.ii. different route of exposure); and the mixture interaction is classified as additive (=), greater than additive (>), or less than additive (<). the woe takes into consideration the copc, data, reference doses/concentrations, and hazard index based on additivity [10]. figure 1 illustrates each chemical mixture's woe determination by a symbol indicating the direction of the interaction followed by the alphanumeric expression in table 3. the first two components are the major factors for ranking the quality of the mechanistic data to support the risk assessment. because toxicity studies must be evaluated to determine the quantitative dose-response relationship between the magnitude of exposure and the extent and severity of the adverse effect, a brief description of various toxicity tests will be provided. different methodologies are used to characterize dose-response relationships, depending on whether or not the chemical has been identified as a carcinogen or noncarcinogen.
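as a concrete illustration of the additivity assumption behind the hazard index for noncarcinogenic mixtures, the sketch below sums hazard quotients (intake divided by rfd) across a mixture; the chemical intakes and rfd values are hypothetical illustrations, not regulatory numbers:

```python
# sketch: hazard index (hi) for a noncarcinogenic chemical mixture,
# assuming dose additivity: hi = sum over chemicals of intake_i / rfd_i.
# all numeric values below are hypothetical.

def hazard_quotient(intake_mg_kg_day, rfd_mg_kg_day):
    """hazard quotient for a single chemical: intake / rfd."""
    return intake_mg_kg_day / rfd_mg_kg_day

def hazard_index(exposures):
    """sum hazard quotients across a mixture (additivity assumption)."""
    return sum(hazard_quotient(intake, rfd) for intake, rfd in exposures)

# hypothetical mixture: (chronic daily intake, rfd) pairs in mg/kg/day
mixture = [(0.002, 0.01), (0.0005, 0.005), (0.03, 0.3)]
print(hazard_index(mixture))  # 0.2 + 0.1 + 0.1 = 0.4
```

a hazard index above 1 flags the mixture for closer review; below 1, the combined noncarcinogenic exposure is within reference levels under the additivity assumption.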
carcinogens are assumed to pose some risk at any exposure level [4]. four classes of toxicant-induced health effects include: i) cancer: genotoxic and nongenotoxic mechanisms; ii) hereditary effects: genotoxic mechanisms; iii) developmental effects: genotoxic or nongenotoxic mechanisms; iv) organ/tissue effects: nongenotoxic mechanisms [4]. the evaluation of chemicals for acute toxicity is necessary for the protection of public health and the environment. acute toxicity testing is generally performed by the probable route of exposure in order to provide information on health hazards likely to arise from short-term exposure by that route (table 4) [11]. as shown in table 4, there are four categories ranging from i to iv based on increasing doses. generally, acute studies evaluate oral, dermal, inhalation, and eye and skin irritation as well as dermal sensitization. the acute inhalation studies are performed from one to seven days, while the intermediate studies are performed from seven days to several months [12]. an evaluation of acute toxicity data includes the relationship of the exposure to the copc and the incidence and severity of all abnormalities, gross lesions, body weight changes, effects on mortality, and any other toxic effects. an acute exposure is considered to be a one-time or short-term exposure with a duration of less than or equal to 24 h. acute toxicity testing is conducted for up to 7 days of exposure and subacute testing for 7-30 days. testing periods for the evaluation of developmental effects are less than 15 days, since developmental toxicity can occur after short periods of exposure. subchronic testing is typically conducted for 90 days to 1 year, since subchronic exposures are considered to be multiple or continuous exposures occurring for approximately 10% of an experimental species' lifetime.
chronic exposures are assumed to be multiple exposures occurring over an extended period of time, or a significant fraction of the animal's or the individual's lifetime. to minimize the number of animals used and to take full account of their welfare, usepa recommends the use of data from structurally related substances or mixtures [ 11] . review of existing toxicity information on chemical substances that are structurally related to the copc may provide enough information to make preliminary hazard evaluations that may reduce the need for testing. for example, if a chemical can be predicted to have corrosive potential based on structure-activity relationships (sars), dermal or eye irritation testing does not need to be performed in order to classify it as a corrosive agent. all the human carcinogens that have been identified have produced positive results in at least one animal model. in the absence of adequate human data, it is plausible to regard agents and/or mixtures for which sufficient evidence of carcinogenicity in animals exists to be a possible carcinogenic risk to humans [5] . therefore, chemicals that cause tumors in animals are presumed to cause tumors in humans. in general, the most appropriate rodent bioassays are those that test the exposure pathways most relevant to human exposure pathways, i.e., inhalation, oral, dermal, etc. because it is feasible to combine bioassays together, it is desirable to tie these bioassays with mechanistic studies, biomarker studies, and genetic studies to understand the mechanism(s) of toxicity and/or carcinogenicity [13] . a typical experimental design includes two different species, both genders, at least 50 subjects per experimental group using near lifetime exposures. for dose-response purposes, a minimum of three dose levels should be used. 
the highest dose, typically the maximum tolerated dose (mtd), is based on the findings from a 90-day study to ensure that the test dose is adequate for the assessment of chronic toxicity and carcinogenic potential. the lowest dose level should produce no evidence of toxicity. in the oral studies, the animals are dosed with the copc on a 7-day per week basis for a period of at least 18 months for mice and hamsters and 24 months for rats [14]. for dermal studies, animals are treated with the copc for at least 6 h per day on a 7-day per week basis. a minimum of 24 h should be allowed for the skin to recover before the next dosing. the copc is applied uniformly over a shaved area that is approximately 10% of the total body surface area [14]. the animals are evaluated for an increase in the number of tumors, size of tumors, and number of rare tumors seen and/or expressed. even without toxicity, a high dose may trigger events different from those triggered by low-dose exposures. also, these bioassays can be evaluated for uncontrolled effects by comparing weight vs time and mortality vs time curves [4]. if there is a divergence between the control group and the experimental group in the weight vs time curve, this indicates that there is a disruption of normal homeostasis due to high-level dosing. if there is a divergence in the mortality vs time curves, this indicates that there is an uncontrollable effect [4]. the national toxicology program (ntp) criterion for classifying a chemical as a carcinogen is that it must be tumorigenic in at least one site in one sex of f344 rats or b6c3f1 mice. validation and application of short-term tests (stt) are important in risk assessment because these assays can be designed to provide information about mechanisms of effects.
short-term toxicity experiments include in vitro or short-term in vivo tests ranging from bacterial mutation assays to more elaborate in vivo short-term tests such as skin-painting studies in mice and altered rat liver foci assays. these studies determine whether copcs are mutagenic, indicating they have the potential to be carcinogens as well. in general, stt are fast and inexpensive compared with the lifetime rodent cancer bioassays [5]. positive results of stt have been used to predict potential carcinogenicity. common stt include the following: the ames salmonella/microsome mutagenesis assay (sal); assays for chromosome aberration (abs); sister chromatid exchange induction (sce) in chinese hamster ovary cells; and the mouse lymphoma l5178y cell mutagenesis assay (moly). there are several limitations to stt: stt cannot replace long-term rodent studies for the identification of carcinogens; the available tests do not detect all classes of copcs that are active in the carcinogenic process, such as hormones; and negative results from stt cannot rule out carcinogenicity [4]. the most convincing evidence for human risk is a well-conducted epidemiological study in which an association between exposure to a copc and a disease has been observed. these studies compare copc-exposed individuals vs non-copc-exposed individuals [5]. the major types of epidemiology studies are cross-sectional studies, cohort studies, and case-control studies. cross-sectional studies survey groups of humans to identify risk factors and disease. these studies are not very useful for establishing a cause-and-effect relationship. cohort studies evaluate individuals on the basis of their exposure to the copc under investigation. these individuals are monitored for development of disease. prospective studies monitor individuals who initially are disease-free to determine if they develop the disease over time.
in case-control studies, subjects are selected on the basis of disease status and are matched accordingly. the exposure histories of the two groups are compared to determine key consistent features. thus, all case-control studies are retrospective studies [5]. epidemiological findings are evaluated by the strength of association, consistency of observations, specificity, appropriateness of the temporal relationship, dose responsiveness, biological plausibility and coherence, verification, and biological analogy [5]. a disadvantage of epidemiological studies is that an accurate measure of the concentration or dose that the copc-exposed individuals receive is not available, so estimates must be employed to quantify the relationship between exposure and adverse effects. moreover, the control group is a major determinant of whether or not a statistically significant adverse effect can be detected. the various types of control groups are: the regional general population; the general population of a state; the local general population; and workers in the same or a similar industry who are exposed to lower or zero levels of the toxicant under study [4]. dose-response assessment is the fundamental basis of the quantitative relationship between exposure to an agent and the incidence of an adverse response. the procedures used to define the dose-response relationship for carcinogens and noncarcinogens differ. for carcinogens, a non-threshold (zero threshold) dose-response relationship is used when there are known or assumed risks of an adverse response at any dose above zero. non-threshold toxicants include hereditary disease toxicants, genotoxic carcinogens, and genotoxic developmental toxicants. for noncarcinogens, a threshold (nonzero threshold) is used to evaluate toxicants that are known or assumed to produce no adverse effects below a certain dose or dose rate. threshold toxicants include nongenotoxic carcinogens, nongenotoxic developmental toxicants, and organ/tissue toxicants [4].
the two different approaches will be discussed separately in this section. the toxicity factors used to evaluate oral exposure and inhalation exposure are expressed in different units to account for the unique differences between these two routes of exposure. cancer slope factors (csfs), in units of (mg/kg/day)^-1, and reference doses (rfds), in units of mg/kg/day, are used to quantify the relationship between dose and effect for oral exposure, whereas unit risk factors (urfs), in units of (µg/m3)^-1, and reference concentrations (rfcs), in units of mg/m3, are used to describe the relationship between ambient air concentration and effect for inhalation exposure. the urf and rfc methodology accounts for the species-specific relationships of exposure concentration to deposited/delivered doses to the respiratory tract by employing animal-to-human dosimetric adjustments that are different from those employed for oral exposure. the interaction with the respiratory tract and ultimate disposition are considered, as well as the physicochemical characteristics of the inhaled agent and whether the exposure is to particles or gases. most important is the type of toxicity observed, since direct effects on the respiratory tract (i.e., portal-of-entry effects) must be considered as opposed to toxicity remote to the portal of entry [15]. based on the differences between oral and inhalation exposure, route-to-route extrapolation of oral toxicity values to inhalation toxicity values may not be appropriate. please refer to appendix b of the soil screening guidance [16] for a discussion of issues relating to route-to-route extrapolation. carcinogenic assessment assumes that exposure to any amount of a carcinogenic substance increases carcinogenic risk. thus, zero risk does not exist (a non-threshold response) because there is no carcinogen exposure concentration low enough that it will not increase the risk of cancer.
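the unit conventions above can be sketched in a few lines: oral risk is the chronic daily intake (mg/kg/day) times the csf ((mg/kg/day)^-1), and inhalation risk is the ambient concentration (µg/m3) times the urf ((µg/m3)^-1), both under the linear non-threshold assumption; the numeric values below are hypothetical placeholders, not values for any real chemical:

```python
# sketch: excess lifetime cancer risk for oral and inhalation exposure,
# assuming low-dose linearity. values are hypothetical illustrations.

def oral_cancer_risk(cdi_mg_kg_day, csf_per_mg_kg_day):
    """excess lifetime cancer risk = cdi x csf (dimensionless)."""
    return cdi_mg_kg_day * csf_per_mg_kg_day

def inhalation_cancer_risk(conc_ug_m3, urf_per_ug_m3):
    """excess lifetime cancer risk = air concentration x urf (dimensionless)."""
    return conc_ug_m3 * urf_per_ug_m3

# hypothetical: cdi of 1e-4 mg/kg/day with a csf of 0.05 (mg/kg/day)^-1,
# and 2 ug/m3 in air with a urf of 1e-6 (ug/m3)^-1
print(oral_cancer_risk(1e-4, 0.05))       # approximately 5e-06
print(inhalation_cancer_risk(2.0, 1e-6))  # approximately 2e-06
```

note that the two routes are kept in their native units; as the text cautions, converting an oral csf into an inhalation urf (or vice versa) is a route-to-route extrapolation that may not be appropriate.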
a genotoxic carcinogen alters the information coded in dna; thus, it is reasonable to assume that these agents do not have a threshold, so that a risk of cancer exists no matter how low the dose. there are three stages of genotoxic carcinogenesis: initiation, promotion, and progression. initiation refers to the induction of an irreversible change in dna caused by a mutagen. the initiator may be a direct-activating carcinogen or a carcinogenic metabolite. promotion refers to the possibly reversible replication of initiated cells to form a "benign" lesion. promoters are not genotoxic or carcinogenic, but they enhance the tumorigenic response initiated by a primary or secondary carcinogen when administered at a later time. complete carcinogens have both initiation and promotion properties [4]. nongenotoxic carcinogenesis does not involve direct interaction of a carcinogen with dna. mechanisms of nongenotoxic carcinogenesis include accelerated replication that may increase the frequency of spontaneous mutations or increase the susceptibility to dna damage. cancer may be secondary to organ toxicity and may occur only at high dose rates. moreover, many nongenotoxic cancer mechanisms are species-specific, and the results from certain rodent species may not apply to humans [4]. several approaches and models are used to provide estimates of the upper limit on lifetime cancer risks per unit of dose or per unit of ambient air concentration, i.e., the csf or the urf, respectively. the upper-bound excess cancer risk estimates may be calculated using models such as the one-hit, weibull, logit, log-probit, or multistage models [5, 17]. the linearized multistage model is considered to be one of the more conservative models and is typically used because the mechanism of cancer is not well understood and one model may not be more predictive than another [7, 17].
because the risk assessor generally needs to extrapolate beyond the region of the dose-response curve for which experimentally observed data are available, models derived from mechanistic assumptions use a mathematical equation to describe dose-response relationships that are consistent with biological mechanisms of response [5]. "hit models" for cancer modeling assume that i) an infinite number of targets exist, ii) after a minimum number of targets has been modified, the host will elicit a toxic response, iii) a critical target is altered if a sufficient number of hits occurs, and iv) the probability of a hit in the low-dose range is proportional to the dose of the copc [18]. the one-hit linear model is the simplest mechanistic model, in which only one hit or critical cellular interaction is required for cell function to be altered. multi-hit models describe hypothesized single-target multi-hit events as well as multi-target events in carcinogenesis. biologically based dose-response (bbdr) modeling reflects specific biological processes [5]. because a large number of subjects would be required to detect small responses at very low doses, several theoretical mathematical extrapolation models have been proposed for relating dose and response in the subexperimental dose range: tolerance distribution models, mechanistic models, and enhanced models. these mathematical models generally extrapolate low-dose carcinogenic risks to humans based on effects observed at the high doses in experimental animal studies. the linear interpolation model interpolates between the response observed at the lowest experimental dose and the origin. linear interpolation is recommended due to its conservatism, simplicity, and reliability, because it is unlikely to underestimate the true low-dose risk [4]. there is no universally agreed upon method for estimating an equivalent human dose from an animal study.
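as a sketch of the relationship between the one-hit model and low-dose linear extrapolation described above, the snippet below shows that p(d) = 1 - exp(-q1·d) is approximately q1·d at low doses (q1 here is a hypothetical slope parameter, not a value for any real chemical):

```python
import math

def one_hit_risk(dose, q1):
    """one-hit model: p(d) = 1 - exp(-q1 * d)."""
    return 1.0 - math.exp(-q1 * dose)

def linear_low_dose_risk(dose, q1):
    """low-dose linear approximation: p(d) ~ q1 * d."""
    return q1 * dose

# at environmentally relevant (low) doses the two agree closely, which is
# why a linearized, conservative form is used for low-dose extrapolation;
# dose and slope below are hypothetical
d, q1 = 1e-3, 0.02
print(one_hit_risk(d, q1), linear_low_dose_risk(d, q1))
```

the linear form never underestimates the one-hit risk (since 1 - exp(-x) <= x), which mirrors the text's point that linear interpolation is conservative.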
however, several methods are currently being used to obtain an estimate of the equivalent human dose. the first method calculates an equivalent human dose from an animal study by scaling the animal dose rate for animal body weight. to derive an equivalent human dose from animal data, the 1999 draft cancer guidelines recommend adjusting the daily applied oral doses experienced over a lifetime in proportion to bw^(3/4) [8]. for noncarcinogens, an uncertainty factor is employed to estimate the equivalent human dose from an animal study if pharmacokinetic data are not available. noncarcinogenic dose-response assessment utilizes a point-of-effects method, which selects the highest dosage level tested in humans or animals at which no adverse effects were demonstrated and applies uncertainty factors or margins of safety to this dosage level to determine the level of exposure where no health effects will be observed, even for sensitive members of the population. also, benchmark dose modeling may be conducted if the experimental data are adequate. animal bioassay data are generally used for dose-response assessment; however, the risk assessor is normally interested in low environmental exposures of humans, which are generally below the experimentally observable range of responses seen in the animal assays. thus, low-dose extrapolation and animal-to-human risk extrapolation methods are required and constitute major aspects of dose-response assessment.
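the bw^(3/4) scaling recommendation can be sketched as follows: if total daily dose scales with bw^(3/4), then a dose expressed per kg of body weight scales by the factor (bw_animal / bw_human)^(1/4); the body weights and dose below are hypothetical illustrations:

```python
def human_equivalent_dose(animal_dose_mg_kg_day, bw_animal_kg, bw_human_kg=70.0):
    """scale an oral animal dose (mg/kg/day) to a human equivalent dose
    in proportion to bw^(3/4); expressed per kg of body weight this is a
    (bw_animal / bw_human)^(1/4) adjustment. values here are illustrative."""
    return animal_dose_mg_kg_day * (bw_animal_kg / bw_human_kg) ** 0.25

# hypothetical example: 10 mg/kg/day in a 0.35 kg rat, scaled to a 70 kg human
print(human_equivalent_dose(10.0, 0.35))  # about 2.66 mg/kg/day
```

because small animals clear most chemicals faster per kg than humans, the scaled human dose is lower than the animal dose, as the example shows.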
human and animal dose rates are frequently reported in terms of the following abbreviations, which are defined below:

loel: lowest observed effect level in mg/kg·day, which produces a statistically or biologically significant effect
loael: lowest observed adverse effect level in mg/kg·day, which produces a statistically or biologically significant adverse effect
noel: no observed effect level in mg/kg·day, which does not produce a statistically or biologically significant effect
noael: no observed adverse effect level in mg/kg·day, which does not produce a statistically or biologically significant adverse effect

a key factor in determining which noael or loael to use in calculating a reference dose (rfd) is exposure duration. as mentioned previously, acute animal studies are typically conducted for up to 7 days, subacute studies for 7 to 30 days, and subchronic studies for 90 days to 1 year. chronic studies are conducted for a significant portion of the lifetime of the animal. animals may experience health effects during short-term exposure which may differ from effects observed after long-term exposure, so short-term animal studies of less than 90 days should not be used to develop chronic rfds except for the development of interim rfds or developmental rfds. exceptionally high quality >90-day oral exposure studies may be used as a basis for developing an rfd, whereas the inhalation route is preferred for deriving an rfc [15]. please note that the same approaches used to develop the rfd are used to develop the rfc, the only differences being the route of exposure, the animal-to-human dosimetric adjustments, and the units (i.e., mg/m3 for the rfc vs mg/kg/day for the rfd). the highest dose level that does not produce a significantly elevated increase in an adverse response is the noael. the noael from the critical study, i.e., the study showing the health effect that occurs at the lowest dose, should be used for criteria development.
however, if a noael is not available, then the loael can be used if a loael-to-noael uncertainty factor (uf) is applied. significance generally refers to both biological and statistical criteria and is dependent on the number of dose levels tested, the number of animals tested at each dose, and the background incidence of the adverse response in the control groups [5]. noaels can be used as a basis for risk assessment calculations such as rfds and acceptable daily intake (adi) values. adi and rfd values should be viewed as conservative estimates of levels below which adverse effects would not be expected; exposures at doses greater than the adi or rfd are associated with an increased probability (but not certainty) of adverse effects [19]. who uses adi values for pesticides and food additives to define "the daily intake of a chemical, which during an entire lifetime appears to be without appreciable risk on the basis of all known facts at that time" [5]. in order to remove the value judgments implied by the words "acceptable" and "safety", the adi and safety factor (sf) terms have been replaced with the terms rfd and uf/modifying factor (mf), respectively. usepa publishes rfds and rfcs in either iris or the usepa's health effects assessment summary tables (heast). rfd and adi values (eqs. 1 and 2, respectively) are typically calculated from noael values divided by the uf and/or mf:

rfd = noael / (uf x mf) (1)
adi = noael / sf (2)

the uncertainty factor (uf) may range from 1 to 10,000, depending on the nature and quality of the data, and is determined by multiplying different ufs together to account for five areas of scientific uncertainty [20]. the uf is primarily used to account for a potential difference between the animal's and the human's sensitivity to a particular compound. the ufh and ufa account for possible intra- and interspecies differences, respectively.
as mentioned previously, a ufs is used to extrapolate from a subchronic-duration study to a situation more relevant to chronic exposure, and a ufl is used to extrapolate from a loael to a noael. a ufd is used to account for inadequate numbers of animals, incomplete databases, or other experimental limitations. a modifying factor (mf) can be used to account for additional scientific uncertainties. in general, the magnitude of each individual uf is assigned a value of one, three, or ten, depending on the quality of the studies used in developing the rfd or rfc. this uf is reduced whenever there is experimental evidence of concordance between animal and human pharmacokinetics and when the mechanism of toxicity has been established. recently, benchmark dose modeling has been recommended by usepa instead of the noael approach. criticism of the noael approach exists because of its limitations, which include the following: i) the noael must be one of the experimental doses tested; ii) once the dose is identified, the remaining doses are irrelevant; iii) larger noaels may occur in experiments with few animals, thereby resulting in larger rfds; iv) the noael approach does not identify the actual response at the noael, which will vary based on experimental design. these limitations of the noael approach resulted in the benchmark dose (bmd) method [21]. the dose-response is modeled, and the lower confidence bound for a dose (bmdl) at a specified response level, the benchmark response (bmr), is calculated [5]. the bmdlx (with x representing the x percent bmr) is used as an alternative to the noael value for the rfd calculation. thus, the calculation of the rfd is shown in eq. (3):

rfd = bmdlx / (uf x mf) (3)

advantages of the bmd approach include: i) the ability to account for the full dose-response curve; ii) the inclusion of a measure of variability; iii) the use of responses within the experimental range; iv) the use of a consistent benchmark response level for rfd calculations across studies [5].
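the reference-dose arithmetic described above can be sketched in a few lines, with the composite uncertainty factor built by multiplying the individual ufs (each typically 1, 3, or 10); the noael and uf choices below are hypothetical illustrations, not values for any real chemical:

```python
# sketch: rfd from a noael (or bmdl) point of departure divided by the
# product of uncertainty factors and a modifying factor. all numbers
# below are hypothetical.

def composite_uf(*factors):
    """multiply individual uncertainty factors (e.g., ufa, ufh, ufs, ufl, ufd)."""
    total = 1
    for f in factors:
        total *= f
    return total

def reference_dose(point_of_departure_mg_kg_day, uf, mf=1.0):
    """rfd (mg/kg/day) = point of departure / (uf x mf); the point of
    departure may be a noael or a bmdl."""
    return point_of_departure_mg_kg_day / (uf * mf)

# hypothetical: noael of 15 mg/kg/day with ufa = 10, ufh = 10, ufd = 3
uf = composite_uf(10, 10, 3)
print(reference_dose(15.0, uf))  # 15 / 300 = 0.05 mg/kg/day
```

the same function applies whether the point of departure is a noael (eq. 1) or a bmdl (eq. 3); only the input changes.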
there are numerous informational databases and resources that provide risk assessors essential information. usepa publishes rfds, rfcs, csfs, and urfs in the integrated risk information system (iris) or in the health effects assessment summary tables (heast). the information in iris, followed by heast, should be used preferentially before all other sources. a recent review of other available resources was published in a special volume of toxicology, vol. 157, 2001. articles by poore et al. [22] and brinkhuis [23] provide a thorough review of u.s. government databases such as usepa's iris at http://www.epa.gov/iriswebp/iris/, the national center for environmental assessment (ncea), atsdr's chemical-specific toxicology profiles and acute, subchronic, and chronic minimal risk levels (mrls), and hazdat at http://www.atsdr.cdc.gov/hazdat.html, among many other databases. the reviewers provide advice on effective search strategies as well as strategies for finding the appropriate toxicology information resources. exposure occurs when a human contacts a chemical or physical agent. exposure assessment examines a wide range of exposure parameters pertaining to the environmental scenarios of people who may be exposed to the agent under study. the information considered for the exposure assessment includes monitoring studies of chemical concentrations in environmental media and/or food; modeling of environmental fate and transport of contaminants; and information on different activity patterns of different population subgroups. the principal pathways by which exposure occurs, the pattern of exposure, the determination of copc intake by each pathway, as well as the number of persons exposed and whether there are sensitive subpopulations that need to be evaluated are also included in the evaluation.
In this step, the assessor characterizes the exposure setting with respect to the general physical characteristics of the site, the site COPCs, and the characteristics of the populations on or near the site. Hazard identification/evaluation consists of sampling and analysis of soil, ground water, surface water, air, and other environmental media at contaminated sites. A common method used in screening substances at a site is comparison with background levels in soil or ground/surface water [19], determining whether a chemical is detected, whether the detection limit for that chemical is less than reference concentrations, and the frequency of detection [24]. Once a list of COPCs has been identified at the site, available chemical characteristics such as structure, solubility, stability, pH sensitivity, electrophilicity, and chemical reactivity, along with toxicity data, are collected and evaluated to ascertain the nature of the health effects associated with exposure to these chemicals. In many cases, toxicity information on chemicals is limited. Knowing a COPC's characteristics can provide important information for hazard identification [5]. Also, SARs (structure-activity relationships) are useful in assessing the relative toxicity of chemically related compounds. During this phase of the exposure assessment, the major pathways by which the previously identified populations may be exposed are identified. Therefore, the locations of contaminated media, sources of release, fate and transport of COPCs, pathways and exposure points, routes of exposure (i.e., ingestion of drinking water, dermal contact when showering), and the location and activities of the potentially exposed population are explored. For example, the common on-site pathways evaluated when conducting a RCRA remediation baseline risk assessment, where unauthorized chemical releases have occurred, include direct contact with soil, either by ingestion of soil and/or inhalation of volatile chemicals or contaminated dust [19].
The migration of chemicals off-site can occur via wind-blown dust and vapor emissions from soil, leaching of chemicals to ground water with subsequent movement off-site, and surface water run-off. These off-site chemicals can eventually accumulate in other transport media, such that the COPCs end up in vegetation crops, meat, milk, and fish that will eventually be consumed by humans. Therefore, pathways, sources of release, locations of contaminated media, fate and transport of COPCs, and the location and activities of the potentially exposed population are explored. Exposure points and routes of exposure (ingestion, inhalation) are identified for each exposure pathway. It is necessary to identify populations likely to receive especially high exposure and populations likely to be unusually sensitive to the chemical's effects. An example of possible points of exposure and exposure routes due to exposure to ground water or surface water (i.e., the source medium) used for drinking water is shown in Table 5 (e.g., volatilization from water to air, with inhalation in an enclosed space). Please note that not all of these exposure pathways are typically evaluated when performing a risk assessment on contaminated drinking water, since the techniques and exposure parameters for evaluating some of these routes of exposure are not well developed. Additional pathways to consider for surface water may include recreational exposures (i.e., swimming, boating), ingestion of contaminated fish, shellfish, etc., and dermal exposure to contaminated sediment. Finally, an attempt should be made to develop a number of exposure scenarios. Exposure scenarios are combinations of "exposure pathways" to which a single "receptor" may be subjected [25]. For example, a residential adult or child receptor may be exposed via all the exposure routes in Table 5 (i.e., drinking water, showering/bathing, washing/cooking food, and volatilization from ground water or drinking water into an enclosed space).
An industrial receptor may only be exposed through the drinking water pathway and volatilization from ground water into an enclosed space, and not through showering/bathing or washing/cooking, because these activities do not occur at an industrial site. Exposure scenarios are generally conservative and are not intended to be entirely representative of actual scenarios at all sites. The scenarios allow for standardized and reproducible evaluation of risks across most sites and land-use areas [25]. Conservatism allows for protection of potential receptors not directly evaluated, such as special subpopulations and regionally specific land uses. The magnitude, frequency, and duration of exposure for each pathway are evaluated next. For each potential exposure pathway, the chemical dose received by each exposure route needs to be calculated. Because chemical concentrations can vary, many different studies might be required to get a complete picture of a chemical's distribution patterns within the environment. Off-site sampling and analysis are the preferred methods to determine the exposure concentrations in the environmental media at the point of exposure. Because sampling data form the foundation of a risk assessment, it is important that site investigation activities are designed and implemented with the overall goals of the risk assessment in mind [19]. For example, it is essential that appropriate analytical methods with proper quality assurance/quality control documentation be employed, and that the analytical methods are sensitive enough to detect the COPCs at concentrations below health-protective reference concentrations. After the sampling data are collected and evaluated, statistical techniques may be used to calculate the representative concentrations of COPCs that will be contacted over the exposure area.
Different statistical techniques may be required for the determination of representative concentrations in ground water vs. surface water [24]. Fate and transport models can be used to estimate current concentrations in media and/or at locations for which sampling was not conducted. In addition, an increase in future chemical concentrations in media that are currently contaminated, or that may become contaminated, can be predicted by fate and transport modeling. Detailed discussions of these models are contained elsewhere in this book. Each scenario described in the exposure assessment should be accompanied by an estimated exposure dose for each pathway. Once the exposure pathway is determined, the estimated risks and hazards from each exposure pathway can be characterized. Exposure estimates for the oral pathway are expressed in terms of the mass of substance in contact with the body per unit body weight per unit time (i.e., intakes), whereas exposure estimates for inhalation pathways are expressed as mass of substance per unit volume (i.e., inhalation concentrations). The general equation for calculating intakes (mg/kg/day) is as follows [24]:

Intake = (C × CR × EF × ED) / (BW × AT)    (4)

where:
Intake = the amount of chemical at the exchange boundary (mg/kg body weight-day)
C = COPC concentration, the average concentration contacted over the exposure period
CR = contact rate, the amount of contaminated medium contacted per unit time or event
EF = exposure frequency (days/year)
ED = exposure duration (years)
BW = body weight, the average body weight over the exposure period (kg)
AT = averaging time, the period over which exposure is averaged (days)

Each exposure pathway has slightly different variations of this basic equation.
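The general intake equation can be expressed directly in code. This is a minimal sketch: the function name and the example parameter values (a resident-adult drinking-water scenario) are illustrative assumptions, not prescribed defaults.

```python
def intake(c, cr, ef, ed, bw, at):
    """Chronic daily intake (mg/kg/day): (C x CR x EF x ED) / (BW x AT)."""
    return (c * cr * ef * ed) / (bw * at)

# Hypothetical drinking-water ingestion scenario:
# C = 0.01 mg/L, CR = 2 L/day, EF = 350 days/yr, ED = 30 yr,
# BW = 70 kg, AT = 30 yr x 365 days (noncancer averaging time = ED).
i = intake(c=0.01, cr=2, ef=350, ed=30, bw=70, at=30 * 365)
```

For a carcinogen, the averaging time would instead span a full lifetime (e.g., 70 yr × 365 days), which lowers the lifetime average daily intake.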
Please refer to Appendix A for examples of the equations used to calculate intakes for the major exposure pathways from ground and surface waters, as well as examples of the exposure parameters employed to calculate intakes: Appendices A-1 and A-2, ingestion of drinking water; Appendices A-3 and A-4, ingestion of contaminated fish tissue; Appendices A-5 and A-6, dermal contact with contaminated water; and Appendix A-7, inhalation of volatiles from contaminated ground water or surface water. Please refer to Kasting and Robinson [26] and Exposure to Contaminants in Drinking Water [27] for additional information on the various issues involved in the assessment of dermal exposure to water. The exposure parameters (e.g., CR, EF, ED, BW, and AT) for each pathway are derived after an extensive literature review and statistical analysis [28]. For example, information on water ingestion rates, body weights, and fish ingestion rates for adults, children, and pregnant women used to develop the national ambient water quality criteria was obtained from the following documents: the Exposure Factors Handbook [28]; the National Health and Nutrition Examination Survey (NHANES III) [29]; and the United States Department of Agriculture (USDA) 1994-1996 Continuing Survey of Food Intakes [30]. Exposure parameters may represent central tendency (average) values or maximum or near-maximum values [24]. Science policy decisions that consider the best available data, and risk management judgments regarding the population to be protected, are both used to choose appropriate exposure parameters. USEPA emphasizes that exposure assessments should strive to achieve an overall dose estimate that represents a "reasonable maximum exposure" (RME). The intent of the RME is to estimate a conservative exposure scenario that is within the range of possible exposures yet well above the average case (above the 90th percentile of the actual distribution).
However, estimates that are beyond the true distribution should be avoided. If near-maximum or maximum values were chosen for every exposure parameter, the combination of all of those maximum values would result in an unrealistic assessment of exposure. Using probabilistic risk assessment, Cullen demonstrated that if only two exposure parameters were chosen at maximum or near-maximum values, and the other parameters were chosen at median values, then the risk and hazard estimates represented an RME (>99th percentile level) [31]. Risk assessors should identify the most sensitive parameters and use maximum or near-maximum values for one or a few of those variables; central tendency or average values should be used for all other parameters [24]. When central tendency and/or maximum values are chosen for the exposure parameters used to calculate intake for an exposure pathway, single point estimates of risk and hazard are calculated (i.e., a deterministic technique). However, probabilistic techniques like Monte Carlo analysis can be employed to provide different percentile estimates of risk and hazard (e.g., 50th percentile or 95th percentile estimates), as well as to characterize variability and uncertainty in the risk assessment. Monte Carlo simulation is a statistical technique in which a quantity is calculated repeatedly, using values randomly selected from the entire frequency distribution of one or more exposure parameters for each calculation. USEPA recommends using computerized Monte Carlo simulations to provide probability distributions for dose and risk estimates by incorporating ranges for individual assumptions rather than a single dose or risk estimate [19]. Obtaining better estimates of the distribution of contaminant levels is a major focus of recent risk assessment research.
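A Monte Carlo intake simulation of this kind can be sketched with only the standard library. The distributions chosen here (triangular contact rate, truncated normal body weight) and all parameter values are illustrative assumptions, not USEPA defaults.

```python
import random

def intake(c, cr, ef, ed, bw, at):
    """General intake equation: (C x CR x EF x ED) / (BW x AT), mg/kg/day."""
    return (c * cr * ef * ed) / (bw * at)

def monte_carlo_intake(n=10_000, seed=0):
    """Repeatedly sample exposure parameters from assumed distributions
    and return the sorted intake draws."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        cr = rng.triangular(1.0, 3.0, 2.0)             # water ingestion, L/day (assumed)
        bw = max(rng.normalvariate(70.0, 10.0), 40.0)  # body weight, kg (assumed, truncated)
        draws.append(intake(c=0.01, cr=cr, ef=350, ed=30, bw=bw, at=30 * 365))
    return sorted(draws)

intakes = monte_carlo_intake()
p50 = intakes[len(intakes) // 2]         # central (deterministic-style) estimate
p95 = intakes[int(0.95 * len(intakes))]  # upper-percentile (RME-like) estimate
```

The spread between the 50th and 95th percentile draws is exactly the variability information a single deterministic point estimate cannot convey.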
To obtain such estimates, several techniques, such as generating subjective uncertainty distributions and Monte Carlo composite analyses of parameter uncertainty, have been applied [5]. These approaches can provide a reality check that is useful in generating more realistic exposure estimates [5]. Also, high-end exposure estimates (HEEEs) and theoretical upper-bound estimates (TUBEs) are now recommended for specified populations, as well as calculation of exposure for highly exposed individuals [5]. An HEEE represents an estimate of exposure in the upper ninetieth percentile, while TUBEs represent exposure levels that exceed the exposures experienced by all individuals in the exposure distribution and assume limits for all exposure variables [5]. Please refer to the Policy for Use of Probabilistic Analysis in Risk Assessment at the USEPA and the Guiding Principles for Monte Carlo Analysis at http://www.epa.gov/ncea/mcpolicy.htm [32]. Risk characterization, the last step in the risk assessment process, links the toxicity evaluation (hazard identification and dose-response assessment) to the exposure assessment. Estimates of the upper-bound excess lifetime cancer risk and the noncarcinogenic hazard for each pathway, each COPC, and each receptor identified during the exposure assessment are calculated. Another important component of risk characterization is the clear, transparent communication of risk and hazard estimates, as well as an uncertainty analysis of those estimates, to the risk manager. Cancer risk is usually expressed as an estimated rate of excess cancers in a population exposed to a COPC for a lifetime or a portion of a lifetime [33]. Oral intakes are multiplied by the CSF (Eq. 5), dermal intakes are multiplied by the CSF adjusted for GI absorption (Eq. 6), and lifetime average inhalation concentrations are multiplied by the URF (Eq. 7) to obtain risk estimates.
For evaluating the risk from oral exposure, the intakes from all ingestion pathways can be summed (i.e., ingestion of drinking water, ingestion of fish, etc.), and the total intake is then multiplied by the CSF:

Risk_oral = Intake_oral × CSF    (5)

where:
Intake_oral = the combined amount of the COPC from all oral pathways at the exchange boundary (mg/kg/day) (Appendices A-1 to A-4)
CSF = cancer slope factor (mg/kg/day)^-1

For evaluating dermal exposure, the dermally absorbed dose (DAD) is calculated (Appendices A-5 and A-6) and multiplied by an adjusted CSF, the CSF_dermal. The CSF is typically derived from oral dose-response relationships based on administered dose, whereas the dermal intake estimates are based on absorbed dose. Therefore, if the CSF is based on an administered dose, it should be adjusted for gastrointestinal absorption if gastrointestinal absorption is significantly less than 100% (e.g., <50%) [34]. Thus, if an estimate of the gastrointestinal absorption fraction (ABS_GI) is available for the compound and ABS_GI is less than 50% [34,35], the oral dose-response factor, based on an administered dose, can be converted to an absorbed-dose basis by dividing the CSF by the ABS_GI to form a CSF_dermal:

Risk_dermal = DAD × CSF_dermal    (6)

where:
DAD = dermally absorbed dose (mg/kg/day) (Appendices A-5 and A-6)
CSF_dermal = dermal cancer slope factor (mg/kg/day)^-1; CSF_dermal = CSF / ABS_GI

When ABS_GI values are not available from Bast and Borges [35] for a compound, USEPA Region 4 [36] recommends the following defaults for ABS_GI: 80% for volatile organics; 50% for semi-volatile and nonvolatile organics; and 20% for inorganics.
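The GI-absorption adjustment and its default fractions can be captured in a small helper. This is a sketch: the function name is hypothetical, the `<50%` adjustment rule and the class defaults follow the text above, and all example CSF values are invented.

```python
def adjusted_csf(csf_oral, abs_gi=None, chem_class="inorganic"):
    """Adjust an oral CSF to an absorbed-dose basis (CSF_dermal = CSF / ABS_GI)
    when ABS_GI < 50%; otherwise return the oral CSF unchanged.
    Defaults follow the Region 4 values quoted in the text (assumed mapping)."""
    defaults = {"volatile": 0.80, "semivolatile": 0.50, "inorganic": 0.20}
    frac = abs_gi if abs_gi is not None else defaults[chem_class]
    return csf_oral / frac if frac < 0.50 else csf_oral

# Hypothetical inorganic with oral CSF 1.5 (mg/kg/day)^-1 -> default ABS_GI 20%:
csf_d = adjusted_csf(1.5, chem_class="inorganic")  # 1.5 / 0.20 = 7.5
# Compound with measured ABS_GI of 90% -> no adjustment:
csf_same = adjusted_csf(2.0, abs_gi=0.90)          # 2.0
```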
For evaluation of inhalation exposure, the lifetime average inhalation concentration is multiplied by the URF:

Risk_inh = C_inh × URF    (7)

where:
C_inh = concentration of the COPC at the exchange boundary (mg/m³) (Appendix A-7)
URF = unit risk factor (µg/m³)^-1

To obtain a conservative total risk estimate, the risks for an individual COPC from each pathway are summed, and then the risks from all COPCs are summed (Eq. 8):

Risk_total = Σ_i Risk_i    (8)

where:
Risk_total = the sum of all risk estimates for the i-th COPCs from all pathways.

However, USEPA is still developing approaches to deal with the uncertainties associated with combining risk estimates for chemical mixtures across different routes of exposure (i.e., inhalation, oral, and dermal), since differences in the properties of the cells that line the surfaces of the air pathways and the lungs, the gastrointestinal tract, and the skin may result in different intake patterns of chemical mixture components depending on the route of exposure. Another consideration in dealing with chemical mixtures is that the chemicals in a mixture may partition to contact media differently [37]. A risk estimate of 1×10⁻⁶, 1×10⁻⁵, or 1×10⁻⁴ is interpreted to mean that an individual has no more than, and likely less than, a 1 in 1,000,000, 1 in 100,000, or 1 in 10,000 chance, respectively, of developing cancer from the exposure being evaluated. The range of carcinogenic risks considered acceptable by the USEPA is 10⁻⁶ to 10⁻⁴. For chronic exposures to noncarcinogens, the intake of a COPC is compared to the appropriate RfD (i.e., oral RfD or RfD_dermal) or RfC to form the hazard quotient (HQ) [33]. Oral intakes are compared to the RfD (Eq. 9), dermally absorbed doses (DADs) are compared to the RfD_dermal (i.e., the RfD adjusted for GI absorption; refer to the previous section for a discussion of the procedures for adjusting toxicity factors for GI absorption) (Eq. 10), and inhalation intakes are compared to the RfC (Eq.
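Equations (5) through (8) can be combined into one routine. This sketch is illustrative: the function name and all numeric inputs are hypothetical, and it assumes C_inh is given in mg/m³ while the URF is per µg/m³, hence the ×1000 unit conversion.

```python
def copc_cancer_risk(intakes_oral, dads, c_inh, csf, csf_dermal, urf):
    """Cancer risk for one COPC summed across routes (Eqs. 5-7).
    Oral intakes and DADs in mg/kg/day; c_inh in mg/m3; urf per ug/m3."""
    risk_oral = sum(intakes_oral) * csf            # Eq. (5)
    risk_dermal = sum(dads) * csf_dermal           # Eq. (6)
    risk_inh = c_inh * 1000.0 * urf                # Eq. (7), mg/m3 -> ug/m3
    return risk_oral + risk_dermal + risk_inh

# Eq. (8): total risk over two hypothetical COPCs.
risk_total = sum([
    copc_cancer_risk([1e-4], [2e-5], 0.0, csf=0.1, csf_dermal=0.2, urf=0.0),
    copc_cancer_risk([5e-5], [0.0], 1e-4, csf=0.05, csf_dermal=0.05, urf=1e-4),
])
within_acceptable_range = 1e-6 <= risk_total <= 1e-4  # USEPA acceptable risk range
```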
11) to obtain hazard quotients for each route of exposure:

HQ_oral = Intake_oral / RfD    (9)

where:
Intake_oral = the combined amount of the COPC from all oral pathways at the exchange boundary (mg/kg/day) (Appendices A-1 to A-4)
RfD = oral reference dose (mg/kg/day)

and, analogously,

HQ_dermal = DAD / RfD_dermal    (10)
HQ_inh = C_inh / RfC    (11)

The total hazard index (HI) for an individual COPC from all routes of exposure is the sum of the HQs from all applicable pathways (oral, dermal, or inhalation) (Eq. 12):

HI_i = Σ HQ_i    (12)

where:
HI_i = the sum of the hazard quotients from all relevant pathways for the i-th COPC.

In order to be conservative, a total HI can be calculated by summing the HIs from each individual COPC. "If the overall HI value is less than one, public health risk is considered to be very low"; however, "if the HI value is equal to or greater than one, then the exposure assessment and hazard characterization should be investigated more thoroughly." If the HI exceeds one, the hazard estimates may be refined by grouping the COPCs that affect the same target organ or have the same mechanism of action and adding only the HIs from similar-acting COPCs [37,38]. Ideally, chemicals would be grouped according to effect-specific toxicity criteria, information on chemicals exhibiting multiple effects would be available, and their exact mechanisms of action would be known. Instead, RfDs and RfCs are available for just one of the several possible endpoints of toxicity for a chemical, and data are often limited to gross toxicological effects in an organ or an entire organ system. The list of these specific endpoints of toxicity is limited, so it is best to consult a toxicologist during this step of the hazard evaluation [16]. The HQ for each COPC and exposure pathway needs to be calculated to determine the overall hazard. The HI provides a rough estimate of possible toxicity and requires careful interpretation [38]. The HI does not account for the number of individuals who might be affected by exposure or for the severity of the effects.
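The HI summation and the target-organ refinement step can be illustrated as follows; the chemical names, HQ values, and the liver grouping are all hypothetical.

```python
def hazard_index(hq_by_copc):
    """Total HI as the sum of hazard quotients (Eq. 12)."""
    return sum(hq_by_copc.values())

hqs = {"chem_a": 0.4, "chem_b": 0.5, "chem_c": 0.3}  # hypothetical per-COPC HQs
hi_total = hazard_index(hqs)  # 1.2 >= 1 -> investigate more thoroughly

# Refinement: if chem_a and chem_b act on the same target organ (assumed here),
# sum only the similar-acting COPCs.
hi_liver = hazard_index({k: hqs[k] for k in ("chem_a", "chem_b")})  # 0.9 < 1
```

Note how the refinement can bring a grouped HI below one even when the unrefined total exceeds it, which is exactly why the grouping step matters.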
USEPA recommends that an HI of 1.0 be used as the benchmark for noncancer health effects. In the "real world," exposures generally involve complex mixtures of COPCs. There are three basic modes of action for mixtures: i) independent joint action, which describes COPCs that act independently, have different modes of action, and are not expected to affect the toxicity of one another; ii) similar joint action, or "dose addition," which describes a mixture in which the COPCs produce similar but independent effects; iii) synergistic action, in which the toxicity of the mixture cannot be assessed from the individual ingredients but depends on knowledge of their combined toxicity [39] (Table 3 and Fig. 1). The total HI can exceed the target hazard level as a result of the presence of one or more COPCs with an HQ exceeding the target hazard level, or the summation of several COPC-specific HQs that are each less than the target hazard level. It is important to mention that the numbers generated by risk assessors should not be viewed as accurate measures, or even predictors, of rates of adverse health effects in human populations [33]. The calculated estimates are routinely based on assumptions recognized as being conservative; thus, these numbers should be used as tools open to interpretation on a site-by-site basis. It is important for the risk manager to be informed of the uncertainties in the risk assessment process. Significant limitations and uncertainties can exist throughout the entire risk assessment process; thus, it is important that a discussion of uncertainty accompany the risk assessment analysis so that the limitations of the quantitative results are taken into consideration. Both qualitative and quantitative methods have been developed to analyze the uncertainty associated with risk assessment. A quantitative analysis may be conducted using either a sensitivity analysis or a probability analysis.
Listed below are various reasons why uncertainty exists in a risk assessment analysis [4,19]:

- deficient control groups
- differences in smoking habits between an epidemiology study group and a risk group
- differences in, or lack of consideration of, pharmacokinetics and/or mechanism of toxicity between species
- failure to diagnose, or misdiagnosis of, the cause of mortality
- inappropriate experimental study design
- lack of knowledge regarding the combined biological effects of exposure to multiple toxicants
- limitations in data regarding the nature and magnitude of levels in the environment
- low-dose extrapolation from high-dose experimental conditions
- reliance on mathematical models
- toxicant interaction with another agent
- use of animal studies in the determination of risk for humans

It is important to make clear the distinction between risk assessment and risk management. Risk assessors generate risk estimates, but a risk manager considers these risk estimates along with other scientific information and integrates them into societal decisions [40]. For example, risk managers consider data analysis, technical concerns, economic concerns, and social/political concerns, in addition to comparing the risk estimates to an acceptable level set by federal or state health agencies [40]. Generally, trade-offs or compromises are made between the lowest possible risk and society's demand for jobs and economic growth. Examples of questions that may be asked by risk managers are: "Is a particularly deadly type of cancer in a narrow population worse or better than widespread effects of a non-lethal nature? Can this decision be successfully defended in court?" In general, risk management decisions may be based more on political and economic factors than on risk factors [40]. Risk assessment and risk management are an integral part of the contemporary regulatory scene.
Risk management refers to the selection and implementation of the most appropriate regulatory action based on: i) goals; ii) social and political factors; iii) available control technology; iv) costs and benefits; v) the results of the risk assessment; vi) acceptable risk; vii) an acceptable number of cases [4]. Another aspect to consider, in either the risk assessment or the risk management phase, is cumulative risk. Cumulative risk evaluation considers all "involuntary" risks to which a receptor may be exposed from a variety of environmental sources, such as: i) automobile exhaust emissions; ii) leaking underground storage tanks; iii) untreated sewage; iv) agricultural land runoff; v) industrial process air emissions; vi) conventional combustion-related air emissions [41]. However, at the present time, definitive guidance from USEPA regarding the evaluation of cumulative risk is not available. A refined site-specific risk assessment takes into account the specific characteristics of the site, all relevant pathways to which a receptor is exposed, and other site-specific information. This represents a "forward" calculation method, in which risk and hazard estimates are calculated. However, each state or USEPA regional office may utilize slightly different exposure factors, exposure scenarios, target risk and hazard levels, or different procedures to account for childhood exposure, cumulative risk, etc. It is a very time- and resource-intensive process that involves numerous science policy decisions. However, the risk and hazard estimates from a refined site-specific risk assessment typically provide more realistic estimates than a generic screening-level risk assessment. In contrast, media-specific comparison values can be calculated by a "backward" calculation method based on standardized equations, USEPA toxicity values, standard exposure pathways or scenarios, default exposure factors, and conservative risk and hazard levels.
The USEPA Office of Water has derived drinking water standards and health advisories to evaluate levels of contaminants in public drinking water supplies. To evaluate levels of contaminants in surface water, USEPA publishes guidance documents [42] as well as national recommended water quality criteria. State and tribal agencies then develop water quality standards for each water body in the state based on USEPA guidance and the use designation of the individual water body. USEPA must review the proposed state water quality standards before they become legally enforceable standards. If drinking water standards and/or state and tribal water quality standards are available for the COPCs present at a site, these standards generally must be used to evaluate human health impacts on ground water and/or surface water, respectively. Water quality standards apply to surface waters of the United States, including rivers, streams, lakes, oceans, estuaries, and wetlands. Water quality standards consist, at a minimum, of three elements: 1) the "designated beneficial use" or "uses" of a water body or segment of a water body; 2) the water quality "criteria" necessary to protect the uses of that particular water body; 3) an antidegradation policy. Typical designated beneficial uses of water bodies include public water supply, propagation of fish and wildlife, recreation, agricultural water use, industrial water use, and navigation. If information concerning the COPCs is not present in the drinking water and/or state and tribal water standards databases, or additional exposure pathways need to be included during the site assessment, then media-specific comparison values are available from the Soil Screening Guidance [16], several USEPA regional offices, and individual state governments (Table 6). These benchmark values may be used as a tool to perform initial site screenings or as initial cleanup goals, if applicable.
The different media-specific comparison values are generic but can be recalculated using more site-specific information and the guidance provided at the applicable web addresses (Table 6). However, they usually do not consider all potential human health exposure pathways, nor do they consider ecological concerns. Many of the databases listed in Table 6 also provide COPC concentrations in soil, calculated with fate and transport models, that are protective of ground water and surface water. If information concerning the COPCs is not present in these databases, or additional exposure pathways need to be included during the site assessment, then a detailed toxicity evaluation and risk assessment may need to be conducted based on state or other regulatory agency guidelines. USEPA was granted the authority to set drinking water standards by the Safe Drinking Water Act (SDWA) of 1974. The SDWA has since been amended, in 1986 and 1996. The responsibility for implementing drinking water standards is delegated to states and tribes. USEPA is responsible for identifying contaminants to regulate, establishing priorities among the contaminants that are of the greatest concern, and then deriving national primary drinking water regulations. The SDWA is applicable to public water systems that provide water for human consumption through at least 15 service connections or that regularly serve at least 25 individuals. The standards apply to the water delivered to any user of a public water system. The standards are not applicable to private wells, although state and local governments do set rules to protect users of these wells. Owners are urged to test their wells annually for nitrate and coliform bacteria, to test their wells for other compounds if a problem is suspected, and to take precautions to ensure the protection and maintenance of their drinking water supplies.
Even though these drinking water standards do not apply to private wells, many states adopt them as ground water standards or use them to evaluate whether concentrations of contaminants in ground water are above a level of concern. The Office of Water establishes national primary drinking water standards and secondary drinking water regulations, as well as health advisories. The derivation of these standards, regulations, and health advisories is discussed below. The drinking water standards and health advisories tables may be reached from the Office of Science and Technology (OST) home page at http://www.epa.gov/ost; the tables are accessed under the OST Programs heading on the OST home page. National primary drinking water standards are regulations that the USEPA sets to control the level of contaminants in the nation's drinking water. Maximum contaminant level goals (MCLGs) are the maximum levels of a contaminant in drinking water at which no known or anticipated chronic adverse effect on the health of persons would occur, and which allow an adequate margin of safety. MCLGs are non-enforceable public health goals. Maximum contaminant levels (MCLs) are enforceable standards that are set as close to the MCLGs as possible, but take into consideration the availability of treatment technologies and techniques, as well as whether reliable analytical methods capable of detecting low concentrations of the contaminants are available. The derivation of MCLGs and MCLs is discussed in the following sections. For noncarcinogens (not including microbial contaminants), the MCLG is based on the RfD. The definition and derivation of the RfD have been discussed previously.
The RfD is first adjusted for an adult, with body weight assumed to be 70 kg and consumption of 2 L of water per day, to produce the drinking water equivalent level (DWEL):

DWEL (mg/L) = [RfD (mg/kg/day) × 70 kg] / 2 L/day    (13)

The DWEL represents the concentration of a substance in drinking water that is not expected to cause any adverse noncarcinogenic health effects in humans over a lifetime of exposure, assuming the only exposure to the chemical comes from drinking water. However, exposure to the chemical can also occur through other pathways and routes of exposure. Therefore, the MCLG is calculated by reducing the DWEL in proportion to the amount of exposure from drinking water relative to other sources (e.g., food, air). In the absence of actual exposure data, this relative source contribution (RSC) is generally assumed to be 20%. The final value is in mg/L and is generally rounded to one significant figure:

MCLG (mg/L) = DWEL × RSC    (14)

If the chemical is considered to be a Class A or B carcinogen, then it is assumed that there is no dose below which the chemical is considered safe; therefore, the MCLG is set at zero. If a chemical is a Class C carcinogen and scientific data indicate that there is a threshold below which carcinogenesis does not occur, then the MCLG is set at a level above zero that is safe. Prior to 1996, the MCLG for Class C carcinogens was based on an RfD approach that applied an additional uncertainty factor of 10 to account for the possible carcinogenic potential of the chemical. If there were no reported noncancer effects, then the MCLG was based on a nominal lifetime excess cancer risk of 10⁻⁶ to 10⁻⁵, if the data were adequate. The Office of Water is now moving toward the guidance contained in the 1999 draft cancer guidelines [8,43], which allows standards for nonlinear carcinogens to be derived based on low-dose extrapolation and a mode-of-action approach.
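The DWEL-to-MCLG chain of Eqs. (13) and (14) is a two-step calculation; the function names and the example RfD below are illustrative.

```python
def dwel(rfd, bw=70.0, di=2.0):
    """Drinking water equivalent level (mg/L), Eq. (13): RfD x BW / DI,
    using the default 70-kg adult drinking 2 L/day."""
    return rfd * bw / di

def mclg(rfd, rsc=0.20, bw=70.0, di=2.0):
    """Noncarcinogen MCLG (mg/L), Eq. (14): DWEL x relative source contribution
    (default RSC of 20% in the absence of actual exposure data)."""
    return dwel(rfd, bw, di) * rsc

# Hypothetical RfD of 0.005 mg/kg/day:
# DWEL = 0.005 x 70 / 2 = 0.175 mg/L; MCLG = 0.175 x 0.20 = 0.035 mg/L
goal = mclg(0.005)
```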
For microbial contaminants that may present a public health risk, the MCLG is set at zero, because ingesting one protozoan, virus, or bacterium may cause adverse health effects. USEPA is conducting studies to determine whether there is a safe level above zero for some microbial contaminants; so far, however, such a level has not been established. As mentioned previously, maximum contaminant levels (MCLs) are enforceable standards that are set as close to the MCLGs as possible, but take into consideration the availability of treatment technologies and techniques, as well as whether reliable analytical methods capable of detecting low concentrations of the contaminants are available. If there is not a reliable analytical method, then a treatment technique (TT) is set rather than an MCL. A TT is an enforceable procedure or level of technological performance that public water systems must follow to ensure control of a contaminant. In addition, MCLs take into account an economic analysis to determine whether the benefits of enforcing the standard justify the costs. For Group A and Group B carcinogens, MCLs are usually promulgated at the 10⁻⁶ to 10⁻⁴ risk level. Secondary drinking water regulations are non-enforceable federal guidelines that take into account whether a chemical produces cosmetic effects, such as tooth or skin discoloration, or aesthetic effects, such as affecting the taste, odor, or color of drinking water. Because there are at least 15 different contaminants (e.g., aluminum, chloride, copper, and fluoride) in drinking water that are not considered to be health threatening, secondary maximum contaminant level (SMCL) guidelines have been established for public water systems that voluntarily test the water. These secondary standards give the public water systems guidance on removing the contaminants. In most cases, state health agencies and public water systems monitor and treat their drinking water for secondary contaminants.
in order to provide information and guidance concerning drinking water contaminants for which national regulations do not currently exist, the usepa health and ecological criteria division, office of water, in cooperation with the office of research and development, prepares health advisories (has). these detailed has are used to "estimate concentrations of the contaminant in drinking water that are not anticipated to cause any adverse noncarcinogenic health effects over specific exposure durations" [17]. they include a margin of safety to protect sensitive members of the population (e.g., children, the elderly, and pregnant women). has are not legally enforceable in the united states, are used only for guidance by federal, state, and local officials, and are subject to change as new information becomes available. included in the has is information on analytical and treatment technologies. has are provided for acute or short-term effects as well as chronic effects. the one-day, ten-day, and longer-term has are based on the assumption that all exposure to the contaminant comes from drinking water, whereas the lifetime ha takes into account other sources such as food and air. the following types of has have been developed [17]. one-day ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to one day of exposure. a one-day ha is generally based on data from acute human or animal studies involving up to 7 days of exposure. the protected individual is assumed to be a 10-kg child with an assumed drinking water intake (di) of 1 l/day. ten-day ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to ten days of exposure. a ten-day ha is generally based on subacute animal studies involving 7-30 days of exposure.
similarly to the one-day ha, the protected individual for the ten-day ha is assumed to be a 10-kg child with an assumed di of 1 l/day. longer-term ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to approximately seven years (10% of an individual's lifetime) of exposure, with a margin of safety. a longer-term ha is generally based on subchronic animal studies involving 90 days to 1 year of exposure. the protected individual is assumed to be a 10-kg child with an assumed di of 1 l/day and a 70-kg adult with an assumed di of 2 l/day. lifetime ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for a lifetime of exposure. a lifetime ha is generally based on chronic or subchronic animal studies. the protected individual is assumed to be a 70-kg adult with an assumed di of 2 l/day. a dwel is calculated and multiplied by an rsc of 20% to account for exposure from drinking water as well as other sources (food, air, etc.); therefore, the lifetime ha is derived similarly to the mclg. the following general formula is used to derive the one-day, ten-day, and longer-term has and the dwel:

ha or dwel (mg/l) = noael or loael (mg/kg/day) × bw (kg) / (uf × di (l/day)) (15)

health advisories for the assessment of carcinogenic risk. if a contaminant is recognized as a human or probable human carcinogen (group a or b), a carcinogenic slope factor (csf) is derived based on the techniques discussed above. the slope factor is then used to determine the concentrations of the chemical in drinking water that are associated with theoretical upper-bound excess lifetime cancer risks of 10^-4, 10^-5, or 10^-6. the following formula is used to calculate the concentration predicted to contribute an incremental risk level (rl) of 10^-4, 10^-5, or 10^-6:
cw (mg/l) = (rl × bw) / (csf × di) (16)

where cw is the concentration in drinking water at the desired rl (mg/l); rl is the desired risk level (10^-4, 10^-5, or 10^-6); bw is the assumed body weight of an adult human (70 kg); csf is the carcinogenic potency factor for humans (mg/kg/day)^-1; and di is the assumed water consumption of an adult human (2 l/day). if a dwel was calculated for a class a, b, or c carcinogen based on an rfd study (i.e., as a noncarcinogen), then the carcinogenic risk associated with lifetime exposure to the dwel can be calculated to assist the risk manager in assessing the overall risks. the theoretical upper-bound cancer risk associated with lifetime exposure to the dwel is calculated as follows:

risk = dwel × (2 l/day) × csf / 70 kg (17)

toxicity evaluation and human health risk assessment of surface and ground water

usepa is required by the clean water act of 1972 to develop, publish, and revise ambient water quality criteria (awqc). deriving awqc "involves the calculation of the maximum water concentration for a pollutant that ensures drinking water and/or fish ingestion exposures will not result in human intake of that pollutant (i.e., the water quality criteria level) in amounts that exceed a specified level based upon the toxicological endpoint of concern" [20]. in october 2000, usepa issued new guidelines [20] that replaced the 1980 awqc national guidelines [44]. the 2000 awqc guidelines incorporated significant scientific advances in the following key areas: cancer risk assessment (the 1986 cancer guidelines [7] vs the 1999 draft cancer guidelines [8, 43]); risk assessments for class c carcinogens using nonlinear low-dose extrapolation; noncancer risk assessments (benchmark dose approach and categorical regression); exposure assessments (consideration of non-water sources of exposure); and bioaccumulation in fish (bioaccumulation factors, bafs, are recommended for all compounds to calculate concentration in fish tissue).
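equations (16) and (17) are simple inverses of each other, which can be checked numerically. in this sketch the csf is a hypothetical placeholder, and the defaults mirror the 70 kg adult and 2 l/day intake stated in the text.

```python
def conc_at_risk_level(rl, csf, bw=70.0, di=2.0):
    """equation (16): cw (mg/l) = rl * bw / (csf * di),
    the water concentration predicted to pose incremental lifetime risk rl."""
    return rl * bw / (csf * di)

def risk_at_conc(conc_mg_l, csf, bw=70.0, di=2.0):
    """equation (17) generalized: lifetime risk = conc * di * csf / bw.
    applied to a dwel, this gives the risk associated with lifetime
    exposure at the dwel."""
    return conc_mg_l * di * csf / bw

# hypothetical csf of 0.1 (mg/kg/day)^-1 at a 10^-5 target risk:
cw = conc_at_risk_level(1e-5, csf=0.1)   # 1e-5 * 70 / (0.1 * 2) = 0.0035 mg/l
assert abs(risk_at_conc(cw, csf=0.1) - 1e-5) < 1e-18  # round trip recovers rl
```

the round-trip assertion is a useful sanity check: computing the concentration at a target risk and then the risk at that concentration should return the original target.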
in addition, the procedures for deriving awqc under the cwa were made more consistent with the procedures for deriving mclgs under the sdwa. this section will discuss guidelines from the methodology for deriving ambient water quality criteria for the protection of human health, hereafter referred to as the awqc methodology guidance [20], accessible at http://www.epa.gov/ost/humanhealth/method/index.html. state and tribal environmental agencies are responsible for developing ambient water quality standards (awqs) for each water body in the state, based on guidance provided by usepa [20] and the uses for which the water body has been designated (i.e., drinking water supply, recreation, fish protection, etc.). these designated uses are part of the water quality standards, provide a regulatory goal for the water body, and define the level of protection assigned to it. the watershed assessment, tracking & environmental results database (waters), accessible at http://www.epa.gov/waters/, provides information on the water body designations for each individual state and tribe. the exposure pathways typically evaluated for awqc are direct ingestion of drinking water obtained from the water body and consumption of fish/shellfish obtained from it. when an awqc is set, anticipated exposures from other sources (e.g., food, air) are taken into account for noncarcinogenic effects, and for carcinogenic effects evaluated by the margin of exposure (moe) approach (i.e., class c carcinogens, using the 1986 weight-of-evidence (woe) cancer guideline terminology). the amount of exposure attributed to each source compared to total exposure is called the relative source contribution (rsc) for that source. the rsc is typically set at 20%, but if a site-specific assessment is conducted for a particular water body and it can be demonstrated that other sources of exposure are not likely to occur, then the rsc can be set as high as 80%.
an exposure decision tree approach is described in the methodology guidance to assist in calculating a site-specific rsc for a water body [20]. the allowable dose (typically, the rfd) is then allocated via the rsc approach to ensure that the criterion is protective enough, given the other anticipated sources of exposure:

awqc = rfd × rsc × bw / (di + Σi fii × bafi) (18)

where
awqc = ambient water quality criterion (mg/l)
rfd = reference dose for non-cancer effects (mg/kg/day)
rsc = relative source contribution factor to account for non-water sources of exposure; may be either a percentage (multiplied) or an amount subtracted, depending on whether multiple criteria are relevant to the chemical
bw = human body weight (default = 70 kg for adults)
di = drinking water intake (default = 2 l/day for adults)
fii = fish intake at trophic level (tl) i (i = 2, 3, and 4); defaults for total intake are 0.0175 kg/day for the general adult population and sport anglers, and 0.1424 kg/day for subsistence fishers. trophic level breakouts for the general adult population and sport anglers are: tl2 = 0.0038 kg/day; tl3 = 0.0080 kg/day; and tl4 = 0.0057 kg/day
bafi = bioaccumulation factor at trophic level i (i = 2, 3, and 4), lipid normalized (l/kg).

the following equation is used for deriving awqc for chemicals evaluated with a nonlinear low-dose extrapolation (margin of exposure), based on guidance in the 1999 draft cancer guidelines:

awqc = (pod / uf) × rsc × bw / (di + Σi fii × bafi) (19)

where
pod = point of departure for carcinogens based on a nonlinear low-dose extrapolation (mg/kg/day), usually a loael, noael, or led10
uf = uncertainty factor for carcinogens based on a nonlinear low-dose extrapolation (unitless).

for carcinogens, only two water sources (i.e., drinking water and fish ingestion) are considered when awqc are derived. awqc for carcinogens are determined with respect to the incremental lifetime risk posed by a substance's presence in water, and are not set with regard to an individual's total risk from all sources of exposure [20].
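the noncancer awqc relation above can be sketched directly, using the default intake values quoted in the text. the rfd and the bafs in this sketch are hypothetical placeholders (a baf of 1 l/kg at every trophic level makes the fish pathway nearly negligible relative to the 2 l/day drinking water term).

```python
# default trophic-level fish intakes (kg/day) for the general adult
# population and sport anglers, as quoted in the text
DEFAULT_FISH_INTAKE = {2: 0.0038, 3: 0.0080, 4: 0.0057}

def awqc_noncancer(rfd, rsc=0.20, bw=70.0, di=2.0,
                   fish_intake=DEFAULT_FISH_INTAKE, baf=None):
    """awqc (mg/l) = rfd * rsc * bw / (di + sum_i fi_i * baf_i).
    baf maps trophic level -> lipid-normalized bioaccumulation factor (l/kg)."""
    if baf is None:
        baf = {i: 1.0 for i in fish_intake}  # placeholder bafs
    denom = di + sum(fish_intake[i] * baf[i] for i in fish_intake)
    return rfd * rsc * bw / denom

# hypothetical rfd of 0.005 mg/kg/day, placeholder bafs of 1 l/kg:
# denom = 2 + 0.0175 = 2.0175, awqc = 0.005 * 0.2 * 70 / 2.0175
print(awqc_noncancer(0.005))
```

substituting a large baf at any trophic level shows how a strongly bioaccumulating chemical drives the criterion down, which is why the guidance recommends bafs for all compounds.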
the 1986 cancer guidelines are the basis for the iris risk numbers that were used to derive the current awqc, except for a few compounds developed using the revised cancer guidelines [14, 45]. each new assessment applying the principles of the 1999 draft cancer guidelines [8, 43] will be subject to peer review before being used as the basis of revised, updated awqc. the cancer-based awqc is calculated using the risk-specific dose (rsd) and the other input parameters listed below. the rsd and the awqc for carcinogens are calculated for the specific targeted lifetime cancer risk (i.e., 10^-6, 10^-5, 10^-4) using the following two equations:

rsd = target cancer risk / csf (20)

awqc = rsd × bw / (di + Σi fii × bafi) (21)

where
rsd = risk-specific dose (mg/kg/day)
target cancer risk = 10^-6, 10^-5, or 10^-4 (lifetime incremental risk)
csf = cancer slope factor (mg/kg/day)^-1.

exposure parameters based on a site-specific or regional basis can be substituted to reflect regional or local conditions and/or specific populations of concern. these include the relative source contribution, the fish consumption rate, and the baf (including factors used to derive bafs, such as the concentration of particulate organic carbon applicable to the awqc (kg/l) or the concentration of dissolved organic carbon applicable to the awqc (kg/l), the percent lipid of fish consumed by the target population, and the species representative of given trophic levels). states and tribes are encouraged to make adjustments using the information and instructions provided in the awqc methodology guidance [20]. the national water quality standards database (wqsdb), at the web address http://www.epa.gov/wqsdatabase/, provides access to several wqs reports that provide information about designated uses, water body names, state numeric water quality standards, and epa-recommended numeric water quality criteria. the wqsdb allows users to compare wqs information across the nation using standard reports.
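equations (20) and (21) can be sketched the same way as the noncancer case. the csf and bafs here are hypothetical placeholders; only the 70 kg / 2 l/day defaults and the trophic-level fish intakes come from the text. note that no rsc appears: for linear carcinogens the criterion is set against incremental risk from water and fish only.

```python
DEFAULT_FISH_INTAKE = {2: 0.0038, 3: 0.0080, 4: 0.0057}  # kg/day, from the text

def rsd(target_risk, csf):
    """equation (20): risk-specific dose (mg/kg/day) = target risk / csf."""
    return target_risk / csf

def awqc_cancer(target_risk, csf, bw=70.0, di=2.0,
                fish_intake=DEFAULT_FISH_INTAKE, baf=None):
    """equation (21): awqc (mg/l) = rsd * bw / (di + sum_i fi_i * baf_i)."""
    if baf is None:
        baf = {i: 1.0 for i in fish_intake}  # placeholder bafs
    denom = di + sum(fish_intake[i] * baf[i] for i in fish_intake)
    return rsd(target_risk, csf) * bw / denom

# hypothetical csf of 0.5 (mg/kg/day)^-1 at a 10^-6 target risk:
# rsd = 2e-6 mg/kg/day, awqc = 2e-6 * 70 / 2.0175 mg/l
print(awqc_cancer(1e-6, csf=0.5))
```

because risk scales linearly, the criteria at the 10^-5 and 10^-4 levels are simply 10 and 100 times the 10^-6 value.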
some states and tribes use an incidental ingestion (ii) value instead of a di value when the water body is used for recreational purposes and not as a source of drinking water; however, an ii value is not used to develop national awqc. the default value for ii is 0.01 l/day and is assumed to result from swimming and other activities. the fish intake value is assumed to remain the same. besides protection of human health, awqc are developed based on other criteria, such as organoleptic effects, aquatic life protection, sediment quality protection, nutrient criteria, microbial pathogens, biocriteria, excessive sedimentation, flow alterations, and wildlife criteria. for example, the national recommended water quality criteria table (http://www.epa.gov/ost/standards/wqcriteria.html) lists freshwater and saltwater criteria maximum concentration (cmc) values, which are the acute limits for a priority pollutant for the protection of aquatic life in freshwater or saltwater. the freshwater or saltwater criterion continuous concentration (ccc) value is the chronic limit for a priority pollutant for the protection of aquatic life in freshwater or saltwater, respectively. the table also includes criteria for organoleptic effects for 23 pollutants, developed to prevent undesirable taste and/or odor imparted by them to ambient water. in some cases, a water quality criterion based on organoleptic effects or aquatic life protection will be more stringent than a criterion based on toxicologic endpoints. information and links to guidance documents relating to these subjects may be reached from the office of water, water quality criteria and standards program page at http://www.epa.gov/waterscience/standards/. as more knowledge is gained about the waste generated and disposed of in landfills by our society, there is growing concern about the toxic effects that this waste has on our environment as well as on animal and human health.
over the past decade, there have been numerous attempts to recycle various waste products generated by our society. this section will review some of the recent literature on recycled hazardous waste materials. recycled concrete pavement used as aggregate for the construction of highways can produce effluent with a high ph that can enter the underdrains [46]. when portland cement is recycled, the concrete consists of limestone and minerals, of which 60-65% is lime (cao), silica (sio2), alumina (al2o3), and iron oxide (fe2o3). ca(oh)2 is sparingly soluble in water, and the saturated solution has a ph of 12.45 at 25 °c. the ph of the water effluent in underdrains is approximately 11-12. at this ph, caco3 precipitates out and forms deposits on the screen [46]. the deposition on the screens produces clogging and scale formation in the underdrain, thereby causing vegetative kill around the outlet. in addition to recycled concrete, rubber is recycled for asphalt pavements. recycling of rubber provides a means of disposal of scrap tires and reduces the quantity of construction materials needed for the asphalt. asphalt pavement contains hot mix asphalt with and without crumb rubber modifier. the use of rubber tires reduces the weight of the asphalt and provides a good drainage medium, as well as extending the life of the asphalt [47]. while there is an apparent benefit to recycling rubber, the minnesota pollution control agency has found that the use of waste tires in subgrade roadbeds can leach contaminants into the run-off water. in acidic conditions, leaching of barium, cadmium, lead, chromium, selenium, and zinc occurred from the asphalt, while in basic conditions there was leaching of polynuclear aromatic hydrocarbons. thus, the recommended allowable levels (rals) may be exceeded for drinking water standards in areas where there is recycled rubber in the asphalt.
paper or wood itself does not contain hazardous chemicals unless the paper undergoes recycling. the recycling process requires de-inking of waste paper prior to recovery of the fiber, generating a sludge that contains particles of ink and fibers too short to be converted into a finished paper product [48]. de-inking chemicals such as sulfur, chlorine, cadmium, and fluorine are present in the sludge generated. a sludge is any solid, semi-solid, or liquid waste generated from a municipal, commercial, or industrial wastewater treatment plant, water supply treatment plant, or air pollution control facility, exclusive of treated effluent from a wastewater treatment plant [48]. thus, hazardous waste can be generated from the paper recycling process. a commodity used by numerous industries is plastic. because of the enormous amount of plastic disposed of by consumers on a daily basis, it has become a commonly recycled item at many facilities. some metal sites will recycle the plastic insulation generated by their facility. the recycled plastic from such a facility generally includes metals such as lead, copper, manganese, and zinc, as well as dioxins, polychlorinated naphthalenes, and polychlorinated biphenyls. a leachate (contaminated run-off water) from the plastic "fluff" is formed during the recycling process. the leachate runs off into the water drains, carrying hazardous chemical residue into soil and ground water. the plastic fluff is generally recycled on site into tiles, cushions, traffic cones, fenders, and highway barriers. the non-recyclable material and contaminated soil are generally taken to an off-site landfill. another common way to recycle plastic is to use the "sink-float" process, in which paper, fiber, and metal can be separated from the plastics and then recycled. the "sink-float" process uses water: the heavy items sink and the light items float [49].
it has been demonstrated that recycled plastics can be used as a construction material and an alternative to lumber. this product is made from used bottles collected at curbside for recycling. the recycled plastics undergo sorting to remove unpigmented polyethylene milk/water jugs and polyethylene terephthalate carbonated beverage bottles. the leftover plastic material is referred to as curbside tailings (ct). ct consists of approximately 80% polyolefin (polyethylene and polypropylene), with the remaining percentage made up of polyethylene terephthalate, polystyrene, polyvinyl chloride, and other plastics [50]. the ct product has reasonable strength compared to wood. weis et al. (1992) evaluated three ct recycled plastic formulations in fiddler crabs, snails, and algae [50]. it was found that limb regeneration in the fiddler crabs was accelerated with all three formulations, but the formulations had no effect on fertilized eggs or larval development. there was, however, a significant reduction in the sperm fertilization success rate [50]. furthermore, none of the three ct plastic formulations had an effect on the survival rate of the snails or the algal species. the presence of metals in sludge and wastewater is a current problem. for instance, agricultural land fertilized with sludge generally also receives cadmium (cd2+) from aerial deposition and phosphatic fertilizers. cd2+ is considered a hazardous chemical and has been shown to produce toxicity in the lung and kidney and to be carcinogenic in rats [51]. the highest concentrations of cd2+ are found in tobacco, lettuce, spinach, and other leafy products/vegetables. using crop uptake data from field trials, it is possible to relate the potential human dietary intake of cd2+, on which hazard depends, to soil concentrations of cadmium [52]. transfer via farm animals to meat and dairy products for human consumption is thought to be minimal, even allowing for some direct ingestion of sludge-treated soil by the animals.
background soil contains 0.1 to 1.0 mg cd2+/kg. ninety percent of the cadmium in raw sewage is transferred to the sludge, and of that 90%, about 70% is removed primarily by sedimentation. in order for cd2+ uptake by roots to occur, cd2+ must be present in its soluble form adjacent to the root membrane for some finite period [52]. generally, a decrease in soil ph will enhance the solubility of cd2+, which will increase crop uptake of cd2+. in 1984, who/usepa agreed that the maximum acceptable daily uptake of cd2+ was 70 µg/day, and that 200 µg cd2+/day over a 50-year period would be necessary to produce toxicity to the kidney. farm animals fed fodder crops grown on sludge-treated soil will absorb approximately 5% of the cd2+ ingested [52]. in addition to recycling cd2+, lead (pb)-edta wastewater also undergoes recycling. edta is a chelating agent used in the soil washing process for the decontamination of pb-contaminated soil. kim et al. [53] outline a method to recycle pb-edta wastewater by substituting fe3+ ions for the pb in the pb-edta complex at low ph, followed by precipitation of the pb ions with phosphate or sulfate ions. the fe3+-edta complex will precipitate at high ph with naoh. the recycled edta solution can be reused several times without losing its extractive power [53]. recycling computers can be extremely hazardous if they are not properly disposed of. many parts of the computer are toxic. to begin with, the cathode ray tube (crt) glass may be classified as a hazardous waste due to its high pb concentration. the liquid crystal display (lcd), which contains benzene material for the liquid crystal, is also considered hazardous. in addition, the mercury switch, mercury relay, lithium battery, ni-h battery, ni-cd battery, and polychlorinated biphenyl (pcb) capacitor are all hazardous materials. because of this, taiwan has recently established guidelines for the proper disposal of computers and/or computer parts [54].
the nine guidelines are: (i) landfill or incineration of scrap computers shall be avoided; (ii) the phosphorescent coatings that have been applied to the glass panel of the crt must be removed; (iii) all batteries (li, ni-cd, ni-h) must be removed by non-destructive means; (iv) all pcb capacitors with a diameter greater than 1 cm and a height greater than 2 cm must be removed; (v) all mercury-containing parts must be removed; (vi) the crt must be ventilated before being stored inside a building; (vii) the high-pb content funnel glass of the crt must be properly treated; (viii) the lcd of a notebook computer must be removed by non-destructive means; and lastly, (ix) plastic that contains the flame retardant bromine shall be treated properly. hopefully, this model can be used in other countries where computer waste is becoming a major environmental concern. organic solvents have many applications in industry, such as formulation of products, thinning of products prior to use, or cleaning of materials by removal of contaminants. during these applications, solvent emission and waste solvent generation occur. most organic solvents are known to have adverse effects on both human health and the environment. solvents may affect the body through inhalation and skin contact and lead to either acute or chronic poisoning [55]. the effects of acute poisoning include narcosis, irritation of the throat, eyes, or skin, dermatitis, and even death, while the effects of chronic poisoning include damage to the blood, lung, kidney, gastrointestinal system, and/or nervous system. in addition, many solvents are inflammable in nature. waste management of organic solvents includes source reduction, recycling, treatment, and disposal [55].
case studies indicate that dry cleaning facilities use perchloroethylene (perc), and workers around the cleaning machines are subject to high health risks; thus, vapor recovery systems are used to reduce perc emissions, especially from older machines [55]. riess et al. [56] evaluated the recyclability of flame-retarded polymers containing brominated flame retardants from 108 televisions (tvs) and 78 personal computers (pcs) obtained from a recycling company. the flame-retarded polymers identified in the tvs were: 54% high-impact polystyrene, 24% acrylonitrile butadiene styrene, 15% polystyrene, and 7% polyphenylene oxide-polystyrene. the polymers found in the pcs were: 43% acrylonitrile butadiene styrene, 35% polyphenylene oxide-polystyrene, 18% high-impact polystyrene, 3% polystyrene, and 1% polyvinyl chloride. recycling may be practical if 75% new material is added to the mixture [56]. the denver potable reuse pilot project began in 1968 with the goals of recycling wastewater effluent to achieve potable water quality while being economically competitive with conventional technology. moreover, this project sponsored the first large-scale risk assessment studies using experimental animals [57]. after ten years, the pilot project was converted to a demonstration treatment plant to address many of the technical and non-technical issues. the objectives of the reuse demonstration project were "(i) to establish end product safety, (ii) to demonstrate the reliability of the process, (iii) to generate public awareness, (iv) to generate regulatory agency acceptance, and (v) to provide data for a large-scale implementation" [57]. however, ensuring end-product water safety proved difficult to demonstrate because the health standards established for drinking water were not intended to apply to treated wastewaters. thus, additional criteria were used to prove that the effluent was suitable for human consumption.
the criteria used in this project were:
- the product was compared with the national primary and secondary drinking water regulation values
- the product was compared with federal or state regulated parameters
- effluent levels were compared with the levels suggested to be hazardous
- concentrations in the product water were compared to denver's current drinking water criteria or other "acceptable" water supplies in the u.s. and/or worldwide
- whole-animal studies (i.e., chronic toxicity, oncogenicity, and reproductive tests) were conducted using denver's current drinking water as a comparison standard
the denver project used two dosage groups per water sample: reclaimed water from the demonstration plant with reverse osmosis treatment (ro) and denver drinking water from the foothills water treatment plant (dw). ro and dw were administered to fischer 344 rats and b6c3f1 mice at dosages at least 500 times the amounts found in the original water samples. ultrafiltration water treatment samples (uft) were administered only to rats, at the high dose (500x), and distilled water was used as the control in both the rat and mouse studies. in addition to the chronic toxicity studies, reproductive toxicity studies were performed to identify potential adverse effects on reproductive performance, intrauterine development, and growth and development of the offspring. the teratology phase was designed to identify potential embryotoxicity and teratogenicity. administration of ro, uft, and dw water at 500 times the amounts found in the original water samples for 104 weeks in rats did not result in any toxicologic or carcinogenic effects [57]. the survival rate was slightly higher among the female rats (64%-84%) compared to the male rats (52%-70%). there were a variety of neoplasms seen in all treatment groups (table 7).
the c-cell tumors in the thyroids were not considered treatment related because these neoplasms were within the anticipated ranges for the age and strain of the rat. similar results were seen in the mouse chronic studies, where there was no toxicity or carcinogenicity after 104 weeks of high-dose treatment and the survival rate was identical to that of the rats. the organs most affected by the treatment were the hematopoietic system, liver, lung, and pituitary gland [57]. the remarkable finding of the reproductive studies was "the absence of treatment-related effects on reproductive performance, growth, mating capacity, survival of offspring, or fetal development in any of the treatment groups" [57]. the denver project met the objectives outlined at the start of the project, and all three of the toxicity studies demonstrated that concentrations 500 times the original amounts seen in the sample water did not cause any notable toxicity. thus, secondary wastewater can be recycled into safe drinking water for human consumption. chemical mixtures have always been an issue of concern when assessing toxicity to the environment and to humans. an interagency agreement between atsdr and the ntp resulted in participation in a public health service (phs) activity related to the superfund act (cercla, the comprehensive environmental response, compensation and liability act) [58]. yang was the lead scientist at the national institute of environmental health sciences (niehs)/ntp for the development of the "superfund toxicology program". particular focus centered on chemical mixtures of environmental concern, especially groundwater contaminants derived from hazardous waste disposal and agricultural activities. yang states that obtaining a "representative" sample is practically impossible [58]: a core sample from one location at a site will certainly differ from a core sample from a different location at the same site.
also, core samples taken from the exact same location at different times of day and/or on different days will differ, because weather, activity at the site, and the composition of the waste can change and degrade or synthesize new compounds. thus, yang proposed a strategy for studying chemical mixtures [58]: 1. study chemical mixtures between binary and complex mixtures, to avoid duplicating earlier studies that evaluated the two extremes; 2. study chemically defined mixtures, to make determination and mechanistic studies manageable; 3. study chemical mixtures related to groundwater contamination, because groundwater contamination is among the most critical environmental issues; 4. study chemical mixtures at environmentally realistic concentrations, to assess the potential health effects of long-term, low-level exposure to environmental pollution; 5. study chemical mixtures with the potential for lifetime exposure. a chemical mixture of groundwater contaminants from hazardous waste sites and agricultural activities was created. this formulation contained 25 chemicals that simulated groundwater contamination, as shown in table 8. the concentrations selected represent the average survey values of 180 hazardous waste disposal sites representing all 10 usepa regions. even though such a mixture may never exist in reality, new insights may be gained to help extrapolate potential health effects from laboratory animals to humans. for most of the endpoints examined in this study, the results were negative. the negative results were significant because the various mixtures were tested at 10- to 100-fold, or several orders of magnitude, higher than potential human exposure levels [58].
insights gained from yang's project were: (i) the effects will be subtle and marginal; (ii) toxicologic interactions are possible at environmentally realistic levels of exposure; (iii) toxic responses may come from unconventional toxicologic endpoints (immunosuppression, myelotoxicity); (iv) subclinical residual effects may become more interactive with subsequent insults from chemical, physical, and/or biological agents; and (v) negative results do not indicate safety for humans, because the studies were done on rodents. subsequent work showed that low doses of this mixture increased the acute toxicity of high doses of known hepatic and renal toxicants [59]. recently, niehs has begun to focus on simpler mixtures of chemicals that share common mechanisms of action rather than complex mixtures. over the past several decades, much effort has been made to establish national guidance on proper waste handling and disposal techniques, and many local, state, and federal agencies now provide guidelines to protect the surface and ground waters for humans. these guidelines also provide methods and approaches used to evaluate potential health effects and assess risks from contaminated source media (i.e., soil, air, and water), as well as establish standards of exposure or health benchmark values in the different media that are not expected to produce environmental or human health impacts. the risk assessment methodology used by the various regulatory agencies, with its steps of (i) hazard identification, (ii) dose-response assessment, (iii) exposure assessment, and (iv) risk characterization, balances the risks and benefits and sets the "acceptable" target levels of exposure for ground water and surface water.
different regulatory or state agencies may recommend different exposure parameters based on scientific policy or risk management decisions.
the bureau of national affairs: waste management guide
industrial waste recycling. in: jessup dh (ed) waste management guide: laws, issues, and solutions. the bureau of national affairs
revised rcra inspection manual. oswer directive 9938
quantitative risk assessment for environmental and occupational health
casarett and doull's toxicology: the basic science of poisons
the emerging field of ecogenetics
guidelines for carcinogen risk assessment. 51 fr 33992
guidelines for carcinogen risk assessment. review draft. office of research and development
a weight-of-evidence scheme for assessing interactions in chemical mixtures
approaches and challenges in risk assessments of chemical mixtures. in: yang rsh (ed) toxicology of chemical mixtures
health effect test guidelines: acute toxicity testing. us epa, office of prevention, pesticides, and toxic substances
chlorethoxyfos: review of a repeated exposure inhalation study and evaluation of that study by the hazard identification assessment review committee. us epa, office of prevention, pesticides, and toxic substances
biologic markers in risk assessment for environmental carcinogens
health effect test guidelines: combined chronic toxicity/carcinogenicity. us epa, office of prevention, pesticides, and toxic substances
methods for derivation of inhalation reference concentrations and application of inhalation dosimetry. us epa, office of research and development
soil screening guidance: technical background document. us epa, office of waste and emergency response
health advisories of drinking water contaminants. us epa, office of water and health advisories
assessment and management of chemical risks, vol 1
risk assessment in the remediation of hazardous waste sites
methodology for deriving ambient water quality criteria for the protection of human health
issues in qualitative and quantitative risk analysis for developmental toxicology
toxicology information resources at the environmental protection agency
risk assessment guidance for superfund, vol 1: human health evaluation manual (part a)
assessment protocol for hazardous waste combustion facilities
can we assign an upper limit to skin permeability?
international life science institute (ilsi) (1999) exposure to contaminants in drinking water: estimating uptake through the skin and by inhalation
memorandum on body weight estimates based on nhanes iii data, including data tables and graphs. analysis conducted and prepared by westat, under epa contract number 68-c-99-242
usda (1998) 1994-1996 continuing survey of food intakes by individuals and 1994-1996 diet and health knowledge survey
measures of compounding conservatism in probabilistic risk assessment
guiding principles for monte carlo analysis. epa/630/r-97/001, risk assessment forum
chemical risk assessment numbers: what should they mean to engineers?
risk assessment guidance for superfund, vol 1: human health evaluation manual (part e, supplemental guidance for dermal risk assessment)
derivation of toxicity values for dermal exposure
supplemental guidance to rags: region iv bulletins, human health risk assessment. waste management division
supplementary guidance for conducting health risk assessment of chemical mixtures
guidelines for the health risk assessment of chemical mixtures
the toxicity of poisons applied jointly
a practical guide to understanding, managing, and reviewing environmental risk assessment reports
addendum: region 6 risk management. draft human health risk assessment protocol for hazardous waste combustion facilities
epa year 2000 guidance document. contract number 68-w1-0055
guidelines and methodology used in the preparation of health effect assessment chapters of the consent decree water criteria documents
implementing the food quality protection act. us epa, office of prevention, pesticides, and toxic substances
remediation of hazardous effluent emitted from beneath newly constructed road systems and clogging of underdrain systems
assessment of water pollutants from asphalt pavement containing recycled rubber in rhode island. the rhode island department of transportation
waste-to-energy plant for paper industry sludges disposal: technical-economic study
superfund at work: hazardous waste cleanup efforts nationwide. us epa, solid waste and emergency response
toxicity of construction materials in the marine environment: a comparison of chromated-copper-arsenate-treated wood and recycled plastic
the control of the heavy metals health hazard in the reclamation of wastewater sludge as agricultural fertilizer
cadmium: a complex environmental problem, part ii
recycling of lead-contaminated edta wastewater
management of scrap computer recycling in taiwan
management, disposal and recycling of waste industrial organic solvents in hong kong
analysis of flame retarded polymers and recycling materials. chemosphere
health effect studies on recycled drinking water from secondary wastewater. in: yang rsh (ed) toxicology of chemical mixtures
toxicology of chemical mixtures derived from hazardous waste sites or application of pesticides and fertilizers.
in: yang rsh (ed) toxicology of chemical mixtures
toxicology studies of a chemical mixture of 25 groundwater contaminants: hepatic and renal assessment, response to carbon tetrachloride challenge, and influence of treatment-induced water restriction
texas natural resource conservation commission (1999) texas risk reduction program rule
review draft addendum to the methodology for assessing health risks associated with indirect exposure to combustor emissions
estimating exposure to dioxin-like compounds. review draft
development of human health-based and ecologically-based exit criteria for the hazardous waste identification project. office of solid waste, vols i and ii
intake = (cw · ir · ef · ed) / (bw · at) a
a a number of studies have shown that an age-adjusted approach should be used to calculate intakes for children for carcinogens, to take into account the difference in ingestion rates, body weights, and exposure duration for children from 1 to 6 years old and others from 7 to 31 years [16]. b the exposure parameters were taken from the texas risk reduction program rule [60] and are provided as examples only; different regulatory or state agencies may recommend different exposure parameters based on scientific policy or risk management decisions [20]. c use only when an rfd is based on health effects in children [20]. d the office of water is in the process of preparing an exposure assessment technical support document in which an age-adjusted approach will be used to calculate fish intakes for children for carcinogens, to take into account the difference in ingestion rates, body weights, and exposure duration for children from 1 to 6 years old and others from 7 to 31 years [20].
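the intake equation above, intake = (cw · ir · ef · ed) / (bw · at), can be written as a small function. the parameter values below are hypothetical examples only (as note b says, agencies may specify different values), and at is taken in days so the result comes out in mg/kg-day:

```python
# sketch of the chronic daily intake equation for water ingestion;
# all parameter values below are hypothetical, not regulatory defaults.

def intake_mg_per_kg_day(cw, ir, ef, ed, bw, at_days):
    """cw: concentration in water (mg/L); ir: ingestion rate (L/day);
    ef: exposure frequency (days/yr); ed: exposure duration (yr);
    bw: body weight (kg); at_days: averaging time (days) --
    ed * 365 for noncarcinogens, 70 * 365 for carcinogens."""
    return (cw * ir * ef * ed) / (bw * at_days)

# hypothetical residential adult, noncarcinogenic effects (at = ed)
adult = intake_mg_per_kg_day(cw=0.01, ir=2.0, ef=350, ed=30,
                             bw=70, at_days=30 * 365)
print(f"{adult:.2e} mg/kg-day")
```

an age-adjusted child intake (note a) would instead sum separate terms for the 1-to-6 and 7-to-31 year age segments, each with its own ingestion rate, body weight, and exposure duration.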
for copcs whose log kow < 4.0: cf = cw · bcf
for copcs whose log kow > 4.0: cf = cw · baf
for dioxins, furans, and polychlorinated biphenyls: cf = csed · bsaf
cf = chemical concentration in fish (mg/kg), fresh weight (fw)
cw = chemical concentration in water (mg/l)
bcf = bioconcentration factor (l/kg fw)a
baf = bioaccumulation factor (l/kg fw)b
csed = chemical concentration in sediment (mg/kg)
bsaf = biota-sediment accumulation factor (unitless)c
a please refer to reference [25] for a detailed discussion of procedures used to calculate chemical concentration in fish; different regulatory or state agencies may recommend different procedures based on scientific policy or risk management decisions [20, 44]. b please refer to appendix a-3 of reference [25] for bcf, baf, and bsaf values and procedures for calculating these values; also, please refer to [20, 44]. c bsafs are used to account for the transfer of copcs from the bottom sediment to the lipid in fish [25, 61-63].
"organic compounds, non-steady state"; not applicable for inorganics
cinh = the concentration of copc at the exchange boundary (mg/m3)
cw = chemical concentration in water (mg/l)
vf = volatilization factor [(mg/m3)/(mg/l-h2o)]a
ef = exposure frequency (days/year)b
ed = exposure duration (years)b
at = averaging time in years (period over which exposure is averaged)b
a specific fate and transport models are used to derive volatilization factors to quantify the transfer of volatile copcs from ground water into an enclosed space, from ground and surface waters into ambient air, etc.; these fate and transport models are discussed elsewhere in this book. b the exposure parameters for ef, ed, and at from appendix a-1 can be used for the residential adult, residential child, and commercial/industrial worker for some pathways, but site-specific exposure parameters may need to be developed for other pathways. key: cord-023747-mvq6353a authors: ascherio, alberto; munger, kassandra l.
title: epidemiology of multiple sclerosis: environmental factors date: 2009-12-25 journal: nan doi: 10.1016/b978-1-4160-6068-0.00004-8 sha: doc_id: 23747 cord_uid: mvq6353a this chapter discusses the environmental factors associated with the epidemiology of multiple sclerosis. the epidemiologic evidence points to three environmental risk factors (infection with the epstein-barr virus (ebv), low levels of vitamin d, and cigarette smoking) whose association with multiple sclerosis (ms) seems to satisfy in varying degrees most of the criteria that support causality, including temporality, strength, consistency, biologic gradient, and plausibility. none of these associations, however, has been tested experimentally in humans, and only one (vitamin d deficiency) is presently amenable to experimental interventions. the evidence, albeit more sparse and inconsistent, linking other environmental factors to ms risk is summarized. epidemiologic clues to the hypothetical role of infection in ms are complex and often seem to point in opposite directions. ecological studies, database/linkage analyses, and longitudinal studies of sunlight exposure and vitamin d are reviewed. biologic mechanisms for smoking and increased risk of ms could be neurotoxic, immunomodulatory, or vascular, or they could involve increased frequency and duration of respiratory infections. some other possible risk factors include diet and hepatitis b vaccine. such has been the case, for example, with interventions to reduce lung cancer incidence by reducing exposure to tobacco smoke.
as discussed in this chapter, epidemiologic evidence points to three environmental risk factors (infection with the epstein-barr virus (ebv), low levels of vitamin d, and cigarette smoking) whose association with multiple sclerosis (ms) seems to satisfy in varying degrees most of the criteria that support causality, including temporality (i.e., the cause must precede the effect), strength, consistency, biologic gradient, and plausibility. none of these associations, however, has been tested experimentally in humans, and only one (vitamin d deficiency) is presently amenable to experimental interventions. this chapter also summarizes the evidence, albeit more sparse and inconsistent, linking other environmental factors to ms risk. for many years, it appeared that the "who, where, and when" of ms epidemiology was well understood. however, some aspects of ms epidemiology may be changing, notably the observations of an attenuation of the latitude gradient 3, 4 and the increasing female-to-male ratio. 5 in this section, we discuss the "classic" view of ms epidemiology, some of which has been known for more than 50 years, and then some recent developments that may provide new clues to the etiology of ms. ms is the most common neurologic disease in young adults. incidence rates are low in childhood and adolescence (<6/100,000/year), highest in the middle to late twenties and early thirties (11 to 18/100,000/year in high-risk populations), and gradually decline thereafter, with rates less than 9/100,000/year among those older than 45 years of age. 3, 6 women are approximately twice as likely as men to develop ms, 7, 8 and the lifetime risk among white women is about 1 in 200. 3, 9 ms exhibits a worldwide latitude gradient, with high prevalence and incidence in northern europe, 7 canada, 10, 11 the northern united states, 3, 12, 13 and southern australia 14 and decreasing prevalence and incidence in regions closer to the equator.
15 exceptions to the latitude gradient exist and include a lower than expected prevalence in japan 16 and higher than expected prevalence and incidence in the mediterranean islands of sardinia and sicily. 7 kurtzke 17 summarized the early descriptive studies by depicting areas of high (≥30/100,000), medium (5 to 29/100,000), and low (<5/100,000) prevalence of ms; we have updated his figures with more recent prevalence estimates 7, 16, [18-24] (fig. 4-1). a more comprehensive review of ms incidence and prevalence worldwide was published in 2005. 18 it is important to note that differences in estimated incidence across countries or time periods can result from differences in study design, case ascertainment, or diagnostic criteria, rather than from real changes in disease occurrence. differences in prevalence are even more difficult to interpret, because they may reflect increased survival or earlier diagnosis, both of which can occur even if the incidence is the same. 25 in spite of these limitations, the collective data do support a higher risk of ms at higher latitudes, both north and south of the equator. the existence of the latitude gradient alone is not enough to support an environmental component, because it could be explained by genetic differences. 18, 26 however, studies of ms incidence and prevalence among migrant populations also support a role for environmental factors. these studies have limitations, in that migrants may be different from nonmigrants in socioeconomic and health status, may not utilize local health care resources, and therefore may be less likely to be diagnosed; in addition, enumeration of the immigrant population for disease statistics may be difficult or impossible. 25, 27 nevertheless, migrant studies on ms collectively support a decreased prevalence of ms among those who migrate from high- to low-risk areas, particularly if the migration occurs before 15 years of age.
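the caveat that prevalence can change even at constant incidence follows from the steady-state approximation prevalence ≈ incidence × mean disease duration. a minimal illustration, with invented numbers rather than the ms rates quoted above:

```python
# steady-state relation: prevalence ≈ incidence × mean disease duration.
# numbers are invented to illustrate the interpretation caveat.

def steady_state_prevalence(incidence_per_100k_yr, mean_duration_yr):
    """valid approximation when the disease is rare and rates are stable."""
    return incidence_per_100k_yr * mean_duration_yr

# identical hypothetical incidence, different survival after diagnosis
shorter_survival = steady_state_prevalence(4.0, 25)  # -> 100 per 100,000
longer_survival = steady_state_prevalence(4.0, 40)   # -> 160 per 100,000
```

with incidence fixed at 4/100,000/year, extending mean duration from 25 to 40 years (e.g., through better survival or earlier diagnosis) raises prevalence from 100 to 160 per 100,000 with no real change in disease occurrence.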
27 moreover, one study found a decreased prevalence of ms in all age groups among immigrants from europe to australia, suggesting that the protective effect may extend into adulthood as well. 28 studies within the united states have also supported a decreased risk of ms among migrants from northern (>41° to 42° n), high-risk parts of the country to southern (<37° n), low-risk regions. 29, 30
[figure 4-1 worldwide prevalence estimates of multiple sclerosis. blue, more than 90 cases per 100,000 population; purple, 60 to 89/100,000; green, 30 to 59/100,000; orange, 5 to 29/100,000; yellow, fewer than 5/100,000; white, insufficient data. an asterisk indicates that data for that region or country are older and should be interpreted cautiously.]
the study of u.s. veterans 30 is particularly compelling because of its large sample size and rigorous design. in this study, kurtzke observed that individuals who were born in the northern united states but migrated to the southern part of the country before joining the military had a 50% reduced risk of ms compared with those who did not migrate (fig. 4-2). fewer studies have been conducted among migrants from low- to high-risk areas. in general, these studies have found that a low risk of ms is retained after migration, but that the offspring of migrants have a higher risk of ms, similar to that in the host country. 27, [31-35] in the u.s. veterans study, 30 individuals who were born in the southern part of the country and migrated to northern states before entering the military had a 20% increased risk of ms, and those migrating from the middle tier of states to northern regions had a 31% increased risk (see fig. 4-2). more recently, in a study conducted in the french west indies (a low-risk area), an increased risk of ms was found among individuals who had moved to france (a high-risk area) and then returned to the west indies.
the increase in risk was greatest among those who migrated to france before the age of 15 years. 36 the incidence of ms appears to have been relatively stable over the past 50 years in several high-risk areas, including denmark 6 and the northern united states, 37 but there is some evidence that ms may be increasing in japan 16 and in parts of southern europe, most notably in sardinia. 38 interestingly, the island of malta has continued to experience low, stable rates of ms 39 despite its proximity to sardinia and sicily and a high frequency of the ms-associated hla-drb1*1501 allele. 40 there is also evidence of an increased female-to-male ratio in ms incidence. in canada, the female-to-male ratio apparently has increased from approximately 2:1 among individuals born in the 1930s and 1940s to approximately 3:1 among those born in the 1970s. 5 this change is strongly correlated with, and could be at least in part explained by, a sharp increase in the female-to-male ratio in smoking behavior (unpublished data), because smoking is a strong risk factor for ms (see later discussion). an attenuation of the latitude gradient was observed independently in a population of u.s. nurses 3 and in u.s. military veterans. 4 among nurses born between 1920 and 1946 and among veterans of world war ii or the korean conflict, those living in the northern tier of states (>41° to 42° n) had a greater than threefold increased risk of ms compared to those in the southern tier (<37° n). among vietnam and gulf war veterans, however, this gradient was attenuated to less than twofold, and among nurses born between 1947 and 1964 it completely disappeared (fig. 4-3). because the methods used to determine rates of ms in the early and later cohorts were the same, and because the individuals in the cohorts had similar socioeconomic status 3 or access to health care, 4 this attenuation was unlikely to be due to artifact.
a change of this magnitude over such a short period of time argues for an environmental, rather than a genetic, explanation of the latitude gradient; as discussed later, this environmental factor may involve changes in patterns of infection or sun exposure, or both. further, the attenuation was probably caused by an increase in ms incidence in the southern united states, because incidence rates in the northern states, based at least on data from the longitudinal study in olmsted county, minnesota, seem to have remained relatively stable. 37 an attenuation of the latitude gradient in europe has also been observed; however, no systematic studies have assessed this gradient within the same population over time, and the attenuation therefore may be due to improved study methodology and case ascertainment, particularly within the united kingdom. 18 the possibility of an infectious cause was considered early in ms history, and numerous viruses and bacteria were, at different times, implicated as likely etiologic agents. the results of early studies, based on microscopic examination of pathologic material and attempts to transmit the disease to animals, often were null or spuriously positive because of contamination and could not be replicated. later, numerous serologic studies were conducted, often demonstrating significantly elevated antibody titers against several viruses in ms patients compared with healthy controls, but these differences were probably an epiphenomenon of the immune activation rather than being of etiologic significance. 41 in part as a consequence of these investigations, many researchers became skeptical about the existence of an infectious agent causing ms, and this skepticism persists today. epidemiologic clues to the hypothetical role of infection in ms are complex and often seem to point in opposite directions. 
on the one hand, results of family studies, including investigations of half-siblings, adopted children, and spouses of individuals with ms, support a strong genetic component as the leading explanation of ms clustering within families and provide little evidence of person-to-person transmission. 42 on the other, there are well-documented, albeit controversial, 43 reports of epidemics of ms, most notably in the faroe islands, 44 that are most easily explained by the introduction and transmission of an infectious agent. to reconcile these findings, it has been postulated that ms is a rare complication of a common infection, with the disease occurring in genetically or otherwise predisposed individuals. in this scenario, the epidemics would be a consequence of the introduction of the ms-causing agent for the first time in remote, previously naïve populations. 45 two hypotheses as to the nature of this infection have been proposed: (1) the responsible microorganism is more common in areas of high ms prevalence (the "prevalence" hypothesis), and (2) the ms-causing agent is ubiquitous and more easily transmitted in areas of low ms prevalence, where infection occurs predominantly in infancy, when it would be less harmful and more likely to confer protective immunity. the latter proposal is called the "poliomyelitis" hypothesis, by analogy with the epidemiology of poliomyelitis before vaccination. 46 the poliomyelitis hypothesis is also consistent with the higher prevalence of ms in communities with better hygiene, 47 in individuals with higher education, 48, 49 and in those with late age at infection with common viruses, 50 as well as the general lack of increase in ms incidence among individuals migrating from low-to high-prevalence areas. 
27 however, the poliomyelitis hypothesis cannot explain the reduced risk of ms among migrants from high- to low-risk areas and, in fact, would predict an increase in ms risk in this circumstance, whereas the prevalence hypothesis is consistent with the observations. failure to identify a specific microbe as the cause of ms, despite evidence that is consistent with some role for infection in at least modulating ms risk, has strengthened support for a third, more general, "hygiene" hypothesis, according to which exposure to multiple infections in childhood primes the immune responses later in life toward a less inflammatory and a less autoimmunogenic profile. 51 the hygiene hypothesis can explain all the features of ms epidemiology that are explained by the original formulation of the poliomyelitis hypothesis. in addition, the protective effect of migration from high- to low-ms areas, which is paradoxical under the poliomyelitis hypothesis, could be explained by increased exposure of migrants to parasitic and other infections in the low-risk area. at the population level, prevalence of ms is positively correlated with high levels of hygiene, as measured, for example, by prevalence of intestinal parasites. 52 the improving hygienic conditions in southern europe in the last few decades could explain the increased prevalence of ms reported in multiple surveys (although whether there was a true increase in ms incidence remains unsettled). 7 it is also interesting that infection with intestinal helminths, which is highly prevalent in developing countries, has been reported to cause an immune deviation with attenuation of t-helper 1 (th1) cellular immune responses and remission of ms.
53 finally, the hygiene hypothesis provides a convincing explanation for the observations that infectious mononucleosis (im) is associated with an increased risk of ms (relative risk [rr] = 2.3; p < .00000001) 54 and that the epidemiology of im is strikingly similar to that of ms (table 4-1). 55 because im is common in individuals who are first infected with ebv in adolescence or adulthood 56 but rare when ebv infection occurs in childhood, it is a strong marker of age at ebv infection, which is itself strongly correlated with socioeconomic development across populations and with socioeconomic status within populations. 57 an exception to this pattern is seen in asia, where ebv infection occurs uniformly early in life and im is thus rare. it is noteworthy that the incidence of ms remains relatively low in asian countries, including japan, despite the fast industrialization and reduction of infectious diseases, 58 although there is evidence that the incidence may be increasing in japan. 16 according to the hygiene hypothesis, the association between im and ms risk does not reflect a causal effect of ebv but rather the indirect manifestation of a common cause; that is, both ms and im are the result of high hygiene and a resulting low burden of infection during childhood. an important prediction of this hypothesis is that ms risk will be high among individuals reared in a highly hygienic environment, even if they do not happen to be infected with ebv later in life, whereas, if ebv has a causal role in ms, individuals who are not infected with ebv would have a low risk of ms. 59 the data on this point are unequivocal: individuals who are not infected with ebv, even though they have the same hygienic upbringing as those with im, have an extremely low risk of ms (odds ratio [or] from meta-analysis = 0.06; p < .00000001) (table 4-2).
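the odds ratios quoted in this section come from case-control 2×2 tables. a generic sketch of how such an or and its 95% confidence interval are computed (woolf/logit method), using invented counts rather than the actual meta-analysis data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """or and 95% ci (woolf/logit method) for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # se of log(or)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# invented counts: ebv-seronegative subjects among cases vs. controls
or_, lower, upper = odds_ratio_ci(a=3, b=297, c=50, d=250)
print(f"or = {or_:.2f} (95% ci {lower:.2f}-{upper:.2f})")
```

an or far below 1 with an upper confidence bound still below 1, as reported for ebv-seronegative individuals, indicates a strong and statistically robust inverse association.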
the contrast could not be sharper or more consistent: ms risk among individuals who are not infected with ebv is at least 10-fold lower than that of individuals who are ebv-positive, and 20-fold lower than that of individuals with a history of im. 59 because studies in pediatric ms 60, 61 rule out a common genetic resistance to ms and ebv infection, 59 we can conclude either that ebv itself or some other factor closely related to ebv is a strong causal risk factor for ms or that ms itself strongly predisposes to ebv infection. temporality is the only truly necessary criterion for causality. the association between ebv infection and ms is strong and consistent across multiple studies in different populations, and there is to some extent a biologic gradient (higher risk associated with severity of infection, as indicated by history of im). until recently, all studies on ms and infection used a cross-sectional design and could not completely rule out the possibility that ebv infection was a consequence rather than a cause of ms. however, the results of four longitudinal serologic studies have now been published (table 4-3). [62-65] the most consistent finding across these studies is that, among individuals who will develop ms, there is an elevation of serum antibodies against the ebv nuclear antigen 1 (ebna1) that precedes the onset of ms symptoms by many years. the presence of anti-ebna1 antibodies is a marker of past infection with ebv, because titers typically rise only weeks after the acute infection. further, there is no evidence in clinical studies of acute primary ebv infection in individuals with ms. 66 taken together, these results indicate that ms is a consequence rather than a cause of ebv infection. until recently, ebv had not been found in ms lesions, 67, 68 and therefore the link between ebv and ms was postulated to be mediated by indirect mechanisms.
the leading hypothesis was that the immune response to ebv infection in genetically susceptible individuals cross-reacts with myelin antigens (molecular mimicry). the discovery that ms patients have an increased frequency and broadened specificity of cd4-positive t cells recognizing ebna1 69 and the identification of two ebv peptides (one of which is from ebna1) as targets of the immune response in the cerebrospinal fluid of ms patients 70 provided support for the molecular mimicry theory. other proposed hypotheses included the activation of superantigens, 71 an increased expression of alpha b-crystallin, 72 and infection of autoreactive b lymphocytes. 73 however, in a recent, rigorous pathologic study, 74 large numbers of ebv-infected b cells were found in the brains of most ms patients. these cells were more numerous in areas with active inflammatory infiltrates, where cytotoxic cd8-positive t cells displaying an activated phenotype were seen contiguous to the ebv-infected cells. alone, these pathologic findings provide only suggestive evidence for a causal role of ebv in ms, because the infiltration of ebv-infected b cells could be secondary to the inflammatory process that is the hallmark of ms, but their convergence with the epidemiologic evidence described earlier 59 is so striking that noncausal explanations become improbable. however, independent replication of these findings is needed before any conclusion can be drawn. the strong increase in ms risk after ebv infection and (if confirmed) the presence of ebv in ms lesions suggest that antiviral drugs or a vaccine against ebv could contribute to ms treatment and prevention. although antiviral drugs have been tried in the past for ms treatment with borderline results, 75-77 none of the treatment regimens used was sufficiently effective against latent ebv infection. several aspects of ms epidemiology cannot be explained by ebv infection, indicating that other factors must contribute.
59 genes are clearly important, and it is of interest that the association between anti-ebna1 titers and ms risk has been found in both hla-drb1*1501-positive and hla-drb1*1501-negative individuals. 78 variations in ebv strains could also play a role, although evidence in support of this hypothesis remains limited. 79, 80 many other infectious agents have been hypothesized to be related to ms, mostly because of pathologic studies or their role in animal models. recent candidates include chlamydia pneumoniae, 81-84 human herpesvirus 6, 85-87 retroviruses, 88, 89 and coronaviruses, 90 but there are no convincing epidemiologic studies linking these infections to ms risk. noninfectious factors may also be important, and prominent among them are vitamin d and cigarette smoking. one of the strongest correlates of latitude is the duration and intensity of sunlight, which in ecologic studies is inversely correlated with ms prevalence. [92-94] because exposure to sunlight is for most people the major source of vitamin d, 95 average levels of vitamin d also display a strong latitude gradient. ultraviolet b (uv-b) radiation (290 to 320 nm) converts cutaneous 7-dehydrocholesterol to previtamin d3. previtamin d3 spontaneously isomerizes to vitamin d3, which is then hydroxylated to 25(oh)d3 (25-hydroxyvitamin d3), the main circulating form of the vitamin, and then to 1,25(oh)2d3 (1,25-dihydroxyvitamin d3), the biologically active hormone. 95 however, during the winter months at latitudes greater than 42° n (e.g., boston, ma), even prolonged sun exposure is insufficient to generate vitamin d, 96 and levels decline. 97, 98 use of supplements or high consumption of fatty fish (a good source of vitamin d) or vitamin d-fortified foods (mostly milk in the united states) may partially compensate for this decline, but few people consume large enough amounts of vitamin d, and seasonal deficiency is common.
a link between vitamin d deficiency and ms was proposed more than 30 years ago as a possible explanation of the latitude gradient and of the lower prevalence of ms in fishing communities with high levels of fish intake 99; however, the immunomodulatory effects of vitamin d were not known, and the hypothesis did not generate much interest at the time. after the discovery that the vitamin d receptor is expressed in several cells in the immune system and is a potent immunomodulator, 100 a series of experiments revealed a protective role of 1,25(oh)2d3 in several autoimmune conditions and in transplant rejection. 100 the effects in experimental autoimmune encephalomyelitis, an animal model of ms, were particularly striking: injection of 1,25(oh)2d3 was found to completely prevent the clinical and pathologic signs of disease, 101, 102 whereas vitamin d deficiency accelerated the disease onset. 102, 103 with vitamin d deficiency becoming a biologically plausible risk factor for ms, several epidemiologic studies were conducted to determine whether exposure to sunlight or vitamin d intake is associated with ms risk. the main results of these studies are shown in table 4-4, and their strengths and limitations are discussed in the following paragraphs. as mentioned earlier, the results of ecologic studies support an inverse association between sunlight exposure and ms risk. however, because people living in the same area share many characteristics other than the level of sunlight, the consensus is that evidence from these studies is weak. in an exploratory investigation based on death certificates, working outdoors was associated with a significantly lower ms mortality in areas of high, but not low, sunlight. 104 in a separate study in the united kingdom, the skin cancer rate, a marker of sunlight exposure, was found to be about 50% lower than expected among individuals with ms (p = .03).
106 although the results of these investigations are consistent with a protective effect of uv light exposure, they could also represent "reverse causation" (i.e., individuals with ms could reduce their exposure to sunlight after disease onset). the results of case-control studies comparing history of sun exposure in childhood (presumed to be a critical period, mostly from the results of studies in migrants) between ms cases and controls have been conflicting. the results of one study were contrary to a protective effect of vitamin d, 106 and no association between sun exposure in childhood and ms risk was found in another. 107 in contrast, results consistent with a protective effect of sun exposure were reported in a study in tasmania in which information on time spent in the sun was complemented by measurement of skin actinic damage, a biomarker of uv light exposure, 108 as well as an investigation in norway 109 and a study of monozygotic twins in the united states. 110 in the norway study, an inverse association was also found between consumption of fish and ms risk. selection and recall biases are potential problems in case-control studies, but recall bias cannot explain the inverse association observed in tasmania with actinic damage, 108 and selection bias is unlikely in the twin study. the strongest evidence relating vitamin d levels to ms risk has been provided by two longitudinal studies, one based on assessment of dietary vitamin d intake, and one on serum levels of 25(oh)d. the relation between vitamin d intake and ms risk was studied in more than 200,000 women in the nurses' health study and nurses' health study ii cohorts. 111 dietary vitamin d intake was assessed from comprehensive and previously validated semiquantitative food frequency questionnaires administered every 4 years during the follow-up of the cohorts. 
112, 113 total vitamin d intake at baseline was inversely associated with risk of ms: the age-adjusted pooled relative risk (rr) comparing the highest with the lowest quintile of consumption was 0.67 (95% confidence interval [ci], 0.40 to 1.12; p for trend = .03). intake of 400 iu/day of vitamin d from supplements only was associated with a 40% lower risk of ms. these rrs did not materially change after further adjustment for pack-years of smoking and latitude at birth. confounding by other micronutrients cannot be excluded, but adjustments for them in the analyses did not change the results. because dietary vitamin d is only one component contributing to total vitamin d status (the other being sun exposure), a determination of whether serum levels of vitamin d are associated with ms risk in healthy individuals would strengthen the evidence in favor of a causal role for vitamin d. the serum level of 25(oh)d is a marker of vitamin d status and bioavailability; therefore, if vitamin d is protective, high serum levels of 25(oh)d would be expected to predict a lower risk of ms in healthy individuals. this question was recently addressed in a collaborative, prospective case-control study using the department of defense serum repository (dodsr). 114 the study included 257 military personnel with confirmed ms and at least two serum samples collected before the onset of ms symptoms. risk of ms was 51% lower among white individuals with 25(oh)d levels of 100 nmol/l or higher, compared with those with levels lower than 75 nmol/l, and the reduction in ms risk associated with 25(oh)d levels ≥ 100 nmol/l compared with levels < 100 nmol/l was considerably stronger before the age of 20 years (16 to 19 years) than at ages 20 or older. an important question concerning vitamin d and ms is the age intervals during which vitamin d may be important.
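The relative risks quoted above follow the standard epidemiologic recipe: the ratio of event rates between exposed and unexposed groups, with a 95% confidence interval computed on the log scale. A minimal sketch in Python, using hypothetical counts rather than any cohort's actual data:

```python
import math

def relative_risk(a, n1, b, n0):
    """Relative risk comparing an exposed group (a events out of n1)
    with an unexposed group (b events out of n0), with a 95% Wald
    confidence interval computed on the log scale."""
    rr = (a / n1) / (b / n0)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# hypothetical counts: 20 ms cases among 40,000 women in the highest
# intake quintile vs. 30 cases among 40,000 in the lowest quintile
rr, lo, hi = relative_risk(20, 40_000, 30, 40_000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The counts and the resulting interval are illustrative only; the cohort analyses cited in the text additionally adjust for covariates such as smoking and latitude at birth, which this sketch does not attempt.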
the results of migration studies suggest that more pronounced changes in ms risk are likely to occur among individuals who migrate in childhood. the age of 15 years, chosen as an arbitrary cutoff point in early studies, is usually quoted in the literature, but the reality is that data are insufficient to identify a meaningful threshold above which migration would not alter ms risk, 27 and in at least one study a reduction in risk was also observed among individuals who migrated as adults. 115 the results of the case-control study in tasmania suggest that exposure to sunlight is mostly protective in childhood. 108 further, vitamin d exposure in utero has been proposed as a possible explanation for the peak in ms incidence among individuals born in may (whose mothers were not pregnant during the summer, when uv light levels are higher) and the dip among those born in november, according to recent data from canada and sweden. 116 on the other hand, the results of the longitudinal studies support a protective effect of vitamin d also later in life. both the lower risk of ms among women taking vitamin d supplements 111 and the lower risk among men and women with higher levels of 25(oh)d 114 would be difficult to explain by a protective effect of vitamin d solely in utero or during childhood. therefore, it seems likely that, if vitamin d effectively protects against ms, levels during early adult life are also important. overall, the epidemiologic evidence of a causal association between vitamin d and ms is strong but not compelling, mainly because there are few studies based on prospective measurement of levels of exposure to sunlight, vitamin d intake, or serum 25(oh)d concentration. however, the public health implications of a possible causal association are enormous. if vitamin d reduces the risk of ms, supplementation in adolescents and young adults could be used effectively for prevention.
based on studies among individuals with low sun exposure, supplements providing between 1000 and 4000 iu/day of vitamin d would increase serum 25(oh)d to the optimal levels. 117-120 there is an urgent need to conduct further longitudinal studies, preferably a large, randomized controlled clinical trial assessing whether vitamin d supplementation in the general population prevents ms. the trial would have to be very large, because ms is a rare disease, but the sample size could be reduced by oversampling individuals who are at high risk, such as those with first-degree relatives who have ms. alternative study designs might include national or multinational studies based on randomization of school districts or other suitable units. cigarette smoking was found to increase the risk of ms in some 121, 122 but not all 123, 124 case-control studies. a cross-sectional survey of the general population in hordaland county, norway, found an increased risk of ms in ever-smokers compared with never-smokers (rr = 1.8; 95% ci, 1.1 to 2.9). 125 four prospective studies on smoking and ms have been conducted. among 17,000 british women in the oxford family planning association study, those who smoked 15 or more cigarettes per day had an 80% increased risk of ms compared with never-smokers (rr = 1.8; 95% ci, 0.8 to 3.6). 126 a total of 46,000 women from across the united kingdom were enrolled in the royal college of general practitioners' oral contraception study, which found that women smoking 15 or more cigarettes per day had a 40% increased risk of ms (rr = 1.4; 95% ci, 0.9 to 2.2), compared with never-smokers. 127 the nurses' health study and nurses' health study ii cohorts included more than 200,000 u.s. women; those who smoked 25 or more pack-years had a 70% increased risk (rr = 1.7; 95% ci, 1.2 to 2.4; p < .01) compared with never-smokers.
128 in a prospective case-control study in the general practice research database, which included both men and women, ever-smokers had a 30% increased risk of ms, compared with never-smokers (rr = 1.3; 95% ci, 1.0 to 1.7). 129 the suggestion of an increased risk of ms among smokers was consistent across all four studies, and pooled estimates of the relative risk were highly statistically significant when never-smokers were compared with past and current smokers (fig. 4-4a) or with moderate and heavy smokers (see fig. 4-4b). additional support for a role of smoking includes a twofold increase in risk of pediatric ms among children exposed to parental smoking 130 and an increased risk of transition to secondary progressive ms among individuals with relapsing-remitting ms; 129 however, the latter finding was not confirmed in a recent investigation. 131 biologic mechanisms for smoking and increased risk of ms could be neurotoxic, 132 immunomodulatory, 133, 134 or vascular (i.e., increased permeability of the blood-brain barrier), or they could involve increased frequency and duration of respiratory infections, 135 which may then contribute to increased ms risk. smoking also appears to increase the risk of other autoimmune diseases, including rheumatoid arthritis 136-140 and systemic lupus erythematosus, 141 arguing for a more general effect of cigarette smoking on autoimmunity. although several foods or nutrients were found to be related to ms risk in ecologic or case-control studies, the results overall were inconsistent and unconvincing. in ecologic studies, positive correlations were found between ms and intake of animal fat 142-144 and saturated fat, 144 as well as consumption of meat, 145 milk, and butter, 143, 146 and inverse correlations were found with intake of fat from fish 143, 145 and nuts 143 (sources of polyunsaturated fat).
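The pooled estimates across the four prospective smoking studies can be approximated with standard inverse-variance (fixed-effect) meta-analysis, recovering each study's standard error from the width of its reported 95% confidence interval. This is a sketch of the general method, not a reproduction of the actual analysis behind fig. 4-4:

```python
import math

def pooled_rr(estimates):
    """Fixed-effect (inverse-variance) pooling of relative risks.
    `estimates` is a list of (rr, ci_lo, ci_hi) tuples; the standard
    error of each log-RR is recovered from the 95% CI width."""
    num = den = 0.0
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1 / se ** 2          # weight = inverse variance of log-RR
        num += w * math.log(rr)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# the four prospective smoking studies cited in the text
studies = [(1.8, 0.8, 3.6), (1.4, 0.9, 2.2),
           (1.7, 1.2, 2.4), (1.3, 1.0, 1.7)]
rr, lo, hi = pooled_rr(studies)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these four published point estimates the pooled RR comes out above 1 with a lower confidence bound that excludes 1, consistent with the text's statement that the pooled estimates were highly statistically significant; the exact figures depend on rounding and on which contrasts are pooled.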
an increased risk of ms with increasing animal or saturated fat intake and a protective effect of increasing polyunsaturated fat intake were also reported in a case-control study, 147 but otherwise the results of case-control studies have largely not supported an association between increased ms risk and milk or meat consumption, 121, 147-150 or between decreased risk and consumption of sources of polyunsaturated fat such as fish or nuts. 121, 147 however, in a recent study in norway, 109 fish consumption 3 or more times per week among individuals living at latitudes between 66° and 71° n was inversely related to ms risk. other results have included an inverse association of risk with intake of vitamin c and juice, 147 but no association with other antioxidant vitamins 147 or with fruits and vegetables 121, 123, 147, 151 has been reported. it is important to note that ecologic studies are prone to confounding and in general provide only very weak evidence of the potential effects of diet on disease risk. retrospective case-control studies are also prone to bias due to both control selection and differential recall. the latter effect is particularly problematic, because even a modest difference in diet recall between cases and controls can cause a large bias in relative risk estimates. 152 this problem is compounded in ms by changes in diet that may occur in the early clinical or preclinical phases of the disease. therefore, although these studies have been important in drawing attention to several aspects of diet as potentially important risk factors for ms, their results, whether in favor of or against a hypothetical association, should be interpreted extremely cautiously. understanding of the relation between diet and ms will require the conduct of large longitudinal investigations, with repeated assessment of diet using rigorous and validated methods and possibly measurements of biomarkers of nutrient intakes.
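The sensitivity of relative-risk estimates to recall bias can be made concrete with a small calculation: under a true null association, a difference in how completely cases and controls recall a past exposure is enough, by itself, to produce a spurious odds ratio. The exposure prevalence and recall-sensitivity figures below are hypothetical:

```python
def observed_or(p_exp, n_cases, n_controls, sens_cases, sens_controls,
                spec=1.0):
    """Expected odds ratio under a true null association when cases and
    controls report a past exposure with different sensitivity.
    p_exp is the true exposure prevalence, identical in both groups."""
    def reported(n, sens):
        exposed = n * p_exp
        unexposed = n - exposed
        # reported-exposed = true exposed recalled + false positives
        a = exposed * sens + unexposed * (1 - spec)
        return a, n - a
    a, b = reported(n_cases, sens_cases)      # cases: exposed, unexposed
    c, d = reported(n_controls, sens_controls)  # controls
    return (a / b) / (c / d)

# true OR is 1.0, but cases recall the exposure with 90% sensitivity
# versus 80% in controls (hypothetical figures)
print(round(observed_or(0.3, 500, 500, 0.90, 0.80), 2))
```

With these made-up numbers the modest 10-point recall difference inflates the observed odds ratio to roughly 1.17 despite a true null, which is the kind of distortion the text warns about.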
so far, the only prospective studies of diet and ms were those conducted among women in the two nurses' health study cohorts. in this population, neither animal fat nor saturated fat was associated with ms risk, but there was a suggestion of an inverse association with intake of the n-3 polyunsaturated fat linolenic acid. 153 there were also no significant associations between intakes of dairy products, fish, meat, 153 vitamins c or e, carotenoids, or fruits and vegetables and ms risk. 154 however, participants in these studies were already 25 to 55 years of age at the time of recruitment, and therefore they shed little light on the possible effect of diet earlier in life on ms risk. studies have also examined whether intake of polyunsaturated fats affects ms progression. n-3 polyunsaturated fat supplementation in doses ranging from 2.85 to 3.90 g/day administered for periods of 6 to 24 months did not have significant effects on disability levels in two randomized controlled trials that included a total of 339 patients with relapsing-remitting ms, 155, 156 although trends were in favor of the supplemented groups in both studies. results of three randomized controlled trials examining the effects of n-6 polyunsaturated fat supplementation (17 to 20 g/day for 24 to 30 months) on ms progression, including a total of 279 patients with relapsing-remitting ms, 157-159 and a meta-analysis of these studies 160 suggested that supplementation may reduce the severity and duration of relapses. in summary, there is no compelling evidence that dietary factors other than vitamin d play a causal role in ms, but neither can such a role be excluded, particularly for diet during adolescence or childhood, which may be important periods in the etiology of ms.
estrogen has been hypothesized to protect against ms, because in high levels it appears to promote the non-inflammatory type 2 immune response, rather than the pro-inflammatory type 1 response predominately seen in ms, and because during pregnancy, when estrogen levels are high, women with ms experience fewer relapses than during the puerperium. 161 in prospective studies, 126, 127, 162, 163 neither oral contraceptive use, parity, nor age at first birth 162 was associated with ms risk. a decreased risk of ms during pregnancy followed by an increased risk during the first 6 months after delivery was shown in a study based on a general practice database in the united kingdom. 163 in the same study, recent use of oral contraceptives was also associated with a reduced risk. 163 collectively, these studies suggest that short-term exposure to estrogen may be protective against ms, but that this protection is transient. concerns that the hepatitis b vaccine may increase the risk of ms were raised after widespread administration of the vaccine in france, 164 but the results of most studies have not supported a causal association. studies in the united states conducted among subjects included in a health care database, 165 among nurses, 166 and among participants in three health maintenance organizations 167 found no association between hepatitis b vaccination and risk of ms. further, in studies of children and adolescents, no association was found between hepatitis b vaccination and ms risk 168 or risk of conversion to ms among children with a first demyelinating event. 170 however, a case-control study conducted in the general practice research database in the united kingdom did find a threefold increased risk associated with receipt of the vaccine within 3 years before ms onset, 170 and a french case-control study reported a nonsignificant increased risk of ms among individuals with clinically isolated syndrome after vaccination. 
171 among individuals with ms, the vaccine does not appear to increase the risk of relapses. 172 overall, there is no convincing evidence that hepatitis b vaccination increases ms risk. other environmental factors have been associated with ms, but the available evidence is sparse, and the relevance of these factors to ms etiology remains uncertain. an increased risk of ms has been reported in relation to exposure to organic solvents, 173-177 physical trauma, 178 and psychological stress from the loss of a child (bereavement), 179 whereas a decreased risk has been observed for use of penicillin 180 and antihistamines, 181 high levels of serum uric acid, 182-184 and tetanus toxoid vaccination. 185
references
epidemiology: principles and methods
the contribution of changes in the prevalence of prone sleeping position to the decline in sudden infant death syndrome in tasmania
geographic variation of ms incidence in two prospective studies of us women
multiple sclerosis in us veterans of the vietnam era and later military service: race, sex, and geography
sex ratio of multiple sclerosis in canada: a longitudinal study
the danish multiple sclerosis registry: a 50-year follow-up
the epidemiology of multiple sclerosis in europe
epidemiology of multiple sclerosis in u.s. veterans: i. race, sex, and geographic distribution
epidemiology of multiple sclerosis: incidence and prevalence rates in denmark 1948-64 based on the danish multiple sclerosis registry
the frequency and geographic distribution of multiple sclerosis as indicated by mortality statistics and morbidity surveys in the united states and canada
multiple sclerosis in new orleans, louisiana, and winnipeg, manitoba, canada: follow-up of a previous survey in new orleans, and comparison between the patient populations in the two communities
latitude, migration, and the prevalence of multiple sclerosis
the incidence and prevalence of reported multiple sclerosis
multiple sclerosis in australia and new zealand: are the determinants genetic or environmental?
ms epidemiology world wide: one view of current status
multiple sclerosis in the japanese population
epidemiologic evidence for multiple sclerosis as an infection
the distribution of multiple sclerosis
clinical and epidemiological profile of multiple sclerosis in a reference center in the state of bahia, brazil
multiple sclerosis in latin america
multiple sclerosis in kwazulu natal, south africa: an epidemiological and clinical study
prevalence of multiple sclerosis in 19 texas counties
incidence and prevalence of multiple sclerosis in olmsted county
multiple sclerosis in isfahan, iran
multiple sclerosis
the dissemination of multiple sclerosis: a viking saga? a historical essay
migrant studies in multiple sclerosis
the age-range of risk of developing multiple sclerosis: evidence from a migrant population in australia
multiple sclerosis and age at migration
epidemiology of multiple sclerosis in us veterans: iii. migration and the risk of ms
multiple sclerosis among immigrants in greater london
motor neurone disease and multiple sclerosis among immigrants to britain
motor neuron disease and multiple sclerosis among immigrants to england from the indian subcontinent, the caribbean, and east and west africa
multiple sclerosis among the united kingdom-born children of immigrants from the west indies
multiple sclerosis among united kingdom-born children of immigrants from the indian subcontinent, africa and the west indies
role of return migration in the emergence of multiple sclerosis in the french west indies
incidence and prevalence of multiple sclerosis in olmsted county
multiple sclerosis prevalence among sardinians: further evidence against the latitude gradient theory
multiple sclerosis in malta in 1999: an update
hla-drb1 and multiple sclerosis in malta
the possible viral etiology of multiple sclerosis
genetics of multiple sclerosis
analysis of the 'epidemic' of multiple sclerosis in the faroe islands
multiple sclerosis in the faroe islands: i. clinical and epidemiological features
multiple sclerosis in the faroe islands: an epitome
multiple sclerosis and poliomyelitis
epidemiological study of multiple sclerosis in israel: ii. multiple sclerosis and level of sanitation
multiple sclerosis in australia: socioeconomic factors
epidemiology of multiple sclerosis in us veterans: vii. risk factors for ms
part iii: selected reviews. common childhood and adolescent infections and multiple sclerosis
the effect of infections on susceptibility to autoimmune and allergic diseases
the hygiene hypothesis and multiple sclerosis
association between parasite infection and immune responses in multiple sclerosis
infectious mononucleosis and risk for multiple sclerosis: a meta-analysis
multiple sclerosis and epstein-barr virus
epstein-barr virus
epstein-barr virus
the prevalence and clinical characteristics of ms in northern japan
environmental risk factors for multiple sclerosis: part i. the role of infection
epstein-barr virus in pediatric multiple sclerosis
high seroprevalence of epstein-barr virus in children with multiple sclerosis
epstein-barr virus antibodies and risk of multiple sclerosis: a prospective study
an altered immune response to epstein-barr virus in multiple sclerosis: a prospective study
temporal relationship between elevation of epstein barr virus antibody titers and initial onset of neurological symptoms in multiple sclerosis
epstein-barr virus and multiple sclerosis: evidence of association from a prospective study with long-term follow-up
association between clinical disease activity and epstein-barr virus reactivation in ms
absence of epstein-barr virus rna in multiple sclerosis as assessed by in situ hybridisation
is epstein-barr virus present in the cns of patients with ms?
increased frequency and broadened specificity of latent ebv nuclear antigen-1-specific t cells in multiple sclerosis
identification of epstein-barr virus proteins as putative targets of the immune response in multiple sclerosis
an epstein-barr virus-associated superantigen
ebv-induced expression and hla-dr-restricted presentation by human b cells of alpha b-crystallin, a candidate autoantigen in multiple sclerosis
infection of autoreactive b lymphocytes with ebv, causing chronic autoimmune diseases
dysregulated epstein-barr virus infection in the multiple sclerosis brain
acyclovir treatment of relapsing-remitting multiple sclerosis: a randomized, placebo-controlled, double-blind study
a randomized, double-blind, placebo-controlled mri study of anti-herpes virus therapy in ms
a randomized clinical trial of valacyclovir in multiple sclerosis
integrating risk factors: hla-drb1*1501 and epstein-barr virus in multiple sclerosis
a single subtype of epstein-barr virus in members of multiple sclerosis clusters
epstein-barr virus genotypes in multiple sclerosis
multiple sclerosis associated with chlamydia pneumoniae infection of the cns
intrathecal antibody production against chlamydia pneumoniae in multiple sclerosis is part of a polyspecific immune response
infection with chlamydia pneumoniae and risk of multiple sclerosis
ioannidis a: chlamydia pneumoniae infection and the risk of multiple sclerosis: a meta-analysis
human herpesvirus 6 and multiple sclerosis: systemic active infections in patients with early disease
intrathecal antibody (igg) production against human herpesvirus type 6 occurs in about 20% of multiple sclerosis patients and might be linked to a polyspecific b-cell response
human herpesvirus 6 and multiple sclerosis: a one-year follow-up study
a putative new retrovirus associated with multiple sclerosis and the possible involvement of the epstein-barr virus in this disease
the danish multiple sclerosis registry: history, data collection and validity
human coronavirus oc43 infection induces chronic encephalitis leading to disabilities in balb/c mice
some comments on the relationship of the distribution of multiple sclerosis to latitude, solar radiation, and other variables
the prevalence of multiple sclerosis in australia
geographical considerations in multiple sclerosis
regional variation in multiple sclerosis prevalence in australia and its association with ambient ultraviolet radiation
sunlight and vitamin d for bone health and prevention of autoimmune diseases, cancers, and cardiovascular disease
influence of season and latitude on the cutaneous synthesis of vitamin d3: exposure to winter sunlight in boston and edmonton will not promote vitamin d3 synthesis in human skin
safety and efficacy of increasing wintertime vitamin d and calcium intake by milk fortification
serum 25-hydroxyvitamin d concentrations of new zealanders aged 15 years and older
multiple sclerosis: vitamin d and calcium as environmental determinants of prevalence: a viewpoint. part 1: sunlight, dietary factors and epidemiology
the immunological functions of the vitamin d endocrine system
1,25-dihydroxyvitamin d3 prevents the in vivo induction of murine experimental autoimmune encephalomyelitis
1,25-dihydroxyvitamin d3 reversibly blocks the progression of relapsing encephalomyelitis: a model of multiple sclerosis
treatment of experimental autoimmune encephalomyelitis in rat by 1,25-dihydroxyvitamin d(3) leads to early effects within the central nervous system
mortality from multiple sclerosis and exposure to residential and occupational solar radiation: a case-control study based on death certificates
skin cancer in people with multiple sclerosis: a record linkage study
epidemiologic study of multiple sclerosis in israel: i. an overall review of methods and findings
epidemiological study of multiple sclerosis in western poland
past exposure to sun, skin phenotype and risk of multiple sclerosis: a case-control study
outdoor activities and diet in childhood and adolescence relate to ms risk above the arctic circle
childhood sun exposure influences risk of multiple sclerosis in monozygotic twins
vitamin d intake and incidence of multiple sclerosis
the use of a self-administered questionnaire to assess diet four years in the past
food-based validation of a dietary questionnaire: the effects of week-to-week variation in food consumption
serum 25-hydroxyvitamin d levels and risk of multiple sclerosis
the age-range of risk of developing multiple sclerosis: evidence from a migrant population in australia
timing of birth and risk of multiple sclerosis: population based study
circulating 25-hydroxyvitamin d levels indicative of vitamin d sufficiency: implications for establishing a new effective dietary intake recommendation for vitamin d
estimates of optimal vitamin d status
vitamin d supplementation, 25-hydroxyvitamin d concentrations, and safety
human serum 25-hydroxycholecalciferol response to extended oral dosing with cholecalciferol
epidemiologic study of multiple sclerosis in israel
a case-control study of the association between socio-demographic, lifestyle and medical history factors and multiple sclerosis
how multiple sclerosis is related to animal illness, stress and diabetes
environmental risk factors and multiple sclerosis: a community-based, case-control study in the province of ferrara
smoking is a risk factor for multiple sclerosis
oral contraceptives and reproductive factors in multiple sclerosis incidence
the influence of oral contraceptives on the risk of multiple sclerosis
cigarette smoking and incidence of multiple sclerosis
cigarette smoking and the progression of multiple sclerosis
parental smoking at home and the risk of childhood-onset multiple sclerosis in children
cigarette smoking and progression in multiple sclerosis
neuropathological changes in chronic cyanide intoxication
immunomodulatory effects of cigarette smoke
effects of tobacco glycoprotein (tgp) on the immune system: ii. tgp stimulates the proliferation of human t cells and the differentiation of human b cells into ig secreting cells
the epidemiology of acute respiratory infections in children and adults: a global perspective
oral contraceptives, cigarette smoking and other factors in relation to arthritis
reproductive factors, smoking, and the risk for rheumatoid arthritis
smoking, obesity, alcohol consumption, and the risk of rheumatoid arthritis
cigarette smoking increases the risk of rheumatoid arthritis: results from a nationwide study of disease-discordant twins
smoking and risk of rheumatoid arthritis
smoking history, alcohol consumption, and systemic lupus erythematosus: a case-control study
multiple sclerosis and nutrition
diet and the geographical distribution of multiple sclerosis
nutrition, latitude, and multiple sclerosis mortality: an ecologic study
the risk of multiple sclerosis in the u.s.a. in relation to sociogeographic features: a factor-analytic study
correlation between milk and dairy product consumption and multiple sclerosis prevalence: a worldwide study
nutritional factors in the aetiology of multiple sclerosis: a case-control study in montreal, canada
studies on multiple sclerosis in winnipeg, manitoba, and new orleans, louisiana: ii. a controlled investigation of factors in the life history of the winnipeg patients
epidemiological study of multiple sclerosis in western poland
milk consumption and multiple sclerosis-an etiological hypothesis
risk factors in multiple sclerosis: a population-based case-control study in hautes-pyrenees
nutritional epidemiology
dietary fat in relation to risk of multiple sclerosis among two large cohorts of women
intakes of carotenoids, vitamin c, and vitamin e and ms risk among two large cohorts of women
a double-blind controlled trial of long chain n-3 polyunsaturated fatty acids in the treatment of multiple sclerosis
low fat dietary intervention with omega-3 fatty acid supplementation in multiple sclerosis patients
double-blind trial of linoleate supplementation of the diet in multiple sclerosis
polyunsaturated fatty acids in treatment of acute remitting multiple sclerosis
linoleic acid in multiple sclerosis: failure to show any therapeutic benefit
linoleic acid and multiple sclerosis: a reanalysis of three double-blind trials
rate of pregnancy-related relapse in multiple sclerosis: pregnancy in multiple sclerosis group
oral contraceptives and the incidence of multiple sclerosis
recent use of oral contraceptives and the risk of multiple sclerosis
a shadow falls on hepatitis b vaccination effort
no increase in demyelinating diseases after hepatitis b vaccination
hepatitis b vaccination and the risk of multiple sclerosis
vaccinations and risk of central nervous system demyelinating diseases in adults
school-based hepatitis b vaccination programme and adolescent multiple sclerosis
hepatitis b vaccine and risk of relapse after a first childhood episode of cns inflammatory demyelination
recombinant hepatitis b vaccine and the risk of multiple sclerosis: a prospective study
hepatitis b vaccination and first central nervous system demyelinating event: a case-control study
vaccinations and the risk of relapse in multiple sclerosis. vaccines in multiple sclerosis study group
organic solvents and multiple sclerosis: a synthesis of the current evidence
exposure to organic solvents and multiple sclerosis
multiple sclerosis and organic solvents
organic solvents and the risk of multiple sclerosis
the risk for multiple sclerosis in female nurse anaesthetists: a register based study
the relationship of ms to physical trauma and psychological stress: report of the therapeutics and technology assessment subcommittee of the american academy of neurology
the risk of multiple sclerosis in bereaved parents: a nationwide cohort study in denmark
antibiotic use and risk of multiple sclerosis
allergy, histamine 1 receptor blockers, and the risk of multiple sclerosis
uric acid levels in sera from patients with multiple sclerosis
serum uric acid and multiple sclerosis
serum uric acid levels of patients with multiple sclerosis and other neurological diseases
tetanus vaccination and risk of multiple sclerosis: a systematic review
epstein-barr virus antibodies in multiple sclerosis
epstein-barr virus infection and antibody synthesis in patients with multiple sclerosis
epstein-barr nuclear antigen and viral capsid antigen antibody titers in multiple sclerosis
increased prevalence and titer of epstein-barr virus antibodies in patients with multiple sclerosis
viral antibody titers: comparison in patients with multiple sclerosis and rheumatoid arthritis
the italian cooperative multiple sclerosis case-control study: preliminary results on viral antibodies
the implications of epstein-barr virus in multiple sclerosis: a review
altered antibody pattern to epstein-barr virus but not to other herpesviruses in multiple sclerosis: a population based case-control study from western norway
altered prevalence and reactivity of anti-epstein-barr virus antibodies in patients with multiple sclerosis
a role of late epstein-barr virus infection in multiple sclerosis
exposure to infant siblings during early life and risk of multiple sclerosis
key: cord-027950-4xwcb5j7 authors: bachman, thomas e.; iyer, narayan p.; newth, christopher j. l.; ross, patrick a.; khemani, robinder g. title: thresholds for oximetry alarms and target range in the nicu: an observational assessment based on likely oxygen tension and maturity date: 2020-06-27 journal: bmc pediatr doi: 10.1186/s12887-020-02225-3 sha: doc_id: 27950 cord_uid: 4xwcb5j7
background: continuous monitoring of spo(2) in the neonatal icu is the standard of care. changes in spo(2) exposure have been shown to markedly impact outcome, but limiting extreme episodes is an arduous task. methods: this is a retrospective observational study intended to describe the relative chance of normoxemia, and risks of hypoxemia and hyperoxemia at relevant spo(2) levels in the neonatal icu. the data, paired spo(2)-pao(2) and post-menstrual age, are from a single tertiary care unit. they reflect all infants receiving supplemental oxygen and mechanical ventilation during a 3-year period. the primary measures were the chance of normoxemia (pao(2) 50–80 mmhg), risks of severe hypoxemia (pao(2) ≤ 40 mmhg), and of severe hyperoxemia (pao(2) ≥ 100 mmhg) at relevant spo(2) levels. results: neonates were categorized by postmenstrual age: < 33 (n = 155), 33–36 (n = 192) and > 36 (n = 1031) weeks. from these infants, 26,162 spo(2)-pao(2) pairs were evaluated. the post-menstrual weeks (median and iqr) of the three groups were: 26 (24–28) n = 2603; 34 (33–35) n = 2501; and 38 (37–39) n = 21,058.
the chance of normoxemia (65, 95%-ci 64–67%) was similar across the spo(2) range of 88–95%, and independent of pma. the increasing risk of severe hypoxemia became marked at a spo(2) of 85% (25, 95%-ci 21–29%), and was independent of pma. the risk of severe hyperoxemia was dependent on pma. for infants < 33 weeks it was marked at 98% spo(2) (25, 95%-ci 18–33%), for infants 33–36 weeks at 97% spo(2) (20, 95%-ci 14–25%) and for those > 36 weeks at 96% spo(2) (20, 95%-ci 17–22%). conclusions: the risk of hyperoxemia and hypoxemia increases exponentially as spo(2) moves towards extremes. postmenstrual age influences the threshold at which the risk of hyperoxemia became pronounced, but not the thresholds of hypoxemia or normoxemia. the thresholds at which a marked change in the risk of hyperoxemia and hypoxemia occur can be used to guide the setting of alarm thresholds. optimal management of neonatal oxygen saturation must take into account concerns of alarm fatigue, staffing levels, and fio(2) titration practices. shifts in spo 2 exposure have a profound impact on neonatal outcomes. control of exposure is associated with the selection of a desired target range, selection of alarm limits as well as nursing compliance with good practices. manual titration of fio 2 to address unstable spo 2 is an arduous task. infants in the nicu typically spend only about half the time in the desired range, and there is significant variation among centers [1] . nursing intervention is driven by high and low spo 2 alarms, probably more than the prescribed target range. oximeter alarms are notorious for false positives and are associated with alarm fatigue [2] [3] [4] . a persistent low alarm indicates the need for increased supplemental oxygen to minimize the impact of transient hypoxemia, usually a result of respiratory instability. in contrast, high alarms usually signal the need to titrate the oxygen down following recovery from a marked desaturation. 
if the alarm limits are too narrow or the response too aggressive, troublesome swings between hypoxemia and hyperoxemia can occur. further, there is little evidence supporting guidelines and general practice with regard to selection of spo 2 alarm limits. even consensus international guidelines for extremely preterm infants are not consistent. european guidelines report there is weak evidence to support setting the alarms close to the desired target range [5] . clearly doing so increases the frequency of false alarms and the potential for alarm fatigue [3, 6] . the most recent guidelines from the american academy of pediatrics, in contrast, suggest looser low alarms are more appropriate [7] . they further suggest that spo 2 alarm limits and target range should not only be decoupled, but also take into account the infant's maturity. neither guideline integrates the possible impact of differences in averaging period, alarm delay or differences in devices. in the last two decades studies have focused on the intended spo 2 target ranges for the extremely premature with a resulting evolution of the standard of practice [1, 8] . the most recent very large studies suggest a higher, narrower target range might be preferred for extremely preterm infants [5, 9] . this perspective is, however, far from a consensus [8, [10] [11] [12] [13] . evaluations of the optimal spo 2 exposure for more mature infants are lacking. the risks associated with hypoxemia in near term infants are appreciated; however concerns about hyperoxemia have until recently been limited, at least compared to the extremely preterm. we have developed an extensive spo 2 -pao 2 database from our nicu and previously reported on the magnitude of the change of risk of severe hypoxemia and hyperoxemia across different spo 2 ranges [14] . 
the aim of this analysis was to see if specific spo 2 levels for selection of high and low alarms and target ranges could be identified based on the difference in the risk of hypoxemia and hyperoxemia and further to determine to what degree these thresholds might change depending on infant maturity. this is a prospectively defined analysis with the aim of describing arterial oxygenation levels (pao 2 ) associated with various possible spo 2 alarm limits and target ranges. the study is based on the paradigm that high and low spo 2 alarm limits should consider the risk of hypoxemia and hyperoxemia independent of the desired spo 2 target range and further consider infant maturity [7] . this study reflects infants in the neonatal and infant critical care unit (niccu) of children's hospital los angeles. it is a tertiary care referral center affiliated with the keck school of medicine of the university of southern california. the 58-bed niccu receives transfers from the greater southern california area. the bioethics review organization at children's hospital los angeles (chla-17-00236) has waived the need for informed consent for aggregate data analysis studies and specifically approved this project. in a previous publication we described the development of a spo 2 -pao 2 database of infants receiving mechanical ventilator support with supplemental oxygen between august 2012 and july 2015 [14] . the database links arterial blood gas measurements in laboratory records with simultaneous spo 2 data from the patient monitor system. the spo 2 level is the mean of four 30-s readings coincident with the arterial sample. the gestational age from medical records for each infant, along with the date of measurement permitted calculation of post-menstrual age for each sample. the oximeter in the patient monitoring system used masimo set technology (masimo corporation irvine, california), with 10 s averaging. 
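the database construction described above, pairing each arterial blood gas with the mean of four 30-s spo 2 readings and computing post-menstrual age from gestational age and the measurement date, can be sketched as follows. this is an illustrative sketch only; the function names are not from the study.

```python
from datetime import date
from statistics import mean

def pma_weeks(gestational_age_weeks, birth_date, sample_date):
    # post-menstrual age = gestational age at birth plus postnatal age in weeks
    return gestational_age_weeks + (sample_date - birth_date).days / 7.0

def paired_spo2(readings_30s):
    # each blood gas is paired with the mean of four 30-s spo2 readings
    assert len(readings_30s) == 4
    return mean(readings_30s)

# example: an infant born at 26 weeks gestation, sampled 8 weeks later
print(pma_weeks(26, date(2014, 1, 1), date(2014, 2, 26)))  # 34.0
print(paired_spo2([92, 94, 93, 95]))                       # 93.5
```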
continuous monitoring of spo 2 is by practice post-ductal; pre-ductal assessments are conducted with another oximeter. arterial samples were collected when clinically indicated. umbilical catheters are used in most infants in their first week of life. as a matter of practice, after that right radial lines are preferred, but when not possible left radial or posterior tibial lines are placed. these study parameters were prospectively defined. normoxemia was defined as pao 2 between 50 and 80 mmhg. other oxemic levels were defined as severe hypoxemia (pao 2 ≤ 40 mmhg) and severe hyperoxemia (pao 2 ≥ 100 mmhg); we also evaluated levels below and above normoxemia (pao 2 < 50, > 80 mmhg). the selection of the severe thresholds was consistent with our previous publication. based also on a consensus of the investigators, the potential ranges of spo 2 alarm limits were 85-89% and 95-98%, and spo 2 target ranges were within the envelope of 88-95%. the endpoints were the chance of normoxemia, and the risk of the 4 oxemic levels. based on our previous work, we hypothesized that infant maturity would significantly impact the chance of normoxemia and risk of severe hyperoxemia but not of severe hypoxemia. we used post-menstrual age (pma) as the metric of maturity. pma values were categorized into three groups. these were < 33 weeks, 33-36 weeks and > 36 weeks pma. we felt that categories would be of more use clinically than a continuous effect. on a post hoc basis we also explored the impact of postnatal age. our primary measure was the risk or chance of each of these oxemic categories within the relevant spo 2 range. for the power analysis we assumed a baseline of relevant risk or chance of 25%, and considered sample sizes of both 150 and 300 pao 2 values in adjacent spo 2 bins. the range of 150-300 was selected as this was consistent with the numbers of observations in the smaller maturity categories at the spo 2 extremes. 
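the oxemic categories defined above can be sketched as a simple classifier. note that the paper's "below normoxemia" (pao 2 < 50 mmhg) and "above normoxemia" (> 80 mmhg) bands overlap the severe categories; this sketch splits them into mutually exclusive bins for clarity.

```python
def classify_pao2(pao2_mmhg):
    # oxemic categories per the study's definitions (pao2 in mmhg),
    # split into mutually exclusive bins for illustration
    if pao2_mmhg <= 40:
        return "severe hypoxemia"
    if pao2_mmhg < 50:
        return "below normoxemia"
    if pao2_mmhg <= 80:
        return "normoxemia"
    if pao2_mmhg < 100:
        return "above normoxemia"
    return "severe hyperoxemia"
```

for example, classify_pao2(65) returns "normoxemia" and classify_pao2(100) returns "severe hyperoxemia".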
based on this, we determined that there would be an 80% chance, at the p < 0.05 level, that we could detect a reduction to 12% with 150 observations and to 15% with 300 observations. we treated each spo 2 -pao 2 pair as an independent observation. we deemed consideration of within-patient effects as not only impractical because of the large number of patients, but also inappropriate because of intrapatient sample variability of temperature, ph, paco 2 and transfusion timing. descriptive presentations of continuous data are shown as median and iqr, and of proportions as percent. the primary variables are presented as percentage along with their 95% confidence intervals of the proportion. comparison of continuous variables used the kruskal-wallis test with dunn's procedure for pairwise comparisons. comparisons of proportions were evaluated using the chi-square test, with marascuilo's procedure for pairwise comparisons. the impact of maturity on each of the three oxemic category parameters was tested by including maturity category and spo 2 as independent variables in a logistic regression equation with oxemic risk or chance as the dependent variable. for the exploratory analysis of the effect of postnatal age, we added age to this logistic regression model. a two-tailed p < 0.05 was considered statistically significant for all comparisons. statistical tests were conducted with xlstat v19.02 (addinsoft, paris, france). our data included 26,162 spo 2 -pao 2 observations of infants receiving supplemental oxygen and respiratory support over a 3-year period. figure 1 provides a graphic overview of the risk of hypoxemia and hyperoxemia across spo 2 levels between 75 and 100%. the risk of each rises dramatically as spo 2 moves from a nominal target range. even when moving within the latter, the trade-off between hypoxemia and hyperoxemia is obvious. 
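the stated power (80% to detect a drop from 25% to 12% with 150 observations, or to 15% with 300) can be reproduced approximately with a normal-approximation two-sample test of proportions. this is a generic sketch under the usual large-sample assumptions, not the authors' actual calculation.

```python
import math

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n):
    # approximate power of a two-sided two-sample test of proportions
    # at alpha = 0.05, with n observations per group (normal approximation)
    z_alpha = 1.96
    p_bar = (p1 + p2) / 2.0
    se0 = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n)         # se under the null
    se1 = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)   # se under the alternative
    return phi((abs(p1 - p2) - z_alpha * se0) / se1)

# detecting a drop from a 25% baseline, per the study's power assumptions
print(round(power_two_proportions(0.25, 0.12, 150), 2))  # ~0.83
print(round(power_two_proportions(0.25, 0.15, 300), 2))  # ~0.87
```

both values come out at roughly 80% or a little above, consistent with the paper's statement.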
it is also of note that the difference in risk of severe hypoxemia and a pao 2 < 50 mmhg is much larger than the difference between severe hyperoxemia and a pao 2 > 80 mmhg. for analysis these observations were divided into three groups according to post-menstrual age (pma). details characterizing the 3 groups are shown in table 1 . there were 2603 observations from 155 infants less than 33 weeks pma, 2501 observations from 192 infants between 33 and 36 weeks pma and 21,058 observations from 1031 infants greater than 36 weeks pma. the number of observations per infant was similar among the three groups. the gestational age and postmenstrual age were consistent with the 3 maturity categories. the median spo 2 and pao 2 levels were lower in the group less than 33 weeks pma. this group also included a higher share of measurements in normoxemia and fewer in severe hyperoxemia. the chance of normoxemia was dependent on spo 2 (p < 0.001) but not pma. the chance of normoxemia across the range of 88-95% spo 2 was 65% (64-67 95% ci). the actual chances of normoxemia for 4 different overlapping spo 2 target ranges are shown in table 2 and were different, specifically slightly lower in the lower ranges (p < 0.001). the pao 2 levels for each are also shown in the table and the differences between them are statistically significant (p < 0.001). higher target ranges increase the possibility of higher pao 2 levels. the risk of hypoxemia (pao 2 < 50 and < 41 mmhg) was independent of pma but not spo 2 (p < 0.001). the risks at different potential alarm levels are shown in table 3 . the risks are not different at settings of 89, 88, and 87% spo 2 for either pao 2 < 50 mmhg or < 41 mmhg. they were both markedly higher at 86 and 85% spo 2 (p < 0.01). at these levels the risk of severe hypoxemia (< 41 mmhg) was marked: at 86% spo 2 (risk: 20% (16-24, 95% ci)) and at 85% spo 2 (risk: 25% (21-29, 95% ci)). the changes in risks are consistent with the changes in the pao 2 also shown in the table. 
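the reported intervals are standard confidence intervals for a proportion; the text does not name the method. for illustration, a wald (normal-approximation) interval reproduces the reported 25% (21-29, 95% ci) if one assumes roughly 450 observations in that spo 2 bin, a figure invented here for the example and not given in the text.

```python
import math

def wald_ci_95(p_hat, n):
    # normal-approximation (wald) 95% confidence interval for a proportion
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - 1.96 * se, p_hat + 1.96 * se

lo, hi = wald_ci_95(0.25, 450)  # n = 450 is an illustrative assumption
print(round(lo, 2), round(hi, 2))  # 0.21 0.29
```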
the variation (interquartile range) of pao 2 levels is similar. the risk of hyperoxemia (pao 2 > 80 and > 99 mmhg) was significantly different among the 3 pma categories (p < 0.001) and within each category among the spo 2 levels (p < 0.001). the actual risks at different potential alarm levels are shown in table 4 for each maturity category. the potential point of marked increase in the risk of a pao 2 > 80 and > 99 mmhg were different for the three maturity categories. with regard to severe hyperoxemia, for those < 33 weeks it was a reading of 98% spo 2 (risk: 25% (18-33, 95% ci)), which was significantly higher than at 95 and 96% spo 2 (p < 0.05). it was a spo 2 reading of 97% for those 33-36 weeks (risk: 20% (14-25%, 95% ci)), which was not significantly higher than 95 and 96%. a reading of 96% for those > 36 weeks (20% risk: (17-22, 95% ci)), and the difference between all pairs was statistically significant (p < 0.001). a point of demarcation for the risks of pao 2 > 80 mmhg is 1 spo2 level lower for each of the 3 pma categories. the changes in risks are consistent with the changes in the pao 2 levels also shown in the table. the variation (interquartile range) of pao 2 levels is similar except at 98% spo 2 , which is wider. our exploratory analysis determined that postnatal age was an independent predictor of chance of normoxemia (p < 0.001) and risk of severe hyperoxemia (p < 0.001), but not severe hypoxemia. with increasing age the chance of normoxemia increased while the risk of hyperoxemia decreased. however the size of the effect predicted by the regression equation was quite small; that is changes of + 0.7% (normoxemia) and − 0.6% (severe hyperoxemia) for each week of age. we evaluated a large database of neonatal spo 2 -pao 2 observations paired with infant postmenstrual age. our aim was to provide additional guidance to support the selection of spo 2 alarm levels and target ranges for neonates receiving supplemental oxygen. 
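the logistic-regression approach used in the exploratory analysis (oxemic risk as the dependent variable, spo 2 as an independent variable) can be illustrated with a from-scratch gradient-descent fit. the study used xlstat; the data below are invented purely for illustration, with the frequency of hyperoxemia rising as spo 2 rises.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    # plain gradient-descent fit of p(y=1) = sigmoid(b0 + b1*x)
    b0 = b1 = 0.0
    n = float(len(xs))
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(b0 + b1 * x) - y
            g0 += err
            g1 += err * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# synthetic data: hyperoxemia more frequent at higher spo2 (invented numbers)
data = []
for spo2 in range(88, 100):
    hyper_count = max(0, spo2 - 92)  # 0 of 10 at <= 92%, up to 7 of 10 at 99%
    for i in range(10):
        data.append((spo2 - 95, 1 if i < hyper_count else 0))  # centered spo2
xs, ys = zip(*data)
b0, b1 = fit_logistic(xs, ys)
# b1 > 0: the fitted odds of hyperoxemia increase with spo2
```

the fitted positive slope on spo 2 mirrors, qualitatively, the paper's finding that hyperoxemic risk rises with spo 2 within each maturity category.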
we identified a spo 2 range consistent with normoxemia, and showed how a target range could shift depending on a preference for avoiding higher or lower levels of pao 2 . we showed that the risk of hyperoxemia and hypoxemia increases exponentially as spo 2 moves toward extremes. we found that the risk of severe hypoxemia does not become marked until a level well below common low alarm settings. finally we found that the risk of severe hyperoxemia becomes marked at different levels depending on postmenstrual age and importantly at thresholds not consistent with standard practices. this report is, to our knowledge, the first to document these perspectives. we evaluated four overlapping target ranges, each 4 wide with midpoints of 90, 91, 92, and 93% spo 2 . our data showed that there was a similar chance of normoxemia across these potential target ranges, but slightly favoring the higher target ranges. this consistency also suggests that a wider target range, even 88-95% spo 2 , would maintain a similar chance of normoxemia, but could be easier to maintain. a wider range at the low end has been suggested for extremely preterm infants [10, 11] , in contrast to the european guidelines that recommend a higher target range [5] . two recent reports of practices in europe and the us reported that most target ranges were within this wider envelope, though more often narrower than 7 but rarely 4 or less [1, 8] . our analysis did not identify an effect related to maturity associated with normoxemia as we had expected. however our hypothesis was based on risk data of extreme pao 2 levels (< 41 and > 99 mmhg) at spo 2 levels between 90 and 95%, which is different from our normoxemia criteria (pao 2 50-80 mmhg). further, the information about likely pao 2 values, consideration of which might align with maturity, ought to be useful in selecting a target range within these boundaries [11] . a clinical aversion to higher or lower pao 2 levels is reasonable. 
the consideration of a trade-off of high and low oxygen exposure is supported by a landmark evaluation comparing the long term outcomes of nearly 5000 extremely preterm infants randomized to one of two spo 2 target ranges (85-89% or 91-95%) [9] . it found the high range was associated with increases in severe retinopathy of prematurity and more likely need for supplemental oxygen at 36 weeks pma, but lower levels of necrotizing enterocolitis and death. alarm fatigue in the nicu is a serious problem. pulse oximetry, while an essential tool, generates the most false alarms and is the alarm least likely to be associated with an actionable nursing intervention [2, 3, 15] . it is not uncommon with unstable infants to experience a spo 2 alarm every few minutes, while an intervention is often only warranted every 5-10 min. faced with this dilemma nurses have been shown to disregard alarm policy [1] . attention to selection of reasonable alarm settings (delay and level) as well as sensor/probe integrity can impact the frequency of alarms not needing intervention [16, 17] . however setting alarms, whether by policy or practice, to avoid excessive frequency must also consider the risk of missing or delaying response to important events. policy and practice must find an acceptable medium that balances the risks associated with each. our data provide spo 2 thresholds that are associated with marked hyperoxemia and hypoxemia. it is reasonable to consider a buffer zone between the alarm setting and the level of spo 2 concern. in addition, many events are short and it is standard practice to set the alarm delay to avoid these transient events not needing intervention. correspondingly it seems appropriate to set a longer alarm delay when the buffer zone is wider. our data indicate that the risk of hypoxemia is not related to maturity and is not marked until the spo 2 is at 86% or 85%, at which point the risk is increasing exponentially. 
in contrast we found no relevant difference in risk at levels between 87 and 89%. setting the low alarm between 87 and 89% spo 2 would create a buffer but at the expense of increased false alarms and alarm fatigue, without a compensating longer alarm delay. a recent analysis has determined that episodes that are significantly lower (< 80% spo 2 ) and prolonged (> 60 s) are related to bad outcomes [18] . however, we speculate that episodes of spo 2 with a nadir between 87 and 89%, even if prolonged, would not have a clinical impact, because of the low risk of severe hypoxemia. finally, based on an audit of extremely preterm infants in 83 nicus, hagadorn et al. reported good compliance with low spo 2 alarm unit guidelines, but provided no related details on the actual settings [1] . in preterm infants we found the risk of hyperoxemia did not become marked until spo 2 reached 97-98% in those < 33 weeks pma and those 33-36 weeks pma. this is higher than the most recent recommendations for setting the high spo 2 alarm around 95% in extremely preterm infants [5, 7, 10] . such a lower setting could be appropriate with two different rationales. it could be considered an appropriate buffer zone. but it certainly would increase false positive alarms, without a compensating longer alarm delay. it might also be appropriate if the goal was to avoid pao 2 levels approaching 80 mmhg, in alignment with a lower target range. consistent with this likely excessive false positive rate from tighter high alarms, hagadorn reported only 63% compliance with high spo 2 alarm unit guidelines [1] . in contrast to preterm infants, we found that the risk of hyperoxemia, pao 2 > 80 and > 99 mmhg, in infants > 36 weeks pma was marked at a spo 2 of 96%. while reports of guidelines are sparse [19, 20] , it is our impression that upper alarms for near term populations are often set much higher than 96%. 
this practice provides no buffer zone and certainly increases false negatives that could raise the clinical risk of hyperoxemia. the concern about the risks associated with hyperoxemia in near term infants is less prevalent than in preterms. nevertheless, hyperoxemia in children and adults has been associated with morbidity and mortality [21, 22] and it is reasonable to project these risks to near term infants. the anticipated shift of the oxy-hemoglobin dissociation curve with increasing maturity was evident at high levels of spo 2 but not at moderate and low levels. while the predicted shift in the sao 2 -pao 2 relationship is characterized as a shift of p50, it is understandable that the smaller predicted shifts in spo 2 at lower levels would be muted. the lack of precision and bias of the pulse oximeter, especially in these ranges, as well as other factors such as local perfusion are documented [23] . the transition from fetal to adult hemoglobin is quite predictable over a couple of months of life in healthy neonates, but we did not identify a meaningful impact associated with postnatal age. however the transition from fetal hemoglobin is affected by treatment and disease severity. transfusions have a marked effect [24] [25] [26] . our study population, all transferred for a higher level of care, commonly were transfused. accordingly, transfusion-naive infants would be shifted more to the left [14] . such a shift would reduce the risk of hyperoxemia. this study's design has several limitations. first the pao 2 thresholds we used for hypoxemia, normoxemia and hyperoxemia, while generally accepted, have not been validated with regard to outcome risk. it is unlikely they ever will be. there is a need for and a growing body of data correlating spo 2 exposure and outcomes. of particular interest is a pending analysis of the impact of the actual, rather than assigned, spo 2 exposure in the neoprom population [9] . 
we speculate that these interpretations will be easier with a better understanding of the relationship between pao 2 and spo 2 . other factors such as small for gestational age and hemoglobin level as well as cerebral and intestinal oxygenation are also relevant. second, the study is observational. the location of the spo 2 sensor and site of arterial sampling were not controlled. it is likely that some of the paired comparisons do not reflect pre-ductal assessment. this could increase the variance, but we do not think this would have a relevant effect on the bias of the risk (median values). third, we categorized the hyperoxemic risk into three pma groups. these are reasonable groupings, but it is probable that the effect is somewhat continuous with increasing maturity, certainly not strictly categorical. whether using these results to design research or to evaluate unit guidelines, several generalizability issues should be considered. the first is comparability to our study population. our unit is referral based, with all infants transferred in for tertiary care. after intervention and recovery infants are often returned when they only need low levels of inspired oxygen and minimal pressure support. as reported their supplemental oxygen requirements are quite high. as previously noted, as a result of transfusions, their oxy-hemoglobin relationship is shifted to the right. illustrative of this, in our least mature cohort we identified an incidence of severe hyperoxemia more than 10 times higher than that reported in a more traditional inborn population during the first week of life [27] . another important consideration is the averaging and alarm delay settings on the oximeter. one large study confirmed the clinical relevance of these settings [28] . they documented a marked decrease in the incidence of severe hypoxemic events with increasing averaging time, and also demonstrated that it was associated with increased duration of episodes. 
they recommended using shorter averaging times and longer delays. finally the oximeter measurement itself must be considered. our data reflect a good bit of scatter in the pao 2 at each spo 2 level. sources of the scatter seen with spo 2 monitoring are well described [13, 29] . differences in oximeter brands and models should be considered as well. our group previously reported no difference in bias between the masimo and nellcor devices across the range of saturations in the picu, but did identify a problem with the use of inappropriate sensors [23] . of more potential relevance, a difference between the masimo and nellcor oximeters has been reported in the spo 2 range of 87-90% [30] . while this difference is within the device's 3% accuracy specifications, it might well affect a decision about selecting a lower target range, or the low spo 2 alarm setting. we provide quantification of the rate at which the risk of hyperoxemia and hypoxemia increase exponentially as spo 2 moves towards extremes, and how it is affected by maturity. postmenstrual age influences the threshold at which the risk of hyperoxemia became pronounced, but pma did not alter the threshold for hypoxemia or normoxemia. the thresholds at which a marked change in the risk of hyperoxemia and hypoxemia occur can be used to guide the setting of alarm thresholds. these findings support reconsideration of common alarm threshold practices. in extreme preterm infants, but not in more mature infants, high spo 2 alarms may be set higher than 96%. likewise low spo 2 alarms may be set lower than 89%. spo 2 targeting ranges may be selected within the range of 88-95% spo 2 . optimal management of neonatal oxygen saturation must take into account concerns of alarm fatigue, staffing levels, and fio 2 titration practices. integration of these factors should be evaluated in quality improvement programs. 
fio 2 : fraction of inspired oxygen; spo 2 : arterial oxygen saturation measured noninvasively; nicu: neonatal intensive care unit; pao 2 : arterial partial pressure of oxygen (mmhg); paco 2 : arterial partial pressure of carbon dioxide (mmhg); pma: post-menstrual age (weeks) alarm safety and oxygen saturation targets in the vermont oxford network inicq 2015 collaborative nurses' reactions to alarms in a neonatal intensive care unit balancing the tension between hyperoxia prevention and alarm fatigue in the nicu alarm safety and alarm fatigue european consensus guidelines on the management of neonatal respiratory distress syndrome in preterm infants-2019 update retrospective analysis of pulse oximeter alarm settings in an intensive care unit patient population committee on fetus and newborn. oxygen targeting in extremely low birth weight infants pulse oximetry saturation target for preterm infants: a survey among european neonatal intensive care units association between oxygen saturation targeting and death or disability in extremely preterm infants in the neonatal oxygenation prospective meta-analysis collaboration safe oxygen saturation targeting and monitoring in preterm infants: can we avoid hypoxia and hyperoxia? 
graded oxygen saturation targets and retinopathy of prematurity in extremely preterm infants pulse oximetry targets in extremely premature infants and associated mortality: one-size may not fit all oxygen saturation targeting by pulse oximetry in the extremely low gestational age neonate: a quixotic quest hypoxemic and hyperoxemic likelihood in pulse oximetry ranges: nicu observational study video analysis of factors associated with response time to physiologic monitor alarms in a children's hospital evaluation of two spo2 alarm strategies during automated fio2 control in the nicu: a randomized crossover study reducing alarm fatigue in two neonatal intensive care units through a quality improvement collaboration association between intermittent hypoxemia or bradycardia and late death or disability in extremely preterm infants practical recommendations for oxygen saturation targets for newborns cared for in neonatal units. new zealand: newborn clinical network clinical reference group monitoring of oxygen saturation levels in the newborn in midwifery setting admission hyperoxia is a risk factor for mortality in pediatric intensive care oxygen exposure resulting in arterial oxygen tensions above the protocol goal was associated with worse clinical outcomes in acute respiratory distress syndrome accuracy of pulse oximetry in children the reactivation of fetal hemoglobin synthesis during anemia of prematurity the effect of blood transfusion on the hemoglobin oxygen dissociation curve of very early preterm infants during the first week of life effects of fetal hemoglobin on accurate measurements of oxygen saturation in neonates arterial oxygen tension (pao2) values in infants <29 weeks of gestation at currently targeted saturations alarms, oxygen saturations, and spo2 averaging time in the nicu oxygen targeting in preterm infants: a physiological interpretation oxygen targeting in preterm infants using the masimo set radical pulse oximeter 
authors' contributions tb was responsible for the conception of the study, the data analysis and initial draft of the manuscript. cn and ni collected the data. the authors (tb, ni, cn, pr, rk) critically reviewed and approved the manuscript and agree to be accountable for all aspects of the project. there was no funding provided to support the planning, implementation, analysis or manuscript development. the data sets generated and analyzed during this study are not currently publicly available, but are available from the corresponding author on reasonable request. the bioethics review organization at children's hospital los angeles (chla-17-00236) has waived the need for informed consent for aggregate data analysis studies and specifically approved this project. not applicable. key: cord-021492-z2bjkl9g authors: brossman, charles title: planning for known and unknown risks date: 2016-04-15 journal: building a travel risk management program doi: 10.1016/b978-0-12-801925-2.00001-1 sha: doc_id: 21492 cord_uid: z2bjkl9g this chapter covers standard definitions of duty of care, example case law where employer duty of care was applicable, a variety of sample risks and concerns that employers and travelers should be aware of, in context with a travel risk management program. legal duty of care-definition 1 "duty of care" stands for the principle that directors and officers of a corporation in making all decisions in their capacities as corporate fiduciaries, must act in the same manner as would a reasonably prudent person in their position. courts will generally adjudge lawsuits against director and officer actions to meet the duty of care, under the business judgment rule. 
the business judgment rule stands for the principle that courts will not second guess the business judgment of corporate managers and will find the duty of care has been met so long as the fiduciary executed a reasonably informed, good faith, rational judgment without the presence of a conflict of interest. the burden of proof lies with the plaintiff to prove that this standard has not been met. if the plaintiff meets the burden, the defendant fiduciary can still meet the duty of care by showing entire fairness, meaning that both a fair process was used to reach the decision and that the decision produced a substantively fair outcome for the corporation's shareholders. ijet international defines "duty of care" specific to trm as follows: 2 duty of care: this is the legal responsibility of an organization to do everything "reasonably practical" to protect the health and safety of employees. though interpretation of this language will likely vary with the degree of risk, this obligation exposes an organization to liability if a traveler suffers harm. some of the specific elements encompassed by duty of care include: • a safe working environment-this extends to hotels, airlines, rental cars, etc. • providing information and instruction on potential hazards and supervision in safe work (in this case, travel) • monitoring the health and safety of employees and keeping good records • employment of qualified persons to provide health and safety advice • relative to "duty of care" is the "standard of care" that companies are compared to in defending what is "reasonable best efforts" or "reasonably practical," based upon what resources and programs are put into place by an organization's peers to keep travelers safe. prior to 2001, business travelers thought nothing of being able to walk into an airport and meet their loved ones at their arrival gate. 
no security barriers, no cause for concern, because air travel was something that, at the time, our collective psyche felt was generally safe, with the exception of the occasional hijacking. fast-forward to a post-9/11 world, and consider what the world's airports look like now and how the processes surrounding airport security have changed the way that we travel, whether for business or pleasure. why would any of us believe that the need for added security, particularly around those traveling for business, begins and ends at the airport? for companies who have been paying attention since 9/11, the ones who, outside of the public eye, have had to deal with critical incidents that had the potential for loss of lives, corporate liability, and damage to their company's reputation, having a structured trm program not only reduced the potential for risk, but heightened the awareness of risk among their travelers. their definition of "travelers" extended beyond employees (transient travelers to expatriates) to contractors, subcontractors, and dependents. keeping travelers aware of imminent dangers takes effort and planning, and is no longer something that employers can simply react to after the fact. in some countries, a lack of planning or resources to support business travelers has the potential to be grounds for claims of negligence in a company's duty of care responsibilities, and can even constitute a criminal offense, as with the united kingdom's (uk) corporate manslaughter and corporate homicide act of 2007. what the "business judgment rule" in the above duty of care definition means in layman's terms is that a company must be able to prove that it put forth reasonable best efforts to keep its travelers safe. how this applies in different circumstances, jurisdictions, and countries will vary. most countries' duty of care requirements fall under their occupational safety and health laws.
for a comprehensive list of occupational health and safety legislation by country, an updated global database is maintained by the international labour organization (www.ilo.org 3 ). simply put, companies can no longer afford to forgo a proactive trm program and simply react after an incident takes place. the end result could reflect negligence on behalf of the company. for extensive detail on the uk's definition of duty of care in relation to the corporate manslaughter and corporate homicide act of 2007, visit http://www.legislation.gov.uk/ukpga/2007/19. because each of the 50 u.s. states is a separate sovereign free to develop its own tort law under the tenth amendment, there are several tests to consider for finding a duty of care under u.s. tort law, in the absence of a federal law. tests include:
• foreseeability-in some states, the only test is whether the harm to the plaintiff that resulted from the defendant's actions was foreseeable.
• multifactor test-california has developed a complex balancing test consisting of multiple factors that must be carefully weighed against one another to determine whether a duty of care exists in a negligence action. california civil code section 1714 imposes a general duty of ordinary care, which by default requires all persons to take "reasonable measures" to prevent harm to others. in the 1968 case of rowland v.
christian (following this case, the majority of states adopted this or similar standards), the court held that judicial exceptions to this general duty of care should only be created if clearly justified based on the following public-policy factors:
• the foreseeability of harm to the injured party;
• the degree of certainty that he or she suffered injury;
• the closeness of the connection between the defendant's conduct and the injury suffered;
• the moral blame attached to the defendant's conduct;
• the policy of preventing future harm;
• the extent of the burden to the defendant and the consequences to the community of imposing a duty of care with resulting liability for breach;
• the availability, cost, and prevalence of insurance for the risk involved; and
• the social utility of the defendant's conduct from which the injury arose.
pioneering companies (often in the energy services sector or government contractors) who were some of the first to adopt and implement forward-thinking programs recognized early on that a critical incident or "crisis" isn't usually defined as an event impacting large numbers of people. they found that the largest percentage of incidents that required support involved individual travelers or small groups. so while policies, plans, and readiness exercises are good to have in place for those highly visible incidents impacting large numbers of people, if handled improperly, the smaller incidents can cost companies considerably in damages and litigation costs, should their travelers or their travelers' surviving families prove that the companies in question weren't properly prepared to handle such incidents as they arise. case study-u.s. workers' compensation and arbitration: khan v.
parsons global services, ltd. united states court of appeals, district of columbia circuit-decided april 11, 2008 (https://www.cadc.uscourts.gov/internet/opinions.nsf/8dd6474d9dd96bce85257800004f879d/$file/07-7059-1110404.pdf)
• during the course of employment in the philippines, on a day off, mr. khan was kidnapped and subsequently tortured.
• the employment contract included a broadly worded arbitration clause, and a separate clause specifying "workers compensation insurance" as "full and exclusive compensation for any compensable bodily injury" should damages be sought.
• the khans alleged that the employer's disregard for mr. khan's safety, in favor of minimizing future corporate kidnappings, and the way parsons handled the situation provoked mr. khan's kidnappers to torture him, cutting off a piece of his ear and sending a videotape of the incident to the employer, causing the khans severe mental distress.
• mrs. khan alleged that the employer's efforts to prevent her from privately paying the ransom, despite threats of torture, may have exposed her to the guilt of knowing that she could have prevented mr. khan's suffering if the employer had not withheld the ransom details from her.
• mr. and mrs. khan filed a lawsuit for parsons' alleged mishandling of ransom demands by the kidnappers, also alleging negligence and intentional infliction of emotional distress, in d.c. superior court in 2003. the employer removed the case to the federal district court, arguing on the merits of the new york convention for the recognition and enforcement of foreign arbitral awards, and then filed a single motion to dismiss or, as an alternative, to obtain summary judgment to compel arbitration. the employer initially received a summary judgment to compel arbitration.
• upon appeal, this judgment was reversed. the court found that the recovery of the khans' tort claims was not limited by mr. khan's contract to workers' compensation insurance.
• an additional appeal contended that the initial summary judgment granted by the court denied the khans' discovery requests and dismissed mrs. khan's claim for intentional infliction of emotional distress.
• through the appeals process, the court found that the employer had in effect waived its right to arbitration.
this case study calls into question legal jurisdiction, u.s. workers' compensation liability limitations for employers, and the value of being prepared for an incident such as a kidnapping. this chapter outlines at a high level the general categories that all companies must take into consideration when developing a trm program. very often the question is asked, "do i really need to do any of this, because our company hasn't been sued to date?" if you have employees or contractors traveling on your behalf (especially internationally), whereby your company is paying for their time and/or expenses, then the answer is absolutely yes. the level of investment and complexity may vary between companies, but in general, all companies must have a plan for how to address the issues provided herein and others. duty of care is never finite in its definition, because companies must consider how laws from one country to the next will apply to travelers, contractors, potential subcontractors, and expatriates and their dependents, as well as any potential for conflict of law. also, as shown in the khan v. parsons global services, ltd. case study listed earlier in this chapter, employer remedies such as workers' compensation insurance in the u.s. aren't absolute, and therefore warrant additional efforts and protections. consider the following incident types or risk exposures, which in some instances can impact large numbers of travelers, but more commonly impact only one person. according to the u.s. department of commerce international trade administration, only 10 percent of international business travelers receive pretravel health care.
pretravel health care can include, but is not limited to, things like new or updated vaccinations or inoculations, general health exams, medical treatment or procedures for a condition that may be risky to travel with, or prescription medicine planning for travel lasting extended periods (longer than 30 days). the chief operating officer at ijet, john rose, comments that "a percentage of calls into our crisis response center are for minor, individual medical issues." however, callers may not always know that the situation is minor until they reach someone for support, which is why having an easy-to-identify, easy-to-access, single contact number or hotline for medical and security support is so important to all companies. a contracted crisis support service will know, based upon predetermined protocols, which providers will support the traveler for medical issues in the part of the world where they are traveling, and will ensure that the traveler gets the immediate advice that they need from a vetted medical professional. sometimes, with a brief conversation with a nurse, the parties can determine a minor treatment that the traveler can facilitate; in other circumstances, a referral to a more senior medical official or emergency medical resource may be necessary based upon the initial consultation by the first-level medical support personnel contracted by the traveler's company. as discussed later in the book, who provides the crisis response case management and who provides the medical or security services specific to the traveler in question are not necessarily mutually exclusive. there could be different providers in different parts of the world, used for different reasons that are outlined in company policies and protocols. the consequences of mistakes as a result of a lack of preparation or resources can be costly, from financial loss and traveler productivity loss to the company, to a serious health issue for the traveler, or simply a ruined trip.
while who supports traveler medical issues should be made very clear to everyone within an organization via training and policies, the following common medical mistakes should be avoided where possible, as recommended by dr. sarah kohl, md, of travelreadymd (http://www.travelreadymd.com): statistically, most medical problems you are likely to experience while traveling overseas cannot be prevented with a vaccine. for example, there are no vaccines for jet lag, diarrhea, blood clots, malaria, or viral infections such as dengue. before you travel overseas, make sure you are educated about these potential problems. most can be prevented with simple measures. information from different sources on the internet can be conflicting and can lead you to believe you need more interventions than are actually necessary. as travelers prepare to depart, employers should provide them with access to resources that can advise on medical concerns relative to their destinations. of course, travelers should also discuss any personal medical condition concerns with their own or other qualified medical professionals, in addition to receiving employer-provided risk intelligence regarding their trip. unfortunately, travelers regularly suffer needless medical complications because they fail to take simple steps to avoid predictable issues. simple precautions can save you a lot of discomfort and make your trip safer and more enjoyable. here are some examples: medical compression stockings, if properly fitted, can protect you from a life-threatening blood clot. knowing the right insect spray to choose, from the multitude of choices available, can protect you from insect-borne disease. avoiding seemingly harmless activities in certain locations (ones that a hotel concierge might even recommend) can protect you from parasites, respiratory illness, or malaria.
travelers often fail to recognize how a common illness such as diarrhea or a respiratory infection can cause a flare-up of an underlying condition. travelers who are good at managing food allergies, asthma, and diabetes at home may experience difficulty finding the resources they need overseas. in addition, these individuals may find themselves looking to a non-english-speaking doctor for help. measles, tuberculosis, and other infections are gaining a foothold in some european countries. low immunization rates within these communities are thought to be the root cause. don't risk becoming ill or bringing an infection home. check with your health care provider before you travel to discuss preventive measures. if you have a chronic health problem that is well under control, you will want to be prepared to self-treat under certain conditions. you may also want to be prepared to access a network of doctors who speak your native language, if needed. lastly, travelers should never assume that a pre-existing condition is covered by corporate- or consumer-based travel insurance or medical membership programs. when in doubt, always ask your human resources department or trm program administrator. companies commonly expect that corporate insurance policies or business travel accident (bta) policies provide enough coverage for travelers, when sometimes they may not. this is why protocols and regular training exercises for internal risk program stakeholders take place: so that stakeholders understand what is covered and what is not, as well as how to handle each situation. whether insured or not, consider the value and cost savings of prevention-based treatment, as shown in the examples provided below. consider the possibility that anything that an employee or representative comes in contact with during the course of a business trip (during or after hours) that can potentially make them ill or kill them is a liability to the employer.
biological hazards, or biohazards, are pathogens that pose a threat to the health of a living organism; they can include medical waste, microorganisms, viruses, or toxins. toxicity is the degree to which a substance can damage an organism (not exclusively biological, as it could be chemical). brett vollus, a former qantas airline employee of 27 years, filed suit against the airline, claiming that his spraying of government-mandated insecticides on planes to prevent the spread of insect-related diseases like malaria caused him to develop parkinson's disease after 17 years of administering the chemicals in the flight cabins. it was also discovered, from a brain scan after a tripping incident, that vollus had a malignant brain tumor. considering this was a government mandate, it will be interesting to see if the question becomes: what did the government know about the risks of these chemicals? if a precedent is set in this suit, will liability extend to other airlines using, or that have used, such chemicals for extended periods, to repeat business travelers who regularly flew or fly in markets where such spraying was or is common practice over a long period of time? epidemics are outbreaks of disease that far exceed expected population exposures during a defined period of time. epidemics are usually restricted to a specific area, as opposed to pandemics, which cover multiple countries or continents. mature trm programs monitor these more visible outbreaks and recommend vaccinations for travelers going to impacted areas; they also provide access to emergency medical resources when necessary, but also have a large focus on education, training, and prevention. however, employers should always be mindful of other environmental factors in the traveler's workplace, both at home and abroad, such as urban or rural environmental factors. examples may include prolonged exposure to pollution and lack of sanitation (particularly when it comes to expat communities).
employers should work towards limiting those exposures or changing the environment through continuous process improvement reviews. according to major medical and security evacuation suppliers, corporate-sponsored evacuations involving one or more travelers happen almost every day when you include both medical and security-related evacuations. it is a mistake to think that just because a case study or example is slightly dated, the instances they represent occur infrequently; it's quite the opposite. however, most incidents are not publicly documented to the degree that they can be reported upon. the five primary things that companies must be concerned with when facing a pandemic situation are:
1. the potential impact on personnel.
2. the pandemic crisis response plan.
3. the potential impact on business operations.
4. the potential impact on the business supply chain.
5. the potential impact on share value or price.
what many companies don't consider is the potential for shareholder lawsuits against executives for business losses resulting from a lack of planning for situations such as pandemics. is your organization pandemic ready? harvard's school of public health recently released survey data showing how deeply concerned u.s. businesses are about the possibility of widespread employee absenteeism that might follow an outbreak of the swine flu (h1n1). researchers from the school questioned more than 1000 businesses across the country. two-thirds of companies said they couldn't operate normally if more than half of their workers were out for 2 weeks, and four of five organizations predicted severe operating problems if half of their workers missed a month of work. from shared sick-time policies to work-at-home policies during a crisis, being able to quickly communicate a position or a plan, and to answer questions in the event of such an emergency, can not only save money and productivity, but garner employee confidence and calm nerves.
chapter 9 elaborates on the relationship between travel risk management (trm) and other aspects of risk management across the enterprise (erm, enterprise risk management). according to the new zealand herald, 4 the country's largest company, fonterra, could lose $150 million because of the ebola epidemic. fonterra ceo theo spierings noted that when african countries locked down their borders to control the disease, demand dropped for fonterra's products. he commented, "so…movements in west africa become more and more difficult, so that limits movement of food as well, movement of people-people going to the market, doing their groceries-so you see demand really dropping pretty fast." "if the market in west africa slowed down or dropped off that would affect 100,000 tonnes of powder," mr. spierings said. "that's about 5 percent, 6 percent of our exports. so you talk…$150 million or something like that." these survey results should encourage all organizations to prepare for the worst by developing a crisis management plan. in addition to ample warning, senior management has ample reason to prepare, and no excuse not to. an organization's executives won't be blamed for the outbreak, but they do risk censure if they fail to prepare, respond, and communicate with internal and external stakeholders. this white paper tells how. to help organizations and their leaders prepare for a possible h1n1 pandemic, certain key issues must be addressed to keep operations running as smoothly as possible:
• human resource (hr) issues that drive pandemic planning.
• planning for the steps necessary to keep an organization operating during the pandemic period.
• implementing the steps needed to create an enterprise-wide crisis management plan.
• internal and external issues that crisis communications must address.
why bother planning for the h1n1 pandemic? to put it simply, companies and organizations that plan for any type of crisis demonstrate the behavior of responsible citizens.
formulating a detailed crisis management plan specifically for h1n1 achieves four things:
1. protects employees' health and safety.
2. lessens the chance of a major interruption to your daily business.
3. protects your company's or your brand's reputation.
4. allows daily business activity to continue with minimal disruption if you are affected.
companies must establish open lines of communication with all audiences while dealing with the effects of the pandemic or other significant events. should one occur, these stakeholders will want to know what you are doing to manage the situation and minimize their risks. if you communicate with these stakeholders openly and promptly, you send four valuable messages:
• you are taking charge of the situation.
• you take it seriously.
• you have the best interests of your staff and customers at heart.
• you run a responsible company with nothing to hide.
pandemics have a disastrous effect on a company's optimal functioning because they prevent large numbers of critical employees from showing up for work. the resulting interruption to normal operations can have a disastrous cascading effect, affecting nearly every corner of the organization at considerable cost. employees unable to work or prevented from working become anxious and insecure. when they start asking management questions that aren't answered sufficiently or quickly, it exposes the fact that management hasn't developed contingency plans or that management failed to consider what employees need to know. part of the cost of failing to prepare can be measured by the resultant loss of trust in management's capability, judgment, and credibility. we know from experience there are certain predictable questions that employees will ask and that hr departments must be prepared to answer. for example: hr departments should, as a matter of urgency, review attendance and sick-day policies to ensure they have made allowances for managing the larger-than-normal issues h1n1 creates.
some of the policies that will need to be considered for implementation or addressed include:
1. how/when to start monitoring/screening employees at the workplace to determine if they are sick or pose a risk.
2. how/when sick employees should be sent home to protect colleagues at work, or be stopped/prevented from coming to work where they could infect colleagues.
3. how/when the company should be temporarily closed due to the number of sick employees.
4. how/when to implement steps to minimize face-to-face contact at work.
5. how/when to allow certain employees, including senior management, to work remotely from home or another branch/office.
6. how/when employees should be allowed to stay at home to look after sick family members.
7. how/when the company's travel policies should be changed/suspended.
8. how/when to stop employees from coming into contact with suppliers and customers.
9. how/when to implement and enforce a "wash your hands" and "cover your mouth and nose when coughing and sneezing" policy; this must include making face masks and the use of hand sanitizers mandatory across the company.
10. how/when to change the company payroll policy so that all employees receive electronic payments into their accounts; consider establishing an emergency "employee help" fund.
11. any and all extensions/additions to your existing payroll and work-hours policies.
at the core of your h1n1 crisis plan, your hr department must be fully prepared to explain and communicate any new policies or changes to employees on an ongoing basis in all offices. this includes offices and employees that may not be affected by the pandemic at all. international and regional offices must also be briefed, as they, too, could be directly impacted if there is an h1n1 outbreak. employees should also be asked for input and ideas. this may help to highlight potential management or operating aspects that have not been considered.
it will also make employees feel part of the pandemic planning process and, thus, more accepting of and cooperative with the final plan. if appropriate to your workplace and organizational culture, additional steps can be taken to protect employees by putting up educational posters, using training materials, and even arranging for annual flu shots (under a doctor's supervision) to be provided in the workplace for convenience. employees should also be encouraged to learn and do more on their own and away from work. all of these actions send a message to employees that you are looking out for them, their jobs, and the company's well-being. in return, employees are much more likely to "go the extra mile" in order to lessen the business impact of widespread absences. communicating during a crisis is important, but what businesses do is always more important than what they say. making good decisions and providing straightforward, honest, and factual information to all employees, with frequent updates, is one of the most critical actions management can take. ideally, all companies and organizations would have enterprise-wide crisis plans in place before a crisis breaks. but realistically, we know from multiple surveys that at least half don't. too many companies assume an "it can't happen to me" mentality or, in tough business or competitive conditions, decide not to invest in "insurance" activities. unfortunately, some find out the hard way that you cannot choose your crisis; it chooses you, and almost always at the most inconvenient time. if yours is an organization that hasn't taken the steps necessary to implement crisis preparedness, here are some interim steps that you can take quickly to address h1n1. remember, the most effective and least costly way to manage a crisis is to prevent it from happening in the first place. you cannot stop h1n1, but you can take steps to keep it from damaging your operations, your reputation, and your bottom line.
here's a quick checklist of things an organization can do, even at this late date:
1. appoint a pandemic coordinator or team. this individual or team will lead the organization through the various steps to become pandemic-ready.
2. have them first conduct a vulnerability and risk assessment. that means identifying areas in which you are at heightened risk of infection or in which your responses or ability to compensate will probably be weak. armed with this knowledge, you should be able to prepare for worst-case scenarios and begin planning accordingly.
3. get your crisis management team up to speed. a crisis management team consists of senior employees who will deal full time with a crisis while the rest of the organization runs as normally as possible. the most effective crisis teams typically consist of no more than five members who serve as its decision-making leadership. crises are not situations for committees or consensus building. they demand that swift and certain decisions and actions be made under "battlefield conditions." we strongly recommend that you have a "five-star general" heading up your team.
4. a crisis management team must possess sufficient inherent or delegated power to command unrestricted access to a full cross-section of corporate disciplines, including hr, sales, customer service, information technology (it), security, operations, facilities management, communications, and department/business unit heads, from every corner of your organization. the crisis managers must know who from these disciplines are to be brought on to support the crisis management team on an as-needed, "on-demand" basis. note that these disciplines are for advice and support, not crisis decision making. give them full authority to carry them out.
6. the team should also include someone who will be the company spokesperson throughout the crisis. ideally, the spokesperson should be a senior company executive.
he or she should have received formal media training, and should have the stamina, self-discipline, and inner strength to be able to convey trust and believability when speaking during a time when bad news may need to be delivered to various audiences.
7. think about including external experts on your team. these could include public health consultants, doctors, hr consultants, and business continuity experts.
no organization can hope to be crisis-ready unless it is prepared with messaging ready to be disseminated to audiences on short notice and under pressure. crisis messaging typically consists of fully or partially (fill-in-the-blanks type) prepared statements addressing a range of potential situations anticipated in advance. prepared organizations keep them in a template format. then, as a crisis develops and the actual facts of the situation become known, the relevant template can be rapidly updated with all pertinent information. in a crisis, you simply do not have time to agonize for long over "what are we supposed to say?" remember, it is only during the first 60 minutes of a crisis that you have your one chance to take control of the situation via proactive communication. in that time, messages must be disseminated internally to staff and externally to the relevant audiences, such as customers, stockholders, suppliers, and partners, and possibly the media. businesses that conduct vulnerability and risk assessments will have a better idea of the templates and draft messaging they will need for a flu outbreak. these situations range from temporarily closing a site to announcing an interruption of service. the tone of all messaging must demonstrate that management is taking the situation seriously. employees are your first priority and must receive crisis-related messaging before anyone else. the media and relevant external stakeholders can then receive the same or similar messaging soon after.
department heads in your company can be used to communicate directly with employees. employees should also be provided with messaging that they can share with others outside the organization. in today's "always-on," instantaneous online world, whatever employees are told invariably becomes public knowledge within minutes. from time to time, someone will ask a question that cannot be answered using prepared messaging. the crisis team must be prepared to reply "i don't know," and then either explain why, honestly and plainly, or commit to providing the answer at a given time in the future. nothing destroys trust and creates anger more than speculating or guessing at answers that may be proven wrong at a later stage. while you must respond quickly to all questions, you may not be able to answer them all. the crisis team must understand the difference. stakeholders want reassurance that you are doing everything possible to manage the situation and communicating without a hidden agenda. if you intend to keep your business open and running during a significant event, say so. for credibility, communicate the steps that you are taking to ensure it is kept open. if you are asked questions and are uncertain about what will take place, acknowledge this honestly. make every effort to find the answer quickly and, when you have it, follow up as soon as possible. plan to work with third parties. adopting a go-it-alone attitude in dealing with a pandemic is needlessly dangerous. organizations are wise to work with key third-party consultants to make crisis preparedness as robust as possible. key third parties could include: don't overlook your supply chains. companies providing each other with operations-critical products, goods, or services become inextricably linked. a problem in another company may cascade to yours, affecting your ability to meet contractual obligations. steps they take to stay in business may be beneficial or disruptive to you.
knowing ahead of time will help you make appropriate arrangements or establish alternatives. cooperating with customers, partners, suppliers, and local governments helps you become pandemic-resilient.

expert legal opinion must be obtained on how to address contractual obligations should a full-scale pandemic break out. if you are prevented from delivering products or services and thus break legally binding contracts, customers/partners could hold you liable for failing to plan adequately. such legal action could expand into or precipitate a second crisis when the media reports the legal action and you are forced to deal with a reputational crisis.

during a pandemic, organizations must communicate effectively with all internal and external audiences. being ready to communicate proactively and at a moment's notice requires advance preparations. in all cases, employees are the most important communications targets during a crisis. friends and family will contact them, along with many of their external business relationships (including the media), to ask "what's really going on?" and we know from experience that poorly briefed employees tend to speculate in the absence of solid information. this could easily precipitate a secondary crisis, forcing you to deal with rumor-mongering by employees and potentially false reporting by the media. either could cause serious damage. thus, you must designate in advance your primary or "official" internal communication channels, and let everyone in your organization know what they are. while face-to-face verbal communication is the best medium for internal audiences during a crisis, it may not be possible if h1n1 strikes. depending on your specific situation, one of the following channels should be considered in order to communicate companywide:

remember: what is written and given to employees can be passed on to the media and other parties.
communication with all external stakeholders must be timely and accurate, with messages consistent with what is being communicated internally. messaging differences should be determined by relevance to the receiver. but be safe: when in doubt, overcommunicate. in a crisis, everyone wants more information, not less.

if you had to communicate with 100% of your customers within 60 minutes, could you? do you have up-to-date, accurate contact information housed in databases that can support mass messaging such as blast e-mail or recorded voice messages with outbound autodialing? blast fax? cell phone information for texting? nobody has time to build these contact databases once a crisis strikes. assemble them now.

the best time to start communicating is when there is no crisis. a proactive information campaign could spearhead the opening of new channels of communication with your various external audiences prior to a crisis. the following external communication channels can be used proactively or reactively depending on the situation:

while social media tools such as twitter, facebook, youtube, and blogs can play a role in crisis communication, at this time we believe they are not the tools best suited to be your primary or "official" communication channel to the outside world. especially for business organizations, social media are not yet universally accessible. but more importantly, they are not within your complete control. you must be extremely careful about what you say via social media, as it is very difficult to change anything after it has been sent out. it is the very nature of most crises that the situations and facts change, and change often. social media messages containing old information can too easily recirculate, causing misunderstandings and conflicts precisely at a time when they can do the most damage.

a major h1n1 breakout could devastate supply-and-value chains, and possibly close down entire industry sectors.
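the multi-channel contact database described above can be sketched as a simple data structure plus a blast routine that tries every channel on file and reports who could not be reached. this is a hedged illustration only: the `Contact` fields, the `blast` function, and the stub senders are all hypothetical names invented for this sketch, standing in for whatever mass-messaging service a company actually uses.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    # per-channel contact details; any field may be empty for a given person
    name: str
    email: str = ""
    sms: str = ""

def blast(contacts, message, send_email, send_sms):
    """attempt every available channel for every contact;
    return the names of contacts with no channel on file."""
    unreachable = []
    for c in contacts:
        reached = False
        if c.email:
            send_email(c.email, message)
            reached = True
        if c.sms:
            send_sms(c.sms, message)
            reached = True
        if not reached:
            unreachable.append(c.name)
    return unreachable

# usage with stub senders that just record what would be sent
sent = []
unreachable = blast(
    [Contact("ana", email="ana@example.com"), Contact("ben")],
    "site closed today",
    send_email=lambda addr, msg: sent.append(("email", addr)),
    send_sms=lambda num, msg: sent.append(("sms", num)),
)
print(unreachable)
```

the point of the `unreachable` list is the "assemble them now" advice: gaps in the database surface in a drill, not during the first 60 minutes of a real crisis.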
this will prevent companies from providing or delivering much-needed services. customers, partners, suppliers, and employees will feel a significant impact. there will also be financial repercussions. in short, a business could be forced to close down if it is not ready for all eventualities. to be truly resilient in a crisis, the organization must have an up-to-date business continuity plan detailing how it will restore its operating functions, either totally or partially, within a certain period of time. to achieve this, key decision makers must:

• take an in-depth look at their company to identify the essential functions needed to keep doors open. nonessential ones can be temporarily discontinued without impacting day-to-day operations. people with key skills that are important to the business during the pandemic must be identified and protected whenever possible. those with nonessential skills may be told not to report for work during the pandemic.
• consider contingency plans to switch operations to other sites, if possible.
• identify alternative suppliers that you can switch to at a moment's notice. your primary suppliers of utilities, goods, products, and services may suddenly shut down because of poor planning. you should ask current suppliers to disclose what contingency plans they have in place to ensure the provision of uninterrupted service to you. put backup plans in place to switch to other/competing suppliers and contractors if you are the least bit unsure of their preparedness.
• determine whether their it systems are sufficiently robust that critical technology-dependent business processes would still function.

even though more than one billion people travel via commercial aircraft every year, illness as a direct result of air transportation isn't common; however, there are risk exposures associated with air travel that both employers and travelers should be cognizant of in order to mitigate the risks when possible.
most modern aircraft are equipped with hepa (high-efficiency particulate air) filters, which, according to the european air filter efficiency classification, can be any filter element that has between 85% and 99.9995% removal efficiency. according to pall corporation, for aircraft cabin recirculation systems, the definition has been tightened by the aerospace industry to a standard of 99.99% minimum removal efficiency. 5 most modern aircraft provide a total change of aircraft cabin air 20 to 30 times per hour, passing it through these hepa filters, which trap dust particles, bacteria, fungi, and viruses. many airlines use an airflow mix of approximately 50% outside air and 50% recirculated, filtered air, whereby the environmental control systems circulate the air in a compartmentalized fashion by pushing air into the cabin from the ceiling area and taking it in at the floor level from side to side, rather than moving air from the front to the back of the aircraft.

however, most viral respiratory infectious diseases, such as influenza and the common cold, are transmitted via droplets, most commonly spread between passengers by sneezing or coughing. these droplets can typically travel only a few feet this way. however, their survival once they land on seats, seatbelts, tray tables, and other parts of the passenger cabin can provide additional exposure, which is why sanitizing your personal seating area when traveling, and particularly your hands with an alcohol-based sanitizer before eating, is important. surgical masks have been shown to reduce the spread of influenza in combination with hand sanitization, particularly when worn and practiced by the infected individual. viral outbreaks in recent years of concern to business travelers have included middle east respiratory syndrome (mers), severe acute respiratory syndrome (sars), ebola, and h1n1 (swine flu), among others.
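to give a feel for what the 20-to-30 air changes per hour cited above imply, the standard well-mixed dilution model can be used: airborne particle concentration decays exponentially at the air-change rate. this idealized model (perfect mixing, perfect filtration) is an assumption added here for illustration, not a claim from the text or a safety guarantee.

```python
import math

def clearance_time_minutes(ach, removal_fraction=0.99):
    """time for a well-mixed space to remove the given fraction of
    airborne particles, assuming exponential decay at `ach` air
    changes per hour (idealized well-mixed dilution model)."""
    return 60 * -math.log(1 - removal_fraction) / ach

# at 20-30 air changes per hour (typical modern cabin, per the text),
# 99% clearance takes roughly 9-14 minutes under this idealization
print(round(clearance_time_minutes(20), 1))  # ≈ 13.8
print(round(clearance_time_minutes(30), 1))  # ≈ 9.2
```

the model also shows why droplet transmission between nearby passengers remains the dominant concern: dilution helps the cabin as a whole, but does little about a cough a few feet away.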
the international air transport association (iata) has developed an "emergency response plan template" for air carriers during a public health emergency, which can be found at the following link: http://www.iata.org/whatwedo/safety/health/documents/airlines-erp-checklist.pdf

disinsection is the use of chemical insecticides on international flights for insect and disease control. international law allows disinsection, and the world health organization (who) and the international civil aviation organization suggest methods for aircraft disinsection, which include spraying the aircraft cabin with an aerosolized insecticide while passengers are on board, or treating aircraft interior surfaces with a residual insecticide when passengers are not on board. two jurisdictions, panama and american samoa, have adopted a third method of spraying aerosolized insecticide without passengers on board.

not specific to just air travel, blood clots or dvt (deep vein thrombosis) can be a serious and potentially deadly health risk for any traveler with restricted mobility in an aircraft, car, bus, or train. anyone traveling for more than 4 hours without sufficient movement can be at risk. many blood clots are not necessarily visible and can go away on their own, but when a part of one breaks off, there is the possibility of it traveling to your lungs, creating a pulmonary embolism, which could be deadly. in addition to traveler training on prevention of dvt, companies should take this threat into consideration with regard to international class-of-service policies or reimbursement consideration for upgrades. according to the u.s. centers for disease control and prevention (cdc), the level of dvt risk depends on whether you have any other risk factors for blood clots in addition to immobility, as well as the duration of travel.
the cdc also states that most people who develop blood clots have one or more other risk factors for them, such as: 6

• older age (risk increases after age 40 years)

civil unrest generally takes place when a group of people in a specific location is angry, resulting in protests and violence. around the world, there are countless incidents of civil unrest that erupt, which can not only cause inconvenience and safety concerns for business travelers, but can also cause mental and emotional stress, the effects of which the employer is ultimately responsible for trying to limit whenever possible, and to treat as early as possible after the incident is over. within the first 6 months of 2014, the world saw civil unrest and protests in turkey, brazil, ukraine, thailand, venezuela, malaysia, cambodia, india, egypt, hong kong, russia, china, and the united states (excluding military acts of war or civil war).

in january of 2011, governments and private organizations from around the world began evacuating people from egypt due to civil unrest. approximately 50,000 americans lived and worked throughout egypt at the time, and approximately 2400 requested evacuation assistance from the u.s. government. such an exercise requires massive planning and resource availability, even for much smaller groups of people. consider the number of other companies competing for the same resources to evacuate their people, as well as the general public trying to leave. companies without a plan in place, along with proper strategic crisis response resources, would have been last in line to evacuate their impacted travelers and at greater risk of someone getting hurt or killed.

at one time, civil unrest may have been considered primarily politically motivated, but today, there are many factors that lead to the spark that starts the fires of violence.
things such as overpopulation, lack of food and resources, poverty versus wealth (income inequality), crime, lack of jobs, and religious persecution, while sometimes related to political causes, are all reasons for the increased violence we see today. with mobile technology increasingly available to the middle and lower classes of the world, it doesn't take much, or much time, to incite anger or hatred in others, who can assemble quickly, sometimes before one has a chance to react.

throughout the text of this book, readers should see a common theme about the importance of quality risk intelligence. the previous statement about violence breaking out before one can react is a perfect example of how real risk intelligence (not simply recycled news) can often predict these events as they are starting to come together and warn people in advance, so that companies and individuals can take steps to mitigate their exposure. in such examples, would employers and travelers want "cheap information" from a provider that primarily scrapes news wires on the internet, or qualified, vetted security analysts with thousands of sources? if a life depended on it, i'm confident that people would choose vetted intelligence. another way to understand the value of news versus intelligence is that "intelligence" is in effect "analysis + news + context + advice." experienced security analysts specializing in specific geographic areas and subject matter produce quality intelligence.

climate change can also drive civil unrest through sea-level rise, damage to property, water shortages, and increased costs associated with lost productivity or infrastructure collapse. people simply go where the goods and the work are provided. when that is lost for various reasons over a large area, there can be mass migrations that sometimes see the intervention of military units to prevent border crossings and an unanticipated drain on other populations' resources.
property damage and serious violence in vietnam in may 2014, as a result of anti-chinese protests, were experienced not only by chinese businesses, but also by assets owned by companies from other countries. some manufacturers experienced an interruption to production, causing between 4 percent and 16 percent decreases in company share prices. these figures and insights are intended to support business cases for companies to invest not just in products and programs to avoid business disruptions caused by civil unrest and other factors, but in the time required simply to have plans in place to mitigate the risk.

imagine being in a foreign country on business and getting pulled over on the road in your rental car by a local police officer. unaware of any laws that you may have broken, after a quick discussion with the officer, you realize that they are extorting you for a bribe and you simply don't have the cash or the training to respond to the situation properly. alternatively, a traveler arrives in a foreign country via a commercial flight, carrying marketing collateral and merchandise to give away at a conference they are attending. the local customs authorities misinterpret part of the merchandise, because the conference is being held in a deeply religious country with harsh laws regarding morality. not only does the traveler fear for their safety, but the company also wants to avoid causing an international incident, which can be difficult to clean up. does your company provide resources and training to travelers regarding how to handle themselves in such situations?

women from western countries may still find it hard to believe how many places there are in the world where their personal safety, and possibly their lives, can depend upon the length of their skirt and sleeves, or the time of day that they are out and about, particularly without a male escort.
in 2013, a woman from new york was found dead in turkey; a turkish man confessed to killing her after allegedly trying to kiss her. according to news reports, she was a first-time international traveler, an avid social media user, and was in constant contact with friends and family. it is reported that she wasn't off the beaten path or doing anything risky, simply taking photographs. sometimes just having some awareness training about your destination can spare female travelers potential conflict or incident, such as holding one's purse in her lap or at her feet with a thick strap around her leg to secure it, or ensuring that luggage tags do not openly display addresses and have a cover that must be opened to reveal the information.

according to joni morgan, director of analytic personnel at ijet international, "in some cultures, for instance, it's not appropriate for a woman to initiate a handshake." "in afghanistan, it's considered an insult to show the bottom of your shoe, so when crossing your legs, you want to be aware of that." 7 female road warriors are learning important skills that are helpful in all destinations, but in some more than others, additional care should be taken. knowing when to take additional care is an important part of pretrip travel intelligence provided by an employer's trm program, supported by a vetted travel risk intelligence provider. some considerations for female business travelers while traveling alone or even with peers on business include the following:

1. always plan your route before going anywhere. never leave your hotel or office without understanding where you are going and appropriate routes. travelers do not want to look lost in the street looking at maps or their mobile devices for directions.

2. use vetted taxis or ground transportation providers.
make an attempt to prebook all transportation with providers that your company has preapproved, and that have appropriate security policies and procedures in place, such as identifiable car numbers, driver identification, tracking, and electronic order confirmation. removing the potential for unfamiliar, unvetted ground transportation providers can drastically reduce the potential for assault or abduction.

3. travelers can purchase a device to block the outside view of the inside of their hotel room by assailants who have devices that enable broad visibility inside hotel rooms from the outside via peepholes. in the absence of such a device, place tape or a sticker over the inside peephole opening.

4. choose your hotels carefully. make it clear to your employer that you take safety seriously and that you expect safety considerations to have been taken into account when designating preferred hotels for employees to stay at. employers should be able to articulate what kinds of safety standards go into their preferred hotel selections, which form the basis for how different incidents can be mitigated or handled should an incident occur.

5. never stay at hotels or motels where the room door is exposed to the open air (outside).

6. try not to accept hotel rooms on the ground floor. being on a higher floor makes it more difficult for an assailant to get away or not be seen on surveillance cameras.

7. never tell anyone your room number verbally. if a hotel employee asks for it, provide it in writing and personally hand it to them. do not write it on a check and leave it unattended. you don't want someone in the area to overhear you providing this information verbally or to view it on your check.

8. alcohol consumption: never leave your drink unattended or out of your sight. a momentary distraction is an opportunity for someone to place drugs into your drink. also, never drink until intoxicated while on business, and be mindful of locations where drinking alcohol may even be illegal.
9. emergency phone numbers: know the equivalent of 911 or the local emergency services phone number, as well as your local consulate or embassy phone numbers, and preprogram them into your mobile phone, in addition to your company's crisis response hotline. whichever number you are instructed to call first according to your company's policies (if your company provides a crisis hotline), having those numbers handy can save your life when moments count.

10. never tell anyone that you are traveling alone. avoid solitary situations. try to remain in social situations where plenty of people are around. if you feel uncomfortable, leave.

11. leave a tv or radio on when you leave your hotel room to provide the perception that someone is in the room.

12. never hesitate to ask security or someone to escort you to your room, and avoid exiting an elevator on your hotel room's floor when sharing the elevator with a man. if necessary, go back to the lobby level until more people get on the elevator or you can ride it alone.

13. use valet parking. self-parking can often put individuals at risk of assault in unsupervised car parks or garages.

14. upon arrival at your hotel, take a hotel business card or postcard and keep it with you at all times. if ever you are away and need to return, and you either don't remember the address, or your driver doesn't know where it is, or you don't have a signal on your mobile device, you can use the card to provide address details (usually in the local language).

15. do not use door-hanging room service order forms (typically for breakfast), as they often note how many guests you are ordering for.

16. make sure you have adequate insurance. just because you are on a business trip doesn't mean that your employer has obtained enough insurance or services to support you in the event that a crisis occurs.
hopefully, employer-provided insurance and support services are adequate and have been effectively communicated, but don't travel for business without a thorough understanding of what kind of coverage and support you have. in particular, any medical coverage should guarantee advance payment to local service providers and not require travelers to pay for services and file for reimbursement upon their return home. most people don't have access to the many thousands of dollars that might be necessary to procure sufficient treatment and support.

17. travel with smart travel accessories. travel with a small, high-powered flashlight and one or more rubber door stops for the inside of your hotel room (be aware of the downside of using them in case of a fire).

18. leave copies of your passport with someone at home who can easily get a copy to you if you need it. having a copy can expedite the replacement of a lost or stolen passport if needed.

an honor killing is a homicide of a family member, typically by another family member, based upon the premise that the victim has brought dishonor or shame to the family in a way that violates religious and/or cultural beliefs. again, as with religious or cultural restrictions on modest clothing, honor killings are not exclusive to women, but within the cultures and countries where honor killings are more generally accepted, men are more commonly the perpetrators of revenge or honor killings, very often charged by the family to watch over and police female family members' behavior, restricting or prohibiting things such as adultery, refusal to accept an arranged marriage, drinking alcohol, or homosexuality. honor killings are not exclusive to any one country or religious faith, because they are found in a broad scope of cultures, religions, and countries. although more common in places such as the middle east and asia, there have been documented cases of honor killings in the united states and europe.
if honor killings are based largely on the premise of family honor, why would nonfamily members or business travelers need to be concerned? honor killings have been known to happen to nonfamily members in strict, culturally conservative countries. perceived inappropriate behavior, typically with a female member of a conservative family, could result in the killing of both the female family member and the nonfamily suspect. such killings can even take place in broad daylight. in lahore, pakistan in 2014, one such incident occurred involving multiple participants while the police looked on; the victim was killed for marrying a man that she loved without family consent. 8 often these crimes are hard to document or record because they are disguised as suicides or, in some latin american countries, as "crimes of passion." the united nations fund for population activities (unfpa) estimates that as many as 5000 women fall victim to honor killings each year. 9

article 57 of qatar's constitution states that it is a "duty of all" who reside in or enter the country to "abide by public order and morality, observe national traditions and established customs." this means that wearing clothing considered indecent or engaging in public behavior that is considered obscene is prohibited for all, including visitors. in qatar, the punishment can be a fine and up to 6 months in prison. with kissing or any kind of physical intimacy in public, as well as homosexuality, being outlawed under sharia law, all travelers to or via the middle east for business or tourism purposes (e.g., to attend the 2022 world cup) should take heed. the qatar islamic cultural centre has launched the "reflect your respect" social media campaign to promote and preserve qatar's culture and values. posters and leaflets advise visitors, "if you are in qatar, you are one of us. help preserve qatar's culture and values, please dress modestly in public places."
while research finds no definition of modest clothing in qatar's article 57, campaigns such as this suggest that people cover up from their shoulders to their knees and avoid wearing leggings, which are not considered pants or modest dress. an example of the campaign leaflet can be found in "qatar launches campaign for 'modest' dress code for tourists," published by the independent (uk newspaper). 10 modest dress applies to both men and women. of course, strict laws, preferences, or rules regarding dress expectations for women are not exclusive to any one country (see http://www.pewresearch.org/fact-tank/2014/01/08/what-is-appropriate-attire-for-women-in-muslim-countries/).

while each employer may have specific approaches to handling an incident such as sexual assault, there must be a defined process for reporting such an event that involves crisis response resources that can intervene and provide advice on how to handle the situation with local authorities, perhaps first by contacting diplomatic contacts before contacting the police. facing local authorities alone in a foreign country over an issue as sensitive as sexual assault can be daunting and intimidating without a company or diplomatic representative being there to assist.

8 nbc news, "family stones pakistani woman to death in 'honor killing' outside court," may 27, 2014, http://www.nbcnews.com/news/world/family-stones-pakistani-woman-death-honor-killing-outsidecourt-n115336.
9 united nations, resources for speakers on global issues, "violence against women and girls: ending violence against women and girls," http://www.un.org/en/globalissues/briefingpapers/endviol/.
10 lizzie dearden, "qatar launches campaign for 'modest' dress code for tourists," independent, may 27, 2014, http://www.independent.co.uk/news/world/middle-east/qatar-launches-campaign-for-modest-dresscode-for-tourists-9438452.html.
crisis response suppliers should be equipped with the necessary contacts, recommended protocols, and resources to help the victim and employer address the situation and get help as soon as possible. this is another good example of why employers should have a single global crisis response hotline for any crisis that a traveler may encounter while on business travel.

sexual harassment can happen anywhere. what happens if you require a traveler to use a supplier per the company's travel policy, and a representative of that supplier sexually harasses the traveler? in addition to standard protocols within the workplace, consideration must be given to business travel, which from many perspectives today is an extension of the workplace. consider one example: a female business traveler, over the course of several months on a project, travels during the week, returning home on weekends. over time, a car rental clerk at the location she rented from weekly began making comments to her about her appearance each time she checked in or returned a car. eventually, the rental clerk began calling her mobile phone to share how he liked what she was wearing and began sending her text messages while she was in town, using the mobile number she provided at check-in. not responding and scared, the traveler canceled all future reservations and booked rental cars with another provider. shortly thereafter, the clerk began calling and texting her, asking why she canceled and when she would be coming back. a concerned colleague of the traveler brought the situation to the company's travel manager, who intervened with their human resources and legal departments to proactively address the situation with the authorities and the supplier, and to provide appropriate support for the traveler as best they could. the end result, after much investigation, was the issuance of restraining orders against the clerk and termination of his employment. it turned out that the supplier hadn't done sufficient background checks on its employees, and the clerk in question had a history of similar behavior.

a hate crime is a criminal act of violence targeting people or property that is motivated by hatred or prejudice toward victims, typically as part of a group, based upon creed, race, gender, or sexual orientation. a critical component of any trm program is disclosure of potential risks to the traveler prior to taking a trip to a destination. in consideration of laws and cultural beliefs in select countries or regions that sanction the persecution, imprisonment, or killing of members of the lgbt (lesbian, gay, bisexual, and transgender) community, specific races, religions, or sexes (mainly women), travelers must be prepared with information and training on acceptable behavior when traveling to these destinations and understand how to get help should they find themselves in a difficult position or a potential victim of a hate crime.

saying the wrong thing, at the wrong time, in the wrong place, or wearing something inappropriate, or acting a certain way that isn't culturally acceptable in some parts of the world, can put travelers in real danger. how does your company prepare your travelers for facing these challenges as they travel? while some laws that promote discrimination that can lead to hate crimes are more notable in the press, such as the antigay propaganda law put into place in russia prior to the sochi olympics, some are less obvious to the average business traveler, such as up to 14 years in prison in nigeria for simply being gay, or india's supreme court ban on gay sex, or the execution of homosexuals in saudi arabia.

in april 2013, an 82-year-old man wearing islamic dress was attacked and killed while walking home from his mosque in birmingham, uk, by a 25-year-old ukrainian student who told police that he murdered the victim because he hated "nonwhites."
11 according to "one in six gay or bisexual people has suffered hate crimes, poll reveals," a 2013 article in the guardian (uk), some 630,000 gay and bisexual people in the uk have been victims of hate crimes in the previous 3 years, prompting police to take the problem more seriously. 12 such examples continue to support the notion that a crisis doesn't need to be an incident that impacts large numbers of people at once. quite often crises involve one person at a time, and they don't need to take place in a high-risk destination, thus discounting the argument by some companies that trm isn't necessary for those who don't travel to high-risk destinations. a crisis can happen anywhere for many different reasons, affecting as few as one person at a time.

although privacy laws generally prohibit companies from asking employees about sexual orientation, making sure that all employees (of any sexual orientation) understand the dangers that face lgbt travelers can help to mitigate risks for themselves (if lgbt, traveling with an lgbt person, or if perceived as lgbt) or their fellow travelers, considering that there are many countries in the world where homosexuality is still a crime.

• in mauritania, sudan, northern nigeria, and southern somalia, individuals found guilty of "homosexuality" face the death penalty. the last five years have witnessed attempts to further criminalize homosexuality in uganda, south sudan, burundi, liberia, and nigeria.
• south africa has also seen at least seven people murdered between june and november 2012 in what appears to be targeted violence related to their sexual orientation or gender identity. five of them were lesbian women, and the other two were non-gender-conforming gay men.
• in cameroon, jean-claude roger mbede was sentenced to three years in prison for "homosexuality" on the basis of a text message he sent to a male acquaintance.
• in cameroon, people arrested on suspicion of being gay can be subjected to forced anal exams in an attempt to obtain 'proof' of same-sex sexual conduct.
• in most countries, laws criminalizing same-sex conduct are a legacy of colonialism, but this has not stopped some national leaders from framing homosexuality as alien to african culture.
• a cave painting in zimbabwe depicting male-male sex is over 2000 years old.
• historically, woman-woman marriages have been documented in more than 40 ethnic groups in africa, including in nigeria, kenya, and south sudan.
• in some african countries, conservative leaders openly and falsely accuse lgbti (lesbian, gay, bisexual, transgender, and intersex) individuals of spreading human immunodeficiency virus (hiv)/acquired immune deficiency syndrome (aids) and of "converting" children to homosexuality, thus increasing levels of hatred and hostility towards lgbti people within the broader population. lgbti individuals are more likely to experience discrimination when accessing health services. this makes them less likely to seek medical care when needed, making it harder to undertake hiv prevention work for this population and to deliver treatment where it is available. in many government programs they are not identified as an "at risk" population.
kidnapping and ransom activities targeting military enemies and employees of multinational companies who are from countries considered to be enemies to terrorist causes are the primary fundraising strategies of organized terrorist groups. even for companies that do not routinely visit high-risk locations, having some sort of policy in place for proof of life, which is the means for verifying that a captive is in fact who the captors say they are and that the captive is still alive, such as by providing information that only the alleged victim would know, can save valuable time in a sensitive situation and perhaps someone's life.
additionally, a kidnap and ransom insurance policy is something for all companies to consider, with the understanding that kidnappings happen at any time around the world and largely go unreported. according to the guardian news and media (uk), approximately 75% of fortune 500 companies have kidnap and ransom (k&r) insurance. k&r insurance dates back to 1932, when it was first offered by lloyd's of london after the kidnapping and murder of american aviator charles lindbergh's infant son. in 2015, the uk's home secretary, theresa may, supported the passage of the uk's "counter-terrorism and security act 2015," which prohibits insurers from paying claims used to finance payments to terrorist groups. the uk is where many of the world's k&r insurers operate. many insurers insist that it shouldn't matter, because they claim not to pay or finance ransoms, but instead to pay claims for services and expenses related to negotiating the release of the captives in question, medical and counseling treatment, and items such as employee salaries while in captivity. it's difficult to obtain information from clients who hold such policies, because most policies have strict cancelation provisions to prevent a company from disclosing the fact that it has such a policy. details specific to restrictions on insurance-related payments associated with terrorist-related ransoms in the uk's counter-terrorism act of 2015 can be found at http://www.legislation.gov.uk/ukpga/2015/6/section/42/enacted. companies with any travel to high-risk destinations have a responsibility to provide some kind of survival training for those travelers, in addition to access to resources and current intelligence before, during, and sometimes after their travel is complete.
to complicate matters, at the 2013 g8 summit an agreement was made not to pay ransoms to kidnappers, for fear that the money directly funds terrorist organizations; some countries, such as the uk, are therefore enacting laws to prohibit the transfer of funds for hostages in certain circumstances or locations. senior foreign and commonwealth office (fco) officials in the uk estimate that over $60 million was paid in ransoms to terrorists during the 5 years leading up to the 2013 report. it isn't safe to assume that your government will help bankroll a hostage's release if you find yourself in such a situation, and you may face criminal prosecution if you offer a ransom to specific groups. people who commit kidnappings do so for a variety of reasons, including political or religious views, but most often they are purely financially motivated. perception is everything: anything that identifies traveling employees of large or multinational companies makes them an easy target, which is the reason for using code names on arriving ground transportation greeting signs. of course, how one dresses and where one goes also have an impact on how victims are targeted (e.g., wearing expensive jewelry, standing out from the crowd in expensive clothing, or making it clear that you work for a large multinational company, such as by wearing clothing with logos or meeting drivers with company names on greeting placards). later in this book, kidnappings are explored in greater detail. some statistics will be presented that both companies and travelers should find serious enough to change their perception about the possibility of kidnapping happening to them. kidnapping incidents should be accounted for in all corporate crisis response plans. while some medical emergencies may require evacuation, it is more common to receive calls for assistance involving acute or preexisting conditions that can be diagnosed and treated locally.
lost or stolen medication, allergic reactions to food or the environment, and unexpected illnesses are common occurrences prompting calls to a corporate crisis response hotline. however, in some instances, individuals must be quickly assessed to determine whether adequate medical care can be obtained locally, and if not, a decision must be made to evacuate that person to the closest logical facility capable of treating the individual. many domestic health insurance plans do not provide coverage for individuals traveling abroad, and even when they do, they often require out-of-pocket expenditures for services; in other words, upfront payment by the patient, leaving the patient to file for reimbursement upon return. more often than not, in these circumstances, this equates to thousands of dollars that most people do not have immediate access to, especially on short notice. the cdc recommends that if domestic u.s. coverage applies, and supplemental coverage is being considered, the following characteristics should be examined when evaluating coverage for planned trips:
• exclusions for treating exacerbations of preexisting medical conditions.
• the company's policy for "out of network" services.
• coverage for complications of pregnancy (or for a neonate, especially if the newborn requires intensive care).
• exclusions for high-risk activities such as skydiving, scuba diving, and mountain climbing.
• exclusions regarding psychiatric emergencies or injuries related to terrorist attacks or acts of war.
• whether preauthorization is needed for treatment, hospital admission, or other services.
• whether a second opinion is required before obtaining emergency treatment.
• whether there is a 24-hour physician-backed support center.
additionally, one should have coverage for repatriation of mortal remains, should someone covered unfortunately die while away from their home country.
because so many domestic healthcare plans do not provide for international coverage and evacuation services, companies must provide comprehensive coverage for their employees globally, and employees should be fully aware of what is included in said coverage. employees may decide that what the company offers is not enough by their personal standards and consider purchasing additional coverage to supplement what the company provides. when purchasing different types of travel-related insurance, it's important to understand the differences between the products offered in the marketplace, especially the differences between consumer and business travel products. options can include:
1. travel insurance, which provides trip cancellation coverage for the cost of the trip, delays or interruptions, and lost luggage coverage. it can and often does provide some amount of emergency medical and evacuation coverage, but often requires payment of medical expenses by the insured in the country where services are rendered (versus direct payment by the insurer), and the filing of paperwork for reimbursement upon the insured's return home. buyers should be mindful of whether or not the policy provides guaranteed payment directly to the suppliers in question.
2. travel health insurance, which generally pays for specified or covered emergency medical expenses while abroad; however, such consumer-based insurance (and others) may likewise require that the individual pay any medical expenses in the country where services are rendered and file for reimbursement upon returning home. insured parties should always check whether guaranteed payment to providers is included in coverage.
3. medical evacuation coverage, which pays for medical transport to either the closest available treatment facility or the insured's home country for medical attention, depending upon the policy and the situation or medical condition.
considering that the cost of a medical evacuation varies greatly depending upon the distance and the services required for the transport, expenses can be very high. it is recommended that policies have greater than us$100,000 in coverage (some provide up to us$500,000 or more) and include transportation support for an accompanying loved one or family member; policies with less than us$100,000 in coverage may not provide enough protection and should be reconsidered. buyers should note that these products cover primarily just the evacuation and not medical services or treatments.
4. medical membership programs, which can cater to individual travelers on a per-trip or annual basis or on a companywide basis. these programs can vary widely by provider and membership type, but can potentially provide access to network services and resources with separate liability for payment, or network access with some coverage for payment of specified services rendered, based upon premiums and policy guidelines.
the legal information institute (lii) at cornell university law school provides a third-party overview of workers' compensation. 13 variable forms of this type of coverage are provided at both the state and federal levels in the united states, with similar forms of workers' compensation laws also in place in select countries around the world. these laws are typically intended to provide some form of medical benefits and wage replacement for employees who are injured on the job. this coverage is often provided to employees in exchange for relinquishing their right to sue their employer for negligence, sometimes with fixed limits on payment of damages. employers need to understand whether the workers' compensation coverage in place for their own and their employees' protection covers international travel. in some cases, additional policies or riders will be required to provide coverage for travel outside of the traveler's home country or state.
additional considerations for this kind of coverage include "when" and "where" the coverage is in effect outside of a company office or facility (e.g., during business travel). in some cases this may limit employer liability, but whether it does varies by jurisdiction and circumstance. considering how workers' compensation benefits have been reduced in recent years, especially in the united states, 14 much consideration needs to be given to assessing what coverage is needed for traveling employees above and beyond workers' compensation, coordinated with crisis response protocols and risk management support providers for efficient case management, claims, and documentation. all of these considerations provide a strong business case for why employers should have unique and specific programs in place for medical services and evacuations for employees and contractors traveling abroad, in addition to their standard domestic health care plans and workers' compensation plans. no traveler should embark on a business trip without complete confidence that medical coverage and resources not requiring personal, out-of-pocket expenditure are being provided by their employer. a 2014 study that included disclosures from 767 institutional investors, representing us$92 trillion in assets, compiled by the sustainable-economy nonprofit cdp, stated that in addition to the increased physical risks being caused by climate change, climate change is already impacting their bottom line. one major uk retailer has stated that 95 percent of its global fresh produce is already at risk from global warming. according to the french foreign minister, commenting at a 2015 un conference in japan, two-thirds of disasters stem from climate change. the comments were made days after the 4-year anniversary of the 2011 earthquake and tsunami that killed approximately 19,000 people and triggered the fukushima nuclear disaster.
margareta wahlstrom, the head of the un disaster risk reduction agency, stated that preventative measures provide a very good return as compared to reconstruction. un secretary general ban ki-moon asked world nations to spend us$6 billion a year on prevention. an important aspect of both a company's trm and business continuity plan is to determine the unique dangers or risks associated with where your offices or facilities are located, as well as where you travel on a regular basis, and to make emergency evacuation and safety plans in the event that an incident occurs, such as the following case study related to the 2011 japanese earthquake and tsunami. it is important to know what resources local governments have made available in close proximity to your travelers' or expats' locations, or what your company itself may provide, such as "vertical evacuation points" to escape rising tsunami flood waters. these vertical evacuation points may be in a building that is tall enough to support large numbers of the local population above a high water level, with ample support systems and supplies. not understanding and communicating these plans to your people when appropriate could exact a cost in lives, money, and corporate reputation. * american red cross, "japan earthquake and tsunami: one year update, march 2012," http://www.redcross.org/images/media_customproductcatalog/m6340390_japanearthquaketsunami_oneyear.pdf. on march 11, 2011, a 9.0 magnitude earthquake created a 124-foot tsunami.
more than 19,000 people died or were presumed dead, with more than 400,000 people evacuated and more than 12.5 million people impacted across the country.* for the first time in more than 190 years, iceland's eyjafjallajökull volcano erupted on march 20, 2010, with massive lava flows and ash clouds that closed most of europe's commercial air space for several days; the ash cloud then spread to other parts of the world, stranding millions of air travel passengers. based upon the composite map from the london volcanic ash advisory centre for the period april 14 to 25, 2010, one can clearly see the massive geographic scale of this incident, and why almost all commercial and private air transportation was prohibited and severe shortages of lodging and emergency shelters occurred. whether or not you believe in climate change and the reasons behind it, the statistics demonstrating the depletion of the world's ice sheets and glaciers, warmer ocean waters, and consistent year-over-year sea-level increases will touch most multinational companies profoundly in the 21st century. the new york times states that sea levels worldwide are expected to rise 2 to 3 feet by the year 2100, though the rise is not occurring evenly worldwide. the times' referenced study states that the atlantic seaboard could rise by up to 6 feet, with boston, new york, and norfolk, virginia, named as the three most vulnerable areas. 15 if current warming trends and rising sea levels continue, cities such as london, bangkok, new york, shanghai, and mumbai could eventually end up under water, according to greenpeace, displacing millions of people and causing massive economic damage. 16 consider a weather event the size of 2012's hurricane sandy, which tips the scales of expected water levels in a low-lying urban city and results in the displacement of thousands or millions of people, with your travelers or expatriates stuck in the middle of it.
when evacuation is not an immediate option, questions regarding the availability of safe accommodation, power, food, and water become priorities, as demand far outweighs supply under such circumstances. these occurrences are much more common now than in our recent past. whether working in a local office or manufacturing facility, or traveling for business, many companies have employees with disabilities. although building or facility laws and rules may require designated escape routes, ramps, and elevators/lifts in the event of an emergency such as a fire, what about plans for when a disabled traveler is in transit or at a hotel? special considerations need to be made for disabled travelers in the event of a medical or security-related evacuation. the need to relocate travelers can be caused by any number of factors, but before the decision to evacuate is made (usually at considerably more expense than traditional commercial air travel), someone with access to quality intelligence has to make the call as to whether to "shelter in place," assuming safe shelter is available, or to evacuate to the closest safe location. nonmedical causes for evacuation could include biohazards (e.g., the fukushima nuclear facility damage in japan), civil unrest, or incoming natural disasters. the decision whether or not to evacuate requires thoughtful planning and resources, in order to ensure that companies aren't competing with the rest of the world in a reactive situation where many others were caught off guard as well.
ijet case study: ijet and the south sudan evacuations
in december 2013, ijet international provided continuous monitoring, intelligence, and analysis of the situation involving heavy ethnic fighting in south sudan to existing clients with operations in the country. support included providing real-time situational updates, establishing direct lines of communications with client personnel, and arranging for safe havens and security evacuations.
on december 18, 2013, the situation worsened to include the closure of the juba international airport. during the first 2 days of fighting, prior to the airport closure, more than 500 people were killed and more than 800 wounded in the violence. during this time, several client personnel traveled across the country's borders to safe havens, but soon after the airport closure, with mounting concerns about large numbers of refugees, those borders quickly closed. ijet successfully evacuated its clients within the first 3 hours of the airport's reopening, bringing in a 15-seat light-passenger aircraft from nairobi, kenya, and performing some of the first successful group evacuations from this incident without injury. the ijet case study excerpt is an example of why a company's trm program cannot consist of technology alone and discounted news being marketed as intelligence. in situations like these, quality intelligence is what saves people's lives. in this instance, quality intelligence was critical to the coordination of ijet's incident management team's on-the-ground services and support, which led not only to evacuating its clients, but to knowing the right time to move those clients to the airport and into the air. some medical evacuation services do not provide security-based evacuations, while others can offer both. companies should consider that one provider for medical and security services and support, intelligence, and insurance might not always be the best solution. some companies select one provider for their terms and coverage for medical services, support, and evacuations, but another provider for security-related intelligence, services, and evacuations. there are even companies with multiple providers for each medical and security service in different parts of the world, working with completely separate insurance providers to pay for the services rendered.
each company must consider the coverage and resources currently available to them via their existing insurance relationships, and then solicit proposals for coverage based upon a clear outline of the company's needs, informed by claims history. ultimately, companies need a program that can coordinate with all contracted services and insurers, providing a seamless experience for travelers and administrators, and consolidated documentation. the term "open booking" refers to a booking made by a traveler outside of their managed corporate travel program, bypassing any contracted travel management company (tmc). technical advances have found ways to incorporate reservations data from multiple websites or suppliers for a traveler's trips into one place for reporting and calendar population. however, to properly capture this data, there are two primary methods available. the first is to allow the application to scan the traveler's inbox for travel-related e-mails and import the data accordingly. the second is having travelers or independent suppliers e-mail reservation confirmations to an application or "parser," which can parse the data into a standardized database. some major travel suppliers (airlines, for example) offer "direct connections" from their websites to some of these applications. however, in the absence of a direct connection, or if a company cannot get beyond the security concerns of a third-party application scanning employee inboxes, the automatic capture of 100 percent of open booking data cannot be guaranteed, because of human error. for that reason, and for many others related to policy and program management, open booking should not be promoted as a primary booking method within a managed travel program if trm is to be effective.
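the "parser" step described above can be sketched in a few lines. this is a minimal illustration only: real itinerary parsers maintain per-supplier templates, because every supplier formats its confirmation e-mails differently, and all field names and patterns below are invented for the example.

```python
import re
from dataclasses import dataclass

@dataclass
class Reservation:
    """a standardized record, as an open booking database might store it."""
    traveler: str
    record_locator: str
    carrier: str
    depart: str
    arrive: str

# illustrative patterns for one hypothetical confirmation format;
# production parsers need a template per supplier.
PATTERNS = {
    "traveler": re.compile(r"Traveler:\s*(.+)"),
    "record_locator": re.compile(r"Confirmation:\s*([A-Z0-9]{6})"),
    "carrier": re.compile(r"Airline:\s*(.+)"),
    "depart": re.compile(r"Depart:\s*(.+)"),
    "arrive": re.compile(r"Arrive:\s*(.+)"),
}

def parse_confirmation(email_body: str) -> Reservation:
    """extract a standardized reservation record from a confirmation e-mail."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(email_body)
        if match is None:
            # a missing field means the template didn't match -- flag for review
            raise ValueError(f"missing field: {name}")
        fields[name] = match.group(1).strip()
    return Reservation(**fields)
```

the brittleness of this matching is exactly why, absent a direct connection, 100 percent capture cannot be guaranteed: a supplier's format change, or a traveler forgetting to forward the e-mail at all, silently drops the trip from the database.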
however, there is a place for open booking technology within a managed travel program: to help capture travel data normally considered "leakage," which is often not collected for reporting. such data can originate from conference- or meeting-based bookings made via housing authorities or meeting planners, or perhaps from travel that is booked and paid for by a client. companies that allow open booking for all travel struggle to effectively locate travelers in a crisis, disclose potential risks or alerts, or provide services to some travelers in the event of a crisis. outside of suppliers with direct connections to open booking applications or parsers, even when your travelers are trained to e-mail open booking itineraries to the required application for data capture, employers have no control over when they do this. within a managed program (via most tmcs), all new bookings, modifications, and cancelations are usually updated in the database in real time or close to it, providing employers with ample opportunities to mitigate risk in a number of ways when time is of the essence. some well-known companies offering travel-related solutions claim that open booking equates to more traveler choice and that their solutions can bridge the gap for any potentially missing data. when using an open booking application's itinerary data for security purposes, however, changes and cancelations can be a major issue. some applications require user intervention to manually delete trips that have been canceled, or to resubmit trips for changes, unless an update can be e-mailed or picked up by an e-mail scan. consider a situation where a trip is booked and ticketed via an airline website, and the itinerary is e-mailed to the traveler, who either allows their inbox to be scanned or forwards the e-mail to the open booking application. days later, the traveler needs to cancel that booking and rebook with another airline to travel with someone else from the company.
the arrangements are made with the new airline, but the traveler forgets to delete the original trip in the open booking application. now there are two trips in the system for the traveler. imagine the confusion this could cause for employers if similar circumstances impacted multiple employees at the same time. a good managed travel program can provide a variety of options, including easy methods of making reservations, while still capturing the critical reservations data needed to effectively manage risk for business travelers. trying to manage risk with a completely unmanaged booking process for the sake of open booking, even if it did offer more traveler choice, is not worth it, considering that in a crisis you have a higher likelihood of inaccurate data than if the traveler had booked via your managed program (via a contracted tmc working in conjunction with your trm provider). does that mean that managed program data is perfect? no, but if implemented properly, reservations data can be more tightly controlled. on january 15, 2009, when us airways flight 1549 went down in the hudson river in new york city, a regional office for an employer received a phone call from an employee's relative who was hysterical, insisting that his family member was on that plane. the office in question contacted their tmc, but was unable to obtain any information on the traveler, so they then turned to the travel manager. by this time, the inquiring family member had intentions of coming into the office because he wanted "some answers," of which there were none at the time. human resources suggested that the relative contact the crisis response hotline, while dispatching security to the office in question to protect the facility and its personnel. human resources also advised the person to stay home to await any communications and, considering how upset he was, for his own safety.
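the stale-itinerary problem above is easy to detect mechanically: two itineraries for the same traveler with overlapping dates are a strong signal that one of them is a canceled booking that was never deleted. a minimal sketch, with invented record fields, of the kind of reconciliation check a program administrator could run:

```python
from datetime import date

def find_overlaps(trips):
    """return pairs of itineraries for the same traveler with overlapping
    dates -- a common symptom of a canceled open booking never deleted."""
    by_traveler = {}
    for trip in trips:
        by_traveler.setdefault(trip["traveler"], []).append(trip)
    overlaps = []
    for items in by_traveler.values():
        items.sort(key=lambda t: t["start"])
        # compare each trip with the next one in date order
        for earlier, later in zip(items, items[1:]):
            if later["start"] <= earlier["end"]:  # dates collide
                overlaps.append((earlier, later))
    return overlaps
```

flagged pairs would still need human review (overlapping legs of one journey are legitimate), but a nightly check like this turns a silent data-quality problem into an actionable queue.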
it turned out that the traveler in question was on a legitimate business trip, but had purchased the trip online (outside of the employer's managed program) with a personal credit card, and without using an open booking application for itinerary data capture. because of this, it was difficult or nearly impossible to get helpful intelligence to the traveler or the traveler's family or to provide adequate resources and support, and had there been a death or severe bodily injury involved, the traveler wouldn't have been eligible for the corporate credit card's accidental death and dismemberment (ad&d) coverage. consider the personal losses of a business traveler whose hotel room was just broken into. what if, as a result of such a theft, the traveler's identity was stolen? will your company support the needs of the traveler to ensure that the traveler's assets and identity are preserved? the traveler wouldn't have been there if it weren't for the business trip! identity theft has reached epidemic proportions globally, with plenty of statistics published by consumer advocacy groups and government agencies, such as the u.s. federal trade commission. the u.s. federal trade commission's 2014 consumer sentinel network data book listed identity theft as the top reported complaint by consumers for the 15th year in a row, with 332,646 complaints. the act of traveling for business presents many opportunities for a traveler to be exposed to scam artists looking to steal the traveler's identity. while taking precautions may be inconvenient and time-consuming, there are many things that business travelers can do to reduce their chances of having their personal information stolen, such as:
• keep a copy of all account numbers and related account information in a safe place that is separate from where debit and credit cards are kept.
• put mail and newspaper delivery on hold.
this can prevent mail theft, or an indication that the person is away, which can lead to the person's home being robbed.
• don't travel with a checkbook; use only credit cards and cash.
• don't use debit cards, as pins (personal identification numbers) can be stored in some card reader devices; if the information is stolen, criminals could steal all of the cash available in the account(s) linked to the debit card.
• notify credit card issuers prior to travel, especially if traveling internationally, so that they can authorize legitimate charges and notify the card holder promptly if activity on the account doesn't match their records.
• use vpns (virtual private networks) when using the internet. if the traveler's company doesn't provide one, the traveler should purchase their own annual subscription.
what if your employee had prescription medicine with black market value and it was taken as well? now a theft has turned into a potential medical issue. ask yourself the following:
• some medicines cannot be refilled before their due dates, and others are not easily refilled early. do you have the resources and support available globally (24 × 7 × 365) to get those medicines replaced?
• do you have the means to get the traveler replacement medicine before the traveler experiences any serious medical issues?
• what kind of medical support do you have available, particularly outside of the traveler's home country, should the traveler need immediate medical attention?
having someone steal property from your hotel room or safe is bad enough, but when a theft has happened, the event itself ends quickly; if your computer is hacked, however, the problem can linger in many ways. hotels are ideal places for business travelers to fall victim to hackers, who may want access not only to some of your intellectual property, but to your identity as well.
subsequent chapters offer tips about using hotel and public-access wi-fi, if you must use them. however you access the internet while on business travel (e.g., personal hotspot, wi-fi with vpn, or other tools), try not to conduct financial transactions or log into financial websites while traveling. losing personal passwords to e-mail accounts or other personal-use websites can not only be financially damaging to the individual, but can also be humiliating when private information is made public. the most important thing to remember when faced with a mugging or pickpocketing incident is not to resist in a confrontation and not to pursue assailants. things can be replaced, but not your life or well-being. your first priority should be to get away to a safe place, typically a business or a well-lit public place with lots of people, where you can contact the authorities. according to the united states cdc (centers for disease control and prevention), among adults aged 55-64 during the years 2009 to 2012:
• percent of persons using 1-4 prescription drugs in the past 30 days: 55.6%
• percent of persons using five or more prescription drugs in the past 30 days: 20.3%
source: http://www.cdc.gov/nchs/data/hus/hus14.pdf#085
according to a 2013 report by cbs news atlanta, approximately 7 in 10 americans use prescription drugs. 17 with such a large percentage of the working population taking prescription medications regularly, travelers taking medications need to research, well before international travel, the rules for bringing their drugs into another country. in general, most countries allow up to a 30-day supply of legitimately prescribed medications, in their original bottle.
carrying more than a 30-day supply of prescription medication can be considered a violation of many countries' laws, particularly when it comes to controlled substances such as narcotic pain medication or psychotropic drugs. in some cases, it simply isn't enough to carry the original prescription bottles with medication in them; travelers may be required to carry additional documentation, along with having filed advance approval forms, to be in compliance with the jurisdiction in question. in particular, narcotics or psychotropic drugs must have extensive paperwork prepared by your doctor and submitted to the government of the country that you are visiting well in advance of travel, in order to process your paperwork for approval. employers must consider providing this kind of information to travelers with their pretrip briefings or risk reports, where applicable. the possibility of medicine being confiscated and/or criminal charges being filed against someone for lack of approval to transport controlled substances into some countries is very real, and could cost someone their life if stranded on international travel without their medicine.
tclara, a travel data analytics firm, has developed a scoring system to track how much wear and tear each traveler accumulates from his or her travels. the goal is to predict which road warriors are at the highest risk of burnout, so that management can intervene in a timely manner. the system uses a company's managed travel data to score a dozen factors found in each traveler's itineraries. points are assigned to "trip friction" 18 factors such as the length of the flight, the cabin, the number of connections and time zones crossed, the time and day of week of each flight, etc. this allows for traveler-specific and company-specific benchmarking, which in turn helps senior executives to influence travel policy, procurement strategy, and traveler behavior to optimize a managed travel program.
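for illustration only, a points-based score along these lines might look like the following sketch; the factor names and weights are hypothetical assumptions, not tclara's actual model:

```python
# hypothetical trip-friction scorer: higher score = more wear and tear.
# all weights below are illustrative assumptions, not tclara's published model.

def trip_friction_score(trip):
    """sum friction points for a single itinerary (0 = frictionless)."""
    score = 0.0
    score += trip.get("flight_hours", 0) * 2          # longer flights add friction
    score += trip.get("connections", 0) * 5           # each connection adds friction
    score += abs(trip.get("time_zones_crossed", 0)) * 3
    if trip.get("cabin", "economy") == "economy" and trip.get("flight_hours", 0) > 6:
        score += 15                                   # long-haul economy penalty
    if trip.get("red_eye", False):
        score += 10                                   # overnight flight penalty
    return score

# "trip a": short domestic hop; "trip b": long-haul economy with a connection
trip_a = {"flight_hours": 3, "connections": 0, "time_zones_crossed": 1}
trip_b = {"flight_hours": 17, "connections": 1, "time_zones_crossed": 13,
          "cabin": "economy", "red_eye": True}
```

scoring every itinerary this way supports the traveler-specific and company-specific benchmarking the text describes: per-traveler totals can be tracked over a year and compared against a burnout threshold.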
push travelers through too many pain points, and they may soon find reasons to not take the next trip. for example, think about flying coach from chicago to singapore, or taking a short-haul connection for a lower fare. tighten the travel policy too much, and you could have recruiting and retention problems, which could have serious cost or business implications. companies shouldn't focus solely on minimizing the transaction cost of their trips; instead, they should focus on minimizing the total cost of traveling. that's the sum of the trip's transaction cost plus the cost of traveler friction (the black curve in the figure below), or the "total cost paradigm." to put trip friction into perspective, tclara provides two trip examples (refer to the figure below) showing a low level of trip friction in "trip a" versus a higher level in "trip b." according to tclara (refer to the figure below), their data shows that trip friction is clearly correlated with higher road warrior or frequent-traveler turnover. while strong travel policies under managed corporate travel programs are critical to successful trm (versus unmanaged, open booking allowances), there is a delicate balance between cost savings, safety, traveler satisfaction, and, very importantly, business continuity. trip friction and traveler friction are good examples of the link between trm and operational risk management (see chapter 9), which shows how losses of productivity or employees under the purview of trm can impact company production and/or success.
personal well-being of travelers might be the most surprising of topics for consideration, but it certainly is relevant in the context of trm programs today. believe it or not, employers must be as cognizant of their employees' or contractors' mental well-being as of their physical safety.
stressed-out, tired, or even unhappy employees can mean lower productivity and a higher level of risk. something as simple as knowingly requiring someone to work in a stressful environment without trying to make it better, or just working them to excess, can cause an employee to suffer various forms of posttraumatic stress or depression. it can also be as extreme as requiring employees to work in a stressful situation without being properly trained or counseled, as was the case with some flight attendants who may have been forced to fly again out of new york immediately after witnessing the 9/11 attacks, when commercial flights began operating again, without consideration of stress or trauma, proper treatment, and counseling. to the extent that employers monitor and evaluate the physical safety of employees or contractors in the workplace, they must now also take notice of the level of employee/contractor stress and contribute to overall happiness. it turns out that employees with high states of well-being have lower health care costs. 19 it's unfortunate that employers must usually see a financial benefit associated with such things before implementing them, but in addition to health care costs, if people are happier and healthier, it stands to reason that they are also more productive.
the cwt solutions group conducted a study to shed light on the hidden costs of business travel caused by travel-related stress. their aim was to understand and measure how, and to what extent, traveler stress accumulates during regular business trips. they defined a methodology and a set of key performance indicators (kpis) to estimate the impact that this travel-induced stress has on an organization (see "the carlson wagonlit travel solutions group study"). the scope of the study includes data from 15 million business trips booked and recorded by carlson wagonlit travel (cwt) over a 1-year period.
they followed a divide-and-conquer approach: each trip was conceptually broken down into 22 potentially stressful activities covering pretrip, during-trip (transportation- and destination-related elements), and posttrip. associated stress was measured based on the duration and the perceived stress intensity of each activity. in essence, each of the 22 steps of the trip was viewed as having two components: stress-free time and lost time. to quantify the effects of stress, the study introduced the following kpis (key performance indicators): the travel stress index (tsi) across all trips booked through cwt is 39%. the results show that the actual lost time is 6.9 hours per trip, on average. the largest contributions to this lost time arise from flying economy class on medium- and long-haul flights (2.1 hours) and getting to the airport/train station (1.1 hours). the financial equivalent of these 6.9 hours is us$662. the lost time greatly depends on the type of trip taken: an increase in transportation time typically generates an increase in lost time, and average actual lost time varies by trip type. finally, the study indicates that the impact of stress can be reduced, but not entirely eliminated. the tsi was analyzed on a client-by-client basis, and it was found that companies can expect to control, on average, 32 percent of the actual lost time. in a previous publication [ref. 1], cwt solutions group presented the perceived stress reported for 33 activities related to a typical business trip. the current study incorporates 22 of these factors (table 1.1), including nine of the top 12.
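the headline numbers above can be cross-checked with simple arithmetic (a sketch; the implied hourly rate is inferred from the published figures, not stated in the study):

```python
# back-of-envelope check of the cwt figures quoted above
avg_lost_hours_per_trip = 6.9   # reported average lost time per trip
cost_equivalent_usd = 662       # reported financial equivalent of that time
controllable_share = 0.32       # share of lost time companies can expect to control

implied_hourly_rate = cost_equivalent_usd / avg_lost_hours_per_trip   # ~us$96/hour
recoverable_usd_per_trip = cost_equivalent_usd * controllable_share   # ~us$212/trip

print(f"implied value of lost time: ~us${implied_hourly_rate:.0f}/hour")
print(f"potentially recoverable per trip: ~us${recoverable_usd_per_trip:.0f}")
```

in other words, if roughly a third of the lost time is controllable, each trip carries on the order of us$200 of recoverable hidden cost.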
according to the cornell university law school, in general terms, intellectual property is any product of the human intellect that the law protects from unauthorized use by others. the ownership of intellectual property inherently creates a limited monopoly in the protected property. intellectual property traditionally comprises four categories: patent, copyright, trademark, and trade secrets. in summary, if you are in business, you likely have some intellectual property to protect. it could be an idea, or simply a process that you use, which gives you a competitive edge. most people think of a stolen laptop or mobile phone when they think of vehicles for stolen intellectual property, but a far more common vehicle is the flash drive, which most business travelers carry with them today and which isn't monitored or regulated in the same manner as phones, computers, or tablets. companies should limit the use of flash drives to those drives that have some level of fips (u.s. federal information processing standard) validation to encrypt the data and/or destroy the data should the drive be physically tampered with in an attempt to access its contents. information on current fips standards (fips 140-2) and announcements regarding the successor fips 140-3 standard can be found by visiting http://csrc.nist.gov/groups/stm/cmvp/standards.html#05. many companies have policies specific to certain countries whereby, when travelers intend to visit the countries in question, the travelers either cannot take laptops or standard mobile devices with them, or must take "clean machines," hardware designed specifically for travel to countries with high rates of intellectual property theft. some of this hardware may have special configurations or software to add layers of protection, in addition to not storing important files locally (i.e., cloud computing), or transportation of valuable files is done via one-time-use usb flash drives.
because there are times when identifying intellectual property thieves can be nearly impossible, one might not have the opportunity to take advantage of any legislation or treaties. however, it is good to know that programs are developing and in place to try to protect intellectual property owners, such as the trips (trade-related aspects of intellectual property rights) agreement from the wto (world trade organization). trips was designed to set standards for how intellectual property rights are protected around the world under common international rules. these trade rules are seen as a way to provide more predictability and order, and a system for dispute resolution, providing a minimum level of protection for all wto member governments. for more details on the trips agreement, see https://www.wto.org/english/thewto_e/whatis_e/tif_e/agrm7_e.htm.
as of may 2015, 36 countries place various forms of restrictions on the entry, stay, and/or residence of people who are hiv-positive. 21 in 2009, the united states removed its entry restrictions for people living with hiv, which received considerable media coverage and is believed to have influenced other countries' legislation on the matter, as the number of countries with such restrictions declined from 59 in 2008 to 36 in 2015. restrictions vary from country to country, but can be broken down into several categories. 22
reminder: although this text provides various reference materials found on the internet, there is no substitute for or comparison to the quality of medical and security intelligence created, monitored, and provided by qualified risk intelligence providers, which are at the core of employer-managed trm programs. one specific reason risk intelligence providers are important is that guidelines, laws, and requirements regularly change.
what is surprising to realize is that some of the countries from which an hiv-positive traveler could be deported, if the traveler's hiv status were known, are common destinations for many business travelers today. imagine a security check that uncovers prescription hiv treatment medication in a country with entry restrictions. this is a difficult position for employers because of the privacy concerns of employees or travelers and their medical records, which are not typically the kinds of records or information that a person shares with employers. however, just as with prescription medications that people can travel with, employers need to provide appropriate training and information to travelers going to places where hiv concerns may be an issue. while adding this kind of information on top of standard risk and policy disclosures may make for an extensive and painfully large amount of information to read and understand prior to travel, employers have a duty to provide it, and travelers have a duty to understand it and act accordingly if one or more of any disclosed travel restrictions apply to them. in some of the stricter countries with legislation that allows deportation of hiv-positive travelers, deportation often doesn't apply to travelers who are connecting or in transit only. however, employers and travelers have to decide whether or not they want to take such a chance. some countries require medical exams for those who intend to stay longer than 30 days; if hiv is discovered, doctors are required to report it to the government, and the law will be administered relative to the country in question.
at the time of this publishing, the following countries maintain strict regulations for travelers with restricted medications (see the full list in the incb "yellow list").
measuring traveler wear and tear
too much travel can burn many a road warrior out. the costs of this burnout are well known: lost productivity, increased safety risks, poor health, increased stress at work and home, unwillingness to travel, and, ultimately, increased attrition.
of the top 12 stress factors (those with scores above 60/100), nine were incorporated. the remaining 11 factors are either challenging to quantify (e.g., "eating healthily at destination") or require certain data that was not available at this time. several stress factors, such as flight delays, mishandled baggage, and traveling to a high-risk destination, require the use of external data.
stress triggers for business travel
• a leading publisher of flight information to travelers and businesses around the world
• sita (www.sita.aero)
• an intelligence-driven provider of operational risk management solutions, working with more than 500 multinational corporations and government organizations
2. having adequate medical supplies available during and after evacuation transportation.
3. an accessible method of handicap transport.
4. addressing any additional criteria needed to determine whether the disabled traveler should be transported or be sheltered in place.
   a. deciding who makes the call about whether it is safer to "stand by for assistance."
5. determining whether the transport destination is handicap accessible.
6.
determining whether the transport destination has adequate food, shelter, and supplies for any special needs.
7. determining whether employers are prepared to incur any additional costs relative to evacuating disabled travelers.
   a. determining whether adequate resources are available.
   b. identifying the risks or costs of a lack of planning.
the adoption of this convention is regarded as a milestone in the history of international drug control. the single convention codified all existing multilateral treaties on drug control and extended the existing control systems to include the cultivation of plants that were grown as the raw material of narcotic drugs. the principal objectives of the convention are to limit the possession, use, trade in, distribution, import, export, manufacture, and production of drugs exclusively to medical and scientific purposes, and to address drug trafficking through international cooperation to deter and discourage drug traffickers. the convention also established the international narcotics control board, merging the permanent central board and the drug supervisory board. article 36, penal provisions, of the single convention on narcotic drugs, 1961, as amended by the 1972 protocol amending the single convention on narcotic drugs, 1961, provides:
1. a.
subject to its constitutional limitations, each party shall adopt such measures as will ensure that cultivation, production, manufacture, extraction, preparation, possession, offering, offering for sale, distribution, purchase, sale, delivery on any terms whatsoever, brokerage, dispatch, dispatch in transit, transport, importation and exportation of drugs contrary to the provisions of this convention, and any other action which in the opinion of such party may be contrary to the provisions of this convention, shall be punishable offences when committed intentionally, and that serious offences shall be liable to adequate punishment particularly by imprisonment or other penalties of deprivation of liberty.
b. notwithstanding the preceding subparagraph, when abusers of drugs have committed such offences, the parties may provide, either as an alternative to conviction or punishment or in addition to conviction or punishment, that such abusers shall undergo measures of treatment, education, after-care, rehabilitation and social reintegration in conformity with paragraph 1 of article 38.
unfortunately, people sometimes die while away from home on business. making arrangements to transport their remains across international borders can be complicated and expensive, as legislation and protocols vary greatly from country to country, as do the suppliers who will provide such services. don't assume that your tmc will or can handle this for you. usually these situations are handled by medical emergency or insurance providers.
the following items should be covered in repatriation-of-mortal-remains insurance:
• if passing takes place outside of a medical facility, adequate transportation (ambulance, airplane, or helicopter) equipped with proper storage and handling capabilities for the body during transport to the closest appropriate medical facility prior to international transport.
• treatment costs incurred (including embalming).
• legally approved container for shipment of the remains.
• transportation costs for the deceased and an accompanying adult to the country of residence.
• cremation if legally required (conditional).
other coverage may be included for things such as hotel accommodations pre- or posttreatment prior to the passing of the insured, but coverage will vary widely between providers. under such stressful circumstances, it is very important for the insured's family to understand the claims process and coverage, such as will payment be provided directly to suppliers for services as needed, or will prepayment be required by the family or loved ones, only to request reimbursement later? if it can be avoided, such understanding can reduce stress associated with paperwork, authorizations, and payment.
murphy, peter e. "risk management." in the business of resort management. 2009-11-16. doi: 10.1016/b978-0-7506-6661-9.50014-0.
risk management has been placed at the end of this book in affirmation of its crucial and central role in resort management, and as a prime example of pulling together the external and internal elements of parts b and c. while some may think risk management is a recent phenomenon, a result of global warming and terrorism, it has been associated with resort management for a long time and in a variety of ways. in normal business, financial risk is a regular occurrence that should be recognized and managed like other factors of demand and supply. however, with the taking-in of guests comes an extra responsibility, known as 'duty of care', where management is obliged to protect their guests from harm to the best of their ability. on the demand side, guests are often looking for excitement and the spectacular, which can put them at risk.
those who seek excitement in adventure tourism, when they challenge themselves or look for an adrenalin rush, purposely place themselves at risk, and it is up to resorts to ensure the real risk is minimized by managing the situation. even those who have not come to a resort to exert or excite themselves regularly demand spectacular views and sunsets that often require building on risky sites and in nonconformist style. the sounds of the sea and uninterrupted tropical sunsets attract resorts to the water's edge in areas where hurricanes and cyclones occur with regularity. in the mountains, similar demands for spectacular views place buildings at crests or on steep slopes where local climatic conditions are at their extreme and avalanches can occur. on the supply side, risk is present at the very start, requiring a correct interpretation of market research and feasibility studies over the 30-50-year life span of many resort investments. risk is present in the location of many resorts on the 'edge of civilization', well removed from the regular infrastructure and services that are the basis of quality service experiences. it is present in the operation of resorts where guests come to participate in challenging activities, regular sports or simply to unwind, a process that inevitably leads some of them to leave natural caution and common sense behind at home. it is not surprising that 'risk management is not just good for business, but is absolutely necessary in order for tourism and related organizations to remain competitive, to be sustainable, and to be responsible for their collective future' (cunliffe, 2006: 35). resort management risk not only involves both demand and supply considerations, it can range in scale from minor yet important internal issues, like a lack of staff in crucial situations and places, to overwhelming natural disasters or external human interventions like terrorism or financial crises.
whatever form it takes, the element of risk is ever present for resort management, and some type of management structure needs to be in place to minimize its impact on the business. if no event or business decision within resort management is risk free, a risk management framework needs to take on a statistical probability structure. tarlow (2006) has suggested a useful framework would be one that considers the probability of an event and its likely consequences. figure 11.1 provides some examples, using tarlow's suggested framework, but it should be noted that the consequences will vary according to each incident's severity and relevance to the resort's product offerings. food poisoning is a serious occurrence for a resort because it means a duty of care has failed, ruining the visit of some guests and possibly closing a restaurant; but in the overall scheme of things, it has a low probability of occurrence and low consequences in a well-run establishment. the consequences are generally limited to some temporary bad publicity, financial compensation, a revision of safety procedures and possibly new equipment. this level of risk is discussed under the heading of 'security' within this chapter. accidents present in the form of personal injury, where the probability of occurrence can be high when a resort is associated with adventure tourism or dangerous locations. duty of care is still a major consideration, but if a guest chooses to undertake a risky activity they are expected to assume some of that risk. under these circumstances resorts are expected to minimize the level of risk by preparing the site properly, instructing the guest where appropriate, and providing warning signs or professional help where warranted. this level of risk has been assigned a low-consequences ranking in that it usually applies to individuals or small groups, and through the implementation of 'risk management' these consequences can be minimized, but not eliminated.
natural disasters have been a fact of life for the resort industry since its inception, with one of the earliest recorded disasters being the destruction of pompeii by the mount vesuvius volcano in ad 79 (tarlow, 2006). natural disasters have high consequences because they cause severe damage and can destroy a resort or close it down for a long period. fortunately, their probability of occurrence is generally low. this form of risk is more difficult to anticipate, so 'crisis management' is presented more as contingency planning, preparing for the worst in order to minimize its impact, especially on the loss of life. the weather has been classified as a high-probability and high-consequence risk, because so many resorts are dependent on this feature, yet it is something beyond their control. bad weather, or even the threat of it, can reduce visits and sales, but in this era of global warming the signs of severe weather stress are starting to have an impact. the increasing number of force 5 hurricanes is raising the insurance rates of all tropical resorts, not just those affected directly. the long-term drought in australia's alpine areas is creating poor snow seasons and raising questions about the ski industry's viability in these areas. such events do not have the sudden impact of a site-specific natural disaster, but they can have a major bearing on the long-term viability of a resort business. in this regard such risks are incorporated into the overall framework of 'sustainable management', where evolving weather patterns are integrated into the long-term resource planning for a resort. at the basic level, a resort's 'duty of care' requires it to ensure the safety of its guests within reasonable limits. possible threats to a guest's safety can arise from internal and external sources. internally, the design of facilities should include safety considerations along with their functional role, whether that involves a ski lift or a hotel balcony.
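the probability-consequence framework described above (security for low/low events such as food poisoning, risk management for high-probability/low-consequence accidents, crisis management for low-probability/high-consequence disasters, and sustainable management for pervasive weather risk) can be sketched as a small lookup table; the labels are paraphrases of the chapter's categories, not tarlow's exact wording:

```python
# minimal sketch of a tarlow-style probability x consequence grid.
# category labels paraphrase the chapter; the two-level scale is illustrative.

RESPONSES = {
    ("low", "low"):   "security procedures",          # e.g., food poisoning
    ("high", "low"):  "risk management",              # e.g., adventure-activity accidents
    ("low", "high"):  "crisis/contingency planning",  # e.g., natural disasters
    ("high", "high"): "sustainable management",       # e.g., long-term weather change
}

def classify_risk(probability, consequence):
    """map a ('low'|'high', 'low'|'high') pair to a management response."""
    return RESPONSES[(probability, consequence)]
```

a real register would score probability and consequence on finer scales, but the quadrant logic is the same.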
staff should be trained to undertake their tasks safely, to keep an eye on guests and the general situation, and to note any external threats. these threats can include vandalism, theft or terrorism. the level of security will be determined by the perceived degree of threat in the local area, but some attention to this matter is necessary everywhere, for insurance and legal purposes alone. security of a resort involves three basic steps:
1. analyze and identify vulnerable areas/processes. resorts offer extensive and varied terrain with open and friendly access to welcome guests. they need to reduce the vulnerability of property and guests by identifying the weakest and most exposed elements of the property, and the riskiest regular activities of the guests.
2. establish security priorities. as figure 11.1 indicates, not all risks warrant equal attention, because their probabilities of occurrence and levels of impact on the operation will vary. in terms of basic security, a differentiation should be made between public areas and private accommodation so that the privacy and security of guest accommodation can be maintained. in the restaurants and food outlets, the storing, cooking and presentation of food must be undertaken with respect to health and safety regulations. in the most popular recreational areas, like swimming pools and play areas, there must be qualified supervision. in guest rooms there should be a smoke alarm and sprinkler system to protect guests and the resort investment.
3. organize a security system. the combination of guest enjoyment and security often requires a delicate touch in a resort environment, so as not to spoil the vacation experience. this means security should be present, but as invisible as possible. in a casino resort the presence of occasional guards can be reassuring, but most of the security surveillance is conducted by closed-circuit television (cctv).
in other resorts there may be patrols for the grounds, especially at night, alarms in sensitive areas, and instructions to staff and guests about possible dangers. in-room security is aided by instructions via notices or the local hotel channel on the television, and an in-room safe. when resorts are exclusive, in terms of being up-market, providing extensive facilities and attracting wealthy guests, they need to be particularly concerned with security, because they can become a magnet for criminal activity and litigious activity. for example, the jalousie plantation resort and spa on the island of st lucia would offer a tempting target, according to the description offered by pattullo (1996).
a security system is only as good as its staff and cannot operate effectively in isolation from general staff and guests. resorts can either hire their own security staff or outsource the responsibility to a professional company. regardless of the approach used, it must be integrated into the daily operations in such a way as to be effective yet undetected. 'management should view the cost of developing a security training program as an investment. a resort in which all employees are attuned to the safety and security concerns can create a safer environment for guests and employees and a more profitable operation in the long run' (gee, 1996: 415). there are several aspects to security planning:
■ professional security. in major resorts it is now common to hire a professional security company to provide coverage for key areas and assets. in addition to being skilled in their task, the individuals selected for resorts need to be presentable and able to interact with guests. just as disney theme park street sweepers are trained to know about their theme park and emergency procedures, resort security will find themselves called upon for directions and advice by the guests.
■ staff training. even if a resort has a professional security arm, it needs to include general staff in its security planning.
they should be knowledgeable about basic procedures and able to advise guests. they should learn to keep their ears and eyes open for trouble.
■ records and reports. recording what happens is vital because it can help identify danger spots and become invaluable evidence for insurance and litigation purposes. there are several types of reporting mechanisms: daily activity report, general incident report, loss report, accident report, and monthly statistical report.
failure to fulfil the 'duty of care' responsibility may result in a security-related liability lawsuit. in a suit alleging negligence, the plaintiff (the accuser) must show the defendant (the resort) failed to provide 'reasonable care' regarding 'foreseeable' acts or situations. judges often apportion blame for an accident; that is, a certain percentage of the blame is seen as the responsibility of the operator and the remainder the responsibility of the guest. damages associated with negligence can be of two types:
1. compensatory damages: to compensate for loss of income, and for pain and suffering.
2. punitive damages: to inflict punishment for outrageous conduct and to act as a lesson for others (setting precedent).
examples of the legal consequences of insufficient attention paid to 'duty of care' abound, but in many cases involving private companies the cases are settled out of court with minimum publicity. to illustrate what can happen with regard to apportioned blame and the challenges in safeguarding tourists, the following two published accounts of australian cases are presented. the first involves a young man who, like many before him, went to swim at a local waterhole in the murray river.
he dived from a log in the waterhole and struck the riverbed, suffered permanent spinal injuries, and sued the berrigan shire council and forestry commission of nsw for a$8 million in damages. 'both defendants denied liability, but agreed that damages should be assessed at a$8.2 million' (gregory and hewitt, 2005: 6). in the original judgement, the presiding judge reduced the assessment by 32 per cent to take account of the plaintiff's share of responsibility, through contributory negligence. for the remaining a$5.6 million, he ordered the council to pay 80 per cent and the commission 20 per cent. as often happens in such cases involving large sums, this 2004 judgement was appealed. in 2005 a new judge upheld the decision but placed all the blame and financial responsibility on the council. in his summary the new judge said: council employees were aware of people diving from the log, and of the changes to the riverbed that floods could cause. he said the council had the means and opportunity to put up warning signs and, in the longer term, to remove the log. justice nettle said the council owed (the plaintiff) a specific duty (of care) to take reasonable steps to guard against the risk of harm resulting from the use of the log for diving. but he said the commission had a very different charter and purpose (responsible for managing the forest alongside the river), and arguably no actual knowledge of the use of the log (as a diving platform). (gregory and hewitt, 2005: 6)
the second case involves a man who, on a warm day in 1997, went to sydney's bondi beach for a swim and, like a responsible australian, waded into the sea 'between the red-and-yellow "safe swimming" flags, (where) he dived under a foaming wave and collided with a sand bar' (feizkhah, 2002: 46). by 2005, the plaintiff, who is now a quadriplegic as a result of this incident, claimed: waverley council's life-guards should have put the flags in a different spot or installed warning signs.
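the apportionment arithmetic in the murray river case can be verified directly:

```python
# checking the damages arithmetic reported for the murray river case
assessed_damages = 8.2e6          # agreed assessment, a$
contributory_negligence = 0.32    # plaintiff's share of responsibility

net_award = assessed_damages * (1 - contributory_negligence)  # a$5,576,000 (~a$5.6m)
council_share = net_award * 0.80      # original judgement: council pays 80%
commission_share = net_award * 0.20   # commission pays 20%

print(f"net award: a${net_award:,.0f}")
print(f"council: a${council_share:,.0f}; commission: a${commission_share:,.0f}")
```

the 32 per cent reduction leaves a$5,576,000, which matches the "a$5.6 million" figure reported in the account; the 2005 appeal then shifted the commission's 20 per cent share onto the council as well.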
perhaps 'a fellow diving and a cross through it or some words saying "sand banks"'. last week a jury agreed, ordering the council to pay (the plaintiff) a$3.75 million. (feizkhah, 2002: 46) this judgement not only cost a local council a great deal of money, it sent shock waves through the industry because it meant standard procedures had failed to demonstrate sufficient 'duty of care' in the eyes of the law. the wider implication of this case and others like it has been an increase in claims for negligence and a rise in public liability insurance. feizkhah (2002: 46) reports: between 1998 and 2000, the number of public liability claims australia-wide rose by 60% to 88 000; total payouts rose by 52% to a$724 million. most claims are settled out of court for less than a$20 000. but, 'there is a jackpot mentality, where people with minor injuries see reports of big payouts and see if they can get something too'. one of the most affected tourism activities in this regard has been adventure tourism, an activity closely associated with resorts whose owners, such as international chains and public companies, are often viewed as possessing deep pockets. claims in these activities and areas have been increasing over a long period and have been associated with rising public liability insurance costs, to cover not just recorded claims but also the broader costs of global insurance increases. given its importance as a major attraction for many resorts and as a prime source of risk and insurance claims, adventure tourism deserves special attention. depending on the level of risk and size of insurance claims, it can vary from a general security issue to a risk management issue. as can be seen in table 11.1, representing the insurance claims for a whole state, the number and amounts are relatively minor, although they can be crippling for small businesses with limited resources.
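the percentage rises reported by feizkhah imply the 1998 baseline figures, which can be back-calculated; a sketch (the 1998 numbers are derived here, not quoted in the source):

```python
# back-calculating the implied 1998 baseline from the reported rises
# (feizkhah, 2002): claims rose 60% to 88 000, payouts rose 52% to a$724m
claims_2000 = 88_000
payouts_2000 = 724e6                 # a$

claims_1998 = claims_2000 / 1.60     # implied 1998 claim count
payouts_1998 = payouts_2000 / 1.52   # implied 1998 payout total, a$

print(round(claims_1998))            # 55000
print(round(payouts_1998 / 1e6))     # 476 (a$ millions)
```

so the reported rise corresponds to roughly 33 000 additional claims and some a$248 million in additional payouts over two years.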
when accidents and claims involve major incidents, with extensive pain and suffering, loss of income and possibly life, there is a need for more extensive risk management and control, as will be discussed in the next section. adventure tourism is a very general term and hence a very inclusive subset of tourism, including a large array of activities. the term implies excitement and a change from normal daily life by pursuing an activity in a different environment. adventure tourism can take many forms because three dimensions have been linked to its structure (page et al., 2005). these dimensions involve the following characteristics, with an indication in brackets of who is the major player:
■ active-passive dimension (guest). the amount of physical effort a person is prepared to put into the activity is a major feature of adventure tourism. in an active situation the guest is an active participant who, with or without the help of a guide or instructor, is looking for excitement and an adrenalin rush. in a passive situation the guest is a spectator or observer, one who wishes to learn more about the world around them rather than about themselves and their personal limits.
■ hard-soft dimension (business). these are the categories applied by the industry and relate more to the degree of preparation and pampering the guest requires for the activity. a 'soft' activity is one where the guest is able to view the scenery or wildlife from a safe vantage point with low risk of injury. a 'hard' activity is more risky because the guest participates directly in the activity in order to obtain that adrenalin rush, and requires more individual attention, before (preparation) and during (guidance) the activity.
■ high risk-low risk dimension (business-guest). this is where guest perception and business management create the preferred and real tourism experience.
beedie (2003: 206) notes correctly that a paradox has been created, 'whereby the more detailed, planned and logistically smooth an adventure tourist itinerary becomes, the more removed the experience is from the notion of adventure'. this helps to explain why 'injury rates do not necessarily conform to the notion of perceived risk', with some soft activities having substantial injury and death rates while some hard activities have far fewer than commonly expected. in bringing the guest's desires and expectations together with the resort's prepared and staged offerings, adventure tourism has been seen as a natural business opportunity by some (cloutier, 2003) and a commodification of the human spirit by others (beedie, 2003). regardless of which interpretation is correct, for resorts it provides a varied and profitable business mix (figure 11.2). many of the references quoted in this section have extensive lists of activities cited as adventure tourism, but most shy away from classifying them, because whatever groups are selected cannot be mutually exclusive given the three-dimensional characteristics and the varying conditions under which they operate. for example, skiing's rating would be influenced by its location, whether it be in mountain areas with steep slopes and tricky runs or on the gentler beginner slopes attached to some resorts and urban centres. it would also depend on whether we are considering downhill or cross-country skiing, and of course on the skill and experience of the individual skier. when resorts present hard adventure tourism products they face the challenge of providing an exciting, adrenalin-provoking experience in the safest way possible. in many tropical resorts scuba diving is one of the key attractions, and although not physically demanding it does require a reasonable level of health and fitness.
wilks and davis (2000: 596) observe, however, that a review of 100 consecutive scuba diving deaths found 'that in 25% of the fatalities there was a pre-existing medical contraindication to scuba diving', and those people should not have risked a dive. in this respect most dive companies rely heavily on personal honesty when guests fill in preliminary medical questionnaires, and at times this trust is abused through bravado and peer pressure. at other times the bravado and peer pressure can be laid at the door of the adventure tourism company. this was one accusation and explanation offered in relation to the canyoning tragedy in switzerland that claimed 21 lives in 1999 (head, 1999: 3). according to morgan and fluker (2000: 2): clearly the risks associated with this incident were beyond the capabilities of participants. importantly, newspaper reports of the interlaken tragedy speculated that early warning signs of danger had been ignored by the activity's guides. expert opinions of experienced river guides were also reported. these reports expressed serious concerns that the interlaken river guides employed by the adventure company may have been under pressure to put profits before safety, this being compounded by their lack of knowledge of local conditions. the judge apparently agreed, stating the 'safety measures taken by the now defunct adventure world were totally inadequate - with no proper safety training for the employees' (bbc news, 2001) - something that should have been undertaken beforehand, as part of a risk management process. risk management is a way in which to prepare for the security, risk and crisis issues outlined above. it is becoming a significant aspect of resort management given the adventurous nature of many resort activities, their exciting locations, the growing litigious nature of customers, and the growing threat of terrorism.
risk management should incorporate the following, which involves considerable overlap with the previous security planning. the difference is that risk management is more comprehensive, including environmental concerns and financial considerations as well as human safety. the unique characteristic of adventure tourism, and of those resorts offering that type of product, is that 'participants are deliberately seeking and/or accepting the chance of sustaining physical injury' (morgan and fluker, 2000: 4). this means that for adventure clients perceived risk becomes an important part of the adventure experience, while for the commercial operator the actual and managed level of risk is the real risk, as shown in figure 11.2. when guests pay money for the specialized knowledge, skills and equipment of the commercial provider, 'they reduce their need for risk awareness and responsibility. this transfer of risk responsibility to an activity operator, arising from the tourist's financial consideration (contract), raises a number of legal and ethical issues' (morgan and fluker, 2000: 3). the legal issues revolve around duty of care and individual responsibility; the ethical issues include the paradox that 'accidents can add to the allure of the adventure experience through providing a valid testimony of the risk' (morgan and fluker, 2000: 9). risk management is a rational approach to dealing with real risk. it is about managing risk rather than eliminating it, because as we have seen some degree of perceived risk is inherent in adventure tourism and many resort locations. but 'it is important to grasp the concept that the level of risk management applied is relative to the tolerance of a specific business and its guests for risk, which can vary substantially from one operator to another' (cloutier, 2000: 96).
gee (1996: 437-440) has identified a general process that can assist resorts in their management of risk for both adventure tourism and general operations, which consists of four steps. risk is associated with all aspects of business, and like the adventure tourist that is part of the thrill for many entrepreneurs and business people. adventure tourism operations must be identified in terms of their real risk, and even when they are outsourced to separate organizations with their own liability insurance, their professionalism and record will still impact on a resort's reputation and business. asset risks involve the major investment in property and facilities that need to be protected. identifying areas of particular danger and hazard is an important first step, such as the build-up of undergrowth and leaf-litter in woodland areas; the presence of currents or steeply shelving beaches along the beach-front; the physical dangers associated with wastewater treatment plants and with electrical substations; and the ever-present danger of fire when people are relaxed and having fun. income risks are a major concern for resorts, which have a high dependence on external conditions often beyond their immediate control. anderson (2006: 1290), in the introduction to her article on crisis management in the australian tourism industry, lays out a catalogue of disasters that have befallen that country's tourism over the past 20 years:
■ 1989 - pilots' strike
■ 1991 - gulf war
■ 1997 - asian economic crisis
■ 2000 - dot com crash
■ 2001 - collapse of hih insurance company (which was the major public liability insurer in australia, and with its demise there were major increases in insurance premiums for everyone); world trade centre attacks; demise of ansett airlines (which had a 35 per cent market share of the domestic airline business at the time)
■ 2002 - bali bombings, which killed 202 people
■ 2003 - iraq war; outbreak of severe acute respiratory syndrome (sars)
as if this were not enough, some countries could add the outbreak of foot and mouth disease, avian flu epidemics, further terrorist attacks and unreliable weather. although in australia and most countries tourism has recovered from such experiences, the industry has learned valuable lessons along the way. one has been to recognize the potential loss of business that can occur through interruption or damage, and to prepare for it through contingency planning and market diversification. legal liability risks are increasing as society becomes more litigious. resorts as businesses are responsible to their guests, employees and shareholders, all of whom are better educated regarding their rights and are more prepared and able to exercise those rights in court if need be. liability insurance is now a major cost factor that all businesses, including resorts, must consider. loss of key personnel risks are often under-appreciated until such a situation occurs and the resort discovers how much a certain individual contributed to the business' overall attraction. a key person, like a chef, an entertainer or an instructor, whose skills and special qualities are hard to replace can leave a big hole in a resort's reputation. these people should be identified and retained wherever possible, and if they are lost to illness or poaching then succession plans should be in place. to control the frequency and magnitude of losses due to risk it is essential to develop recording procedures and to create a repository of past records. detailed record keeping is a key to identifying where and when risks are occurring, and the staff who were or should have been involved. if personal injury is involved it is particularly important to obtain independent witnesses to the incident, in case there are later legal or insurance claims.
such data should be recorded in a central registry on a daily and weekly basis and deposited in an appropriate computer database. this will provide important information regarding the safety record, or otherwise, of individual operations and the total resort. such data will prove useful when negotiating liability insurance or to demonstrate the resort's actual duty of care record. 3. risk reduction. business, like life, is never risk free, so in designing and operating a resort one important emphasis is safety - for guests and staff. many of the common dangers, like food, health and safety and fire, are regulated and controlled by local by-laws or ordinances. however, such statutes often involve the 'minimum acceptable' precautions, so a resort may choose to follow the walt disney world lead and select higher standards. this will mean higher initial building costs; however, it should reduce both the associated risks and annual insurance premiums. 'one of the most rewarding loss-control projects is training personnel to think in terms of accident and loss avoidance' according to gee (1996: 439). this becomes particularly important in the operations phase and needs to be emphasized as part of the resort's duty of care. when accidents occur they become significant 'moments of truth' for the guests, and if they are handled in an empathetic and professional manner many later difficulties can be avoided. most businesses, including resorts, can absorb small and infrequent losses brought on by seasonal fluctuations or occasional accidents, but will need to transfer the risk of large business interruptions or liability claims to outside suppliers of such coverage, such as insurance companies or brokers. small losses and claims will still be a matter for management's discretion even when they have insurance coverage, due to the deductible or excess clause associated with their insurance premium.
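the central registry and record-keeping procedure described above can be sketched as a minimal data model; all class, field and location names here are illustrative assumptions, not taken from the text:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    # report_type mirrors the reporting mechanisms listed earlier:
    # 'daily activity', 'general incident', 'loss', 'accident'
    when: date
    location: str
    report_type: str
    witnesses: list[str] = field(default_factory=list)  # independent witnesses matter for later claims

class CentralRegistry:
    """daily/weekly incident records, queryable for recurring danger spots."""

    def __init__(self) -> None:
        self.entries: list[IncidentReport] = []

    def record(self, report: IncidentReport) -> None:
        self.entries.append(report)

    def danger_spots(self) -> Counter:
        # tallying incidents by location identifies danger spots, and the
        # totals document the resort's duty of care record for insurers
        return Counter(e.location for e in self.entries)

registry = CentralRegistry()
registry.record(IncidentReport(date(2005, 1, 3), 'beach-front', 'accident'))
registry.record(IncidentReport(date(2005, 1, 9), 'beach-front', 'loss'))
registry.record(IncidentReport(date(2005, 2, 1), 'ski lift', 'accident'))
print(registry.danger_spots().most_common(1))  # [('beach-front', 2)]
```

in practice the same tallies, broken down by month and operation, are what a resort would bring to the table when negotiating liability premiums.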
most personal injury and damage claims can be accommodated within these parameters and involve negotiation with the affected parties rather than court cases, but if a claim becomes a major contentious court case the insurance providers will become involved. resort owners can also transfer risk to other parties via non-insurance transfers of responsibility. in terms of major recreation equipment, like ski lifts or spas, the supplier can be encouraged to guarantee the safety of its equipment if it is properly installed and operated. this often involves the supplier installing the equipment and certifying the operators. in terms of more hazardous adventure tourism activities, the resort can sub-contract these activities to specialist operators who carry their own independent insurance policies. cloutier (2000: 104-105) provides some insights into actual risk management, as seen through the eyes of will leverette, who offers simple guidelines based on the real experience of lawsuits and consultations with adventure tourism companies. according to leverette there are six basic rules to follow:
1. develop a means to prove that guests were adequately warned and informed.
2. any guarantee of safety made in a business' literature or marketing materials is an open invitation to be sued.
3. all field staff must have current training in basic first aid.
4. the business should develop a written emergency/evacuation plan for all areas and activities to be used.
5. one good witness statement will shut down a frivolous lawsuit faster, more cheaply and less painfully than will anything else.
6. the business must use a properly drafted liability-release form. (this author's emphases)
such personal experiences are a guide as to how today's risk management is evolving into a legal discourse over duty of care, but when there is a major disruption to business through some form of disaster the emphasis changes from prevention to the rescue and recovery of crisis management.
'crisis management is an extension of risk management' according to the pacific asia travel association (pata, 2003: 3). risk management can be viewed as management initiatives designed to minimize loss through poor decision-making; but it can also be viewed as an important proactive step in reducing the dangers of catastrophic business collapse due to crisis or disaster. the pata booklet presents a 'four r' step process to crisis management:
1. reduction - detecting early warning signs;
2. readiness - preparing plans and running exercises;
3. response - executing operational and communication plans in a crisis situation;
4. recovery - returning the organization to normal after a crisis;
where risk management practices would dominate the first two steps. risk management procedures that help to identify safety and security weaknesses in an operation will not only help to minimize danger and loss, they will expose the weak points in case a crisis occurs. crisis in the literal sense represents a moment of acute danger or difficulty, which in terms of the tourism industry has been defined as: an unwanted, unusual situation for an organization, which, due to the seriousness of the event, demands an immediate entrepreneurial response. (glaesser, 2003: 8) this approach places the emphasis of crisis management on the response and recovery phases of a crisis and brings it into line with disaster planning, which has its own four-stage process of: assessment-warning-impact-recovery (foster, 1980). it is natural disasters which often trigger crisis within the tourism industry, be they earthquakes (kobe - 1995), volcanic eruptions (mount st helens - 1980), forest fires (yellowstone national park - 1988), tsunami (phuket - 2004) or hurricanes (new orleans - 2005). one should not forget that human beings can and do create their own crises for tourism and the resort industry.
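the division of labour between risk management and crisis response across the 'four r' steps can be encoded directly; a small sketch (the annotations follow the text, the structure itself is just an illustrative encoding):

```python
# pata's 'four r' crisis-management steps (pata, 2003), annotated with
# which discipline dominates each step according to the text
FOUR_R = [
    ("reduction", "detecting early warning signs",                "risk management"),
    ("readiness", "preparing plans and running exercises",        "risk management"),
    ("response",  "executing operational and communication plans", "crisis response"),
    ("recovery",  "returning the organization to normal",          "crisis response"),
]

# risk management practices dominate the first two steps
risk_led = [step for step, _, led_by in FOUR_R if led_by == "risk management"]
print(risk_led)  # ['reduction', 'readiness']
```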
wars, terrorism and political decisions can bring about dramatic declines in visitation due to the prospect of danger or the removal of access. king and berno (2002) provide a good example of this in their analysis of the impact of fiji's two coups, in 1987 and 2000, on the local tourist trade, which is heavily resort oriented. they note that: like many other tropical island nations, fiji has long-established procedures in place to deal with emergencies such as cyclones. this meant that the overall preparedness in 1987 was relatively good, with provision to contact hotels across the country with advice on how to react (king and berno, 2002: 49). such an experience demonstrates that strong synergistic links can exist between natural disaster and human crisis management. from past crisis experiences ritchie (2004) sets out a strategic framework for the planning and management of crises by public or private organizations. his model outlines three main stages in managing such incidents: prevention and planning; implementation; and evaluation and feedback. ritchie's three strategic management stages, along with faulkner's earlier crisis stages and their ingredients, have been combined with more resort-oriented issues and actions in table 11.2. if the process is divided into the three phases of pre-, actual and post-crisis it is possible to determine a clear pattern of events and responsibilities for resorts. pre-crisis for most resorts will involve some form of preparation for a future possible disaster, whether that be an 'inevitable' physical disaster like an earthquake in certain regions or a possible negative political change; such major 'unthinkable' events need to be considered and prepared for. the recommended and common approach at this phase is to develop contingency plans - plans for actions that will mitigate the disaster's effects. this involves recognizing the potential scale and frequency of the expected disaster, and planning accordingly.
in terms of the guests, all staff must be trained in emergency procedures and have a role to fulfil. in most cases there will be an obvious overlap with regular security measures such as fire drills, but in terms of major disaster preparation additional factors will need to be considered and plans prepared. for human-created crises the degree of notice may be shorter than for natural disasters, but there will usually be a warning period of political unrest. in this case the experts will be political commentators and the global news services, and it will be up to owners and managers to keep abreast of their regional situations. more governments are becoming involved in this scanning process on behalf of their travelling citizens, and are offering travel advisories. for many resort destinations these advisories have both positive and negative features. on the positive side they provide up-to-date and comparable risk assessments for tourists; on the negative side they are susceptible to political influence and may not be as objective as one would hope, and they paint the risk picture with broad strokes, so that relatively safe enclaves become included in the national summary. glaesser (2003: 131-132) provides an example of how one crisis, the bali bombing of 12 october 2002, produced various levels of advisory notice in europe, ranging from low-level general security advice to warnings against travel under any circumstances. during the actual crisis there is going to be chaos and confusion, and it is the prepared and cool-headed who will prevail. hence the key management task at this stage is to have staff able to implement the contingency plan and empowered to show initiative when conditions do not exactly follow the predicted pattern of events. a major disaster or crisis is likely to affect more than one resort or destination, so collaboration with other tourism organizations and government agencies will be essential.
hopefully in the imminent stage all or most guests will have been evacuated, with the help of public agencies and industry partners, to safer areas, but generally a skeleton staff of key personnel will be required to stay on site as long as possible to ensure the safety of assets. one of the biggest challenges during this period will be media relations, for in today's global village a disaster or crisis attracts the attention of the world and a media frenzy erupts. glaesser (2003: 223), in his discussion of communications policy with respect to a crisis, states 'the principle (sic) task is to convey information with the aim of influencing and guiding consumer behaviour, opinions and expectations'. to achieve this he advocates that the affected organization create a quick understanding of the situation and a transparency in its preparation and response, to build the credibility of the business. he recommends (glaesser, 2003: 230-231) a communication process that follows this sequence:
1. portray the dismay and responsibility of the organization.
2. describe the decisions and measures introduced to cope with the crisis.
3. indicate, based on the current experience, what further measures will be introduced to avoid future repetitions.
in the case of major disasters and crises, resort destinations and businesses will need to collaborate with central and regional governments, for it is they alone who can mobilize the resources needed to handle major catastrophes. the evacuation of guests to other areas will require bus and truck transportation and possibly air evacuation. the chaos and confusion of a crisis provide the opportunity for crime and lawlessness, so governments will need to bolster regular police forces with federal police and possibly troops. in the case of the new orleans hurricane of 2005 such a government response was widely criticized for its slowness and inadequacy.
since much of the world's resort business occurs in the developing world, when a major disaster or crisis occurs the host communities often need international assistance. it is at such times that the international community has revealed its better nature, ignoring old differences to come to the aid of fellow humans suffering from the ravages of nature or the actions of a few. unfortunately, the generosity of individuals, charities and nations is not generally put to the best use. this is partly because many receiving nations have neither their own contingency plans for such disasters nor the organization and infrastructure to handle a major in-pouring of generosity, and partly because some of the recovering nations have been susceptible to pilfering and corruption, which results in funds and materials not reaching the designated destinations and sufferers. given the occurrence of irregularities and disappointments with various international aid programs, it is not surprising that the recovery period is often much longer and more difficult than many expect. the post-crisis phase occurs when the actual crisis has abated and is out of the international news headlines, and represents the business and community efforts to return to pre-crisis conditions. as many students of disaster/crisis management indicate, this is not only the time to get back to business as quickly as possible, it is an opportunity to redevelop and learn from the mistakes of the past. an immediate concern is to take advantage of the media attention that will have placed the resort destination in the world headlines, by demonstrating that the crisis has passed and life is returning to normal. it is quite likely that the global public has been presented an inaccurate and exaggerated picture of local devastation, which needs to be remedied as quickly as possible.
for example, when hurricane iwa struck the hawaiian island of kauai in november 1982 some international news media reported that part of the island had sunk and thousands of people had died, which was far from the truth, although the hurricane caused a great deal of physical damage (murphy and bayley, 1989: 38). such stories need to be refuted, and information about the undamaged areas and the recovery put in their place wherever possible. when it comes to repairing the damage, an opportunity is presented to upgrade a resort's infrastructure and facilities. there will always be improvements (real or imagined) that the consumer society has developed since the original building of a resort destination which can and should be integrated into the new resort. there will also be lessons learned from the disaster or crisis that can be incorporated into the design of the new resort, which will make it more disaster-proof in the future. one example of how a disaster can be turned into a positive for tourism occurred with the mount st helens eruption. prior to the eruption mount st helens was a relatively quiet tourist attraction, appealing mostly to outdoor recreationists and offering mainly basic facilities. the publicity of the 1980 eruption, including dramatic television coverage, increased interest in the volcano to the extent that 'a national monument was created by setting off 110 000 acres from the existing national forest to commemorate the eruption . . . a new visitor center was opened in december 1986, with illustrations and other graphics that depict the events and the subsequent natural regeneration of the devastated areas' (murphy and bayley, 1989: 42). this new visitor centre has good access to the local interstate freeway, and now many more visitors are drawn to the area, incorporating a wider range of tourist types and market segments than before.
as has been mentioned, life is a risk, and as we modify the earth's environment we are creating new and uncharted conditions that will bring increased risk. signs that business conditions are changing come in various guises. nature seems to be going through a period of instability, with evidence of climate change beginning to take on more significance as scientific evidence points to a global shift in weather patterns rather than normal climatic cycles. climate change is a serious and urgent issue. while climate change and climate modeling are subject to inherent uncertainties, it is clear that human activities have a powerful role in influencing the climate and the risks and scale of impacts in the future. all science implies a strong likelihood that, if emissions continue unabated, the world will experience a radical transformation of its climate. (stern, 2006: 17) even the conservative periodical the economist has come to the conclusion that 'the chances of serious consequences are high enough to make it worth spending the "not exorbitant" sums needed to try to mitigate climate change' (the economist, 2006b: 5). resort tourism is particularly vulnerable to climatic change, given that many resorts are located in high-risk areas like mountains and tropical beaches, and that the industry uses high levels of energy in drawing in its guests and high volumes of on-site water in keeping them happy. war has been joined by global terrorism as a major disruption and deterrent to travel, with tourists seen as 'soft targets' bringing maximum exposure to the terrorist cause. an ever-crowded world with over-stretched medical systems appears to be waiting for the next pandemic, with the recent outbreaks of sars (2003) and avian flu (2006) revealing a certain lack of control and openness in dealing with global health crises. under these circumstances one of the most direct business signs of change has been the dramatic increase in insurance premiums that everyone seems to have faced in this new millennium.
increased liability insurance has put some single-owner peripheral tourism operations out of business, regular security insurance as well as liability insurance has risen dramatically for resorts, and resort destinations in vulnerable locations are facing either dramatic increases in premiums or the loss of direct insurance. after a disastrous 2005, with insured losses of $55 billion in the us, of which $38 billion was caused by hurricane katrina, american insurers 'are cutting back their exposure in coastal areas . . . home owners who can get insurance coverage face sharply higher rates. some premiums have risen by as much as 200% . . . many residents cannot get private coverage at all. as a result, state-backed insurance plans, meant to provide coverage as a last resort, are being inundated' (the economist, 2006a: 74). this is only the tip of the iceberg according to figures provided by winn and kirchgeorg (2005: 245), who quote table 11.3 from the topics geo 2003 report, which shows 'the number of natural catastrophes rose nearly five fold (and) economic losses nearly 16 fold over the last five decades'. winn and kirchgeorg use such information to suggest that business in general needs to rethink its strategic approach to the environment and sustainable development. they view past and present management interest in environmental management and sustainable development as an 'inside-out' approach, 'one in which the primary perspective is to look from the firm out at the external environment' (winn and kirchgeorg, 2005: 240) that includes ecological and social considerations.
but given the dramatic external forces in nature, politics and health which are leading to new levels of uncertainty in the ecological and societal realms, they see the need for 'a radical departure from the inside-out perspective of environmental management and its more systems theory-informed cousin from sustainability management' to one where sustainability management should be 'expanded and complemented, and may even need to be substituted by conceptual frameworks fairly new to organization theories, such as "resilience management" or "discontinuity management"' (winn and kirchgeorg, 2005: 250). this is because if we are facing significant shifts in environmental and political conditions, the balancing nature of sustainable development will no longer apply in an unstable world. rather, business will need to take on board the possibility or probability of structural shifts and the prospect of facing several global emergencies during their lifetime. 'since ecological global systems cannot be affected significantly by actors in the short-term (the inside-out approach), broader adaptive behaviors that secure the survivability of the economy and society become increasingly relevant. crisis management, risk management, and emergency responses need to be supplemented with long term management for survival' (winn and kirchgeorg, 2005: 252). to put such thoughts into practice will require guidelines that incorporate all the knowledge that has been accumulated to date on risk and crisis management, supplemented by a broader and more collaborative approach to business survival and sustainability. given the noted exposure of tourism to the environmental, political and medical forces that seem to be in flux, it is not surprising to find some are already thinking along these lines.
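the figures quoted from topics geo, a roughly five-fold rise in catastrophe counts and a 16-fold rise in economic losses over five decades, imply quite different compound annual growth rates. the following sketch (our own back-of-envelope check; the function name is an assumption, not from the source) makes the arithmetic explicit:

```python
def annualized_growth(multiple: float, years: float) -> float:
    """Compound annual growth rate implied by a total multiple over a period."""
    return multiple ** (1 / years) - 1

# topics geo 2003 figures quoted by winn and kirchgeorg (2005):
# catastrophe counts rose ~5-fold and economic losses ~16-fold over five decades.
events_growth = annualized_growth(5, 50)    # ~3.3% per year
losses_growth = annualized_growth(16, 50)   # ~5.7% per year
print(f"events: {events_growth:.1%}/yr, losses: {losses_growth:.1%}/yr")
```

the 16-fold rise in losses works out to a markedly higher annual growth rate than the event count, consistent with the point that economic exposure is growing faster than hazard frequency alone.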
santana (2003: 304) considers that if decision-makers acknowledge that in these complex and unpredictable times in which we live and operate anything is possible, including a major crisis that may prove devastating to their organizations, management will be in the 'right frame of mind' to accept the contention that forms the basic foundation of crisis management: proper advance planning. as santana points out, with the increasing impact of external forces on tourism operations, crisis should be looked upon as an evolving process in itself, one that develops its own logic and consequences, rather than be treated as an isolated event. 'it is the degree to which management heeds the warning signals and prepares the organization (that) will determine how well it responds to the impending crisis' (santana, 2003: 310). likewise, hall et al. (2003: 2) note that tourism and destinations are 'deeply affected by the perceptions of security and the management of safety, security and risk'. they think the concept of security has broadened significantly since the end of the cold war, with a dominant single political enemy being replaced by terrorism, wars of independence, indigenous rights, and religious differences. 'security ideas now tend to stress more global and people-centered perspectives with greater emphasis on the multilateral frameworks of security management' (hall et al., 2003: 12). one of the few at this point to provide practical guidelines for sustainable crisis management in tourism is de sausmarez (2004). de sausmarez maintains that many future crises will require careful detection and collaborative efforts to minimize their impact, and she has outlined a six-step approach to tackle such major external threats.
we, as students of tourism, know tourism and resorts are important socio-economic functions for many people, communities and nations, but we cannot assume others give tourism the same degree of importance in the bigger picture and more comprehensive view of events, including crises. hence, the first step in the establishment of a national or regional crisis management policy is to determine and demonstrate the relative importance of the tourism and resort sectors. until this has been done, it is impossible to prepare any sound strategy for the response to a crisis. this was well illustrated by sharpley and craven (2001), who show that even though tourism contributes substantially more to the british economy than agriculture does, the british government's response to the foot and mouth crisis in 2001 was to slaughter rather than vaccinate animals and to close the countryside to visitors, moves which favoured the agricultural sector rather than the tourism sector and cost the taxpayer substantially more than was necessary (de sausmarez, 2004: 4). it is only when tourism in general and the resort component in particular are shown to be significant local and regional socio-economic activities that governments and planners will consider them seriously and integrate their needs into macro-crisis management planning. if resorts and tourism are to integrate crisis management with their sustainable development philosophy they will need to identify the anticipated areas of greatest risk. in the literature and this chapter the emphasis has been on natural disasters, which are essentially supply side characteristics as they change or eliminate the attractiveness of a destination.
however, just as important are demand side characteristics such as international political relations affecting visa requirements, economic conditions affecting the ability to travel, world health and safety, and competition from other destinations and leisure activities. although none of these supply and demand risks will fall under the direct control of resort management, knowledge of their existence and development will be essential for future strategic planning, and should be used to lobby government. while it is important to scan the environment continuously, it is also important to be able to measure trends in a relevant and timely manner. the evidence of global warming is building momentum, but it is often sending out confusing and at times conflicting information, little of which has any bearing on a single location or site. managers and owners of resort properties need to know what this impending crisis means for them specifically. de sausmarez (2004: 7) maintains tour operators and travel agents, along with government agencies, are in a 'strong strategic position to monitor and assess changes in the tourism status quo as they have access to data on both supply and demand'. she notes that the world tourism organisation's (1999) recommendation after the asian financial crisis of 1997 was that destinations should develop three categories of indicators to warn of impending crises:

i. short-term indicators of up to three months, that include advance bookings from key markets, or an increase in the usual length of time needed to settle accounts.

ii. medium-term indicators, with a lead time of 3-12 months, such as that needed for tour operator allocations and take-up to be recorded.

iii. long-term indicators, with a lead time of a year or more, that include planned capacity developments, international currency valuations and trends in gdp, interest rates and inflation in key markets.

to which sustainability crisis management would add a fourth category: iv.
future indicators, with a lead time of 10-50 years, which covers the life of most mortgages and leases and provides sufficient time to determine whether the current climatic experiences are long-term phenomena or cyclical aberrations. these indicators would subjectively convert the environmental, political, business and health trends into local and more useful indices. they would be subjective because it would depend on local knowledge to disaggregate the global information meaningfully, and that process would be influenced by the outlook of the assessor, be they an optimist or pessimist. the type of global crises that tourism may be facing will be sufficiently large scale and evolving that they will require collaboration to implement an effective management strategy. this means that responsibilities and coordination plans need to be drawn up at an early stage and should cover three essential areas: a speedy response, appropriate measures in terms of the local needs of impacted areas, and communication and coordination between different levels of jurisdiction and different sectors. 5. the development of a crisis plan. the development of a national crisis management plan is itself an example of macro-level proactive crisis management (de sausmarez, 2004: 9), and a considerable achievement in itself. such plans need to be flexible as we can expect a series of different crises in the future, with varying local and regional impacts within national jurisdictions. an important part of any large scale crisis management plan will be media relations, relying on the various forms of media to distribute the relevant information as quickly and effectively as possible and being transparent about the severity of the crisis and remedies being undertaken. in some countries there may already be plans in place to cope with anticipated natural crises such as cyclones (fiji) or earthquakes (pacific north west of america) that can be extended to include other forms of possible crisis.
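the four indicator categories outlined above (the wto's three plus the proposed 'future' band) amount to a simple lead-time classification, which can be sketched as follows. this is purely illustrative; the band ceilings (notably the 10-year cap on 'long-term') and all identifiers are our assumptions rather than anything prescribed by de sausmarez or the wto:

```python
from dataclasses import dataclass

# lead-time bands paraphrase the wto (1999) categories quoted by de sausmarez
# (2004), plus the fourth "future" band proposed in the text; illustrative only.
BANDS = [
    (3, "short-term"),    # up to 3 months: advance bookings, settlement delays
    (12, "medium-term"),  # 3-12 months: tour operator allocations and take-up
    (120, "long-term"),   # 1-10 years: capacity plans, currencies, gdp, rates
    (600, "future"),      # 10-50 years: climatic trend vs cyclical aberration
]

@dataclass
class Indicator:
    name: str
    lead_time_months: int

def classify(indicator: Indicator) -> str:
    """Assign an indicator to a warning category by its lead time in months."""
    for ceiling, label in BANDS:
        if indicator.lead_time_months <= ceiling:
            return label
    return "out of range"

print(classify(Indicator("advance bookings from key markets", 2)))  # short-term
```

a destination monitoring unit could run every tracked indicator through such a scheme so that each warning category has a named owner and review cadence.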
such plans would need regular re-assessment by government departments and the private sector, but can build an invaluable data bank and procedural map. in an asian context de sausmarez (2004: 11) feels communication and inter-agency cooperation needs to overcome the perceptual association between a crisis and 'loss of face', as is claimed to have occurred with the sars outbreak in east asia in 2003. in terms of global warming and its associated crises, no single country or person is to blame; we all need to take joint responsibility. 6. the potential for regional cooperation. although de sausmarez focusses on the creation of national crisis management she recognizes that global issues like climate change require an even larger operational scale. no single country can be isolated from its neighbors, as 'was clearly illustrated by the decline in tourism to southeast asia following the bali bombings in october 2002' (de sausmarez, 2004: 12). she points out that the combined effort of the association of southeast asian nations (asean) was able to effectively contain the sars epidemic through regional preventative action in 2003, and that success has been repeated with the later outbreak of avian flu in east asia. the six steps outlined by de sausmarez do not follow a natural linear sequence, but should be viewed more as a continuous dynamic process which has been divided up to permit closer examination and appreciation of its component parts. the whole process depends on continual learning and adjustment, so as to be responsive and flexible in the face of future crises. for resorts which have survived as a separate form of tourism since the early days it becomes imperative to embrace risk and crisis management as a central part of their business strategy. this chapter has discussed the growing importance of risk management to resorts as our business and natural environments have changed.
although financial risk has been a constant within business it is only in recent times, with the rise of a litigious society and a less stable natural environment, that it has become a more general and important issue for management. its increased prominence in business and society now means resorts should make it a key feature of their strategic management, and possibly their central concern. risk management does make a logical central theme for resort management in that it provides a focussed context for its past, present and future directions. the past experiences and business literature (part a) provide a guide as to what management may expect today, and the general level of risk associated with most options. present management must consider both external factors (part b) and internal strategies (part c) to create the most viable and sustainable options for today's resorts. the future can be extrapolated from the present if global business conditions change slowly and in a familiar manner, as predicted in the forecast for a growing senior's market for resorts. but what if we are experiencing major changes to the physical well-being of our planet and in human behaviour to one another? the risk factors that we have calculated from the past and present may no longer work so well or even apply, and we will need to enlarge our risk management process to incorporate more fluid business and environmental situations (part d). the purpose of this chapter and book is to encourage the reader to think about the wonderful legacy that has been provided by resorts; how we should strive to ensure resorts continue to delight our senses and educate us about our planet and its various cultures; and how we can achieve this through appropriate business management, even in this era of global change. 
a risk management focus would not only assist with the general sustainability objectives of a resort business, it can help position resorts at the forefront of monitoring and adjusting to the predicted changes in our natural and human environments. the chapter and book close with an examination of one recent global crisis, which had a direct impact on resorts throughout a large area of the world, and one resort which learned a great deal from one hurricane season. the indian ocean tsunami of 2004 illustrates how we can still be caught unawares by a natural disaster, how such disasters can become international in scale, and, thanks to rising sea levels, may have even more significance in the future. walt disney world resort received its first direct hurricane hit in more than 40 years in mid-august 2004, only to have it followed by three others in the space of six weeks. in the process it learned some invaluable lessons. the indian ocean tsunami, also known in some quarters as the boxing day tsunami, occurred on 26 december 2004. this tsunami was generated by an earthquake under the indian ocean near the west coast of the indonesian island of sumatra, and is estimated to have released the energy of 23 000 hiroshima-type atomic bombs. by the end of that day 'more than 150 000 people were dead or missing and millions more were homeless in 11 countries, making it perhaps the most destructive tsunami in history' (national geographic news, 2005: 1). these figures were subsequently revised upward, so that now the indian ocean tsunami is estimated to have 'left 216 000 people dead or missing' (guardian unlimited, 2006: 1). if this terrible natural disaster is examined using the threefold strategic action template of table 11.2, certain key crisis management lessons emerge. at the pre-crisis stage little formal preparation had been undertaken at either government or resort levels of responsibility.
this is understandable because there had been little history of major tsunamis in the indian ocean, the last being associated with krakatoa's eruption in 1883, but unforgivable because tsunamis remain a risk in oceans with volcanic and tectonic activity. what was missing was both an early warning system of seismic buoys and a way to convey that information to potentially threatened areas, so they could instigate evacuation plans. while the 'pacific tsunami warning centre in hawaii had sent an alert to 26 countries, including thailand and indonesia, (it) struggled to reach the right people. television and radio alerts were not issued in thailand until 9 a.m. local time, nearly an hour after the waves had hit' (global security, 2006: 3). in this case there had been no regional forecasting or risk analysis and there was no internationally coordinated contingency plan to deal with such a situation. the result was that even if a coastal resort had its own evacuation plan there was nothing to trigger it until the arrival of the first wave, and by then it was too late. the actual crisis stage was viewed by millions of us around the globe, as we were able to view tourists' video camera images of this spectacular and unusual sunday morning feature on our television screens. the world press immediately brought us these graphic images to go along with the rising death and damage statistics, so that once again the selective reporting of a natural disaster convinced many that the whole region had been devastated. this was particularly the case with phuket island, where the images of destruction at patong beach on the west coast were transformed to represent the whole island in the minds of many, even though phuket is a large island with many separate resort enclaves scattered around its varied shoreline and many of them were untouched by the tragedy.
this confirms the need for control over communications, to ensure reporting remains factual and in proportion, rather than sensational and exploitive. the post-crisis stage represents an opportunity to learn from the crisis and to rebuild. this is certainly the case with the indian ocean tsunami. the biggest weakness was the lack of information and warning, which prevented the implementation of effective contingency planning. this is now being addressed with the building of the indian ocean tsunami warning system in 2006. this system has been coordinated by the united nations educational, scientific and cultural organization (unesco) and consists of 25 new seismographic stations, supplemented by three deep-ocean sensors to provide the required early warning. but this is just the start, for the information needs to get to the areas around the indian ocean that are likely to be affected and the people in those areas who need to know what actions to take. therefore, unesco is continuing to work on international coordination and with governments to provide grassroots preparedness (terra daily, 2006). unesco is providing expertise to assist with mangrove, sea grass and coral reef rehabilitation; it is strengthening disaster preparation for cultural and heritage sites and integrating this into its reconstruction processes; and it is teaching tsunami awareness in schools, training decision-makers and broadcasters and staging local practice drills. the recovery is well underway around much of the indian ocean. in phuket, where the damage was highly localized, patong beach showed no outward sign of the tsunami by october 2005, when the author paid a visit.
the local tourism industry and english newspaper reported that while business had been slow in the months immediately following the tsunami, things had started to pick up around june, and 'we expect it will be 80 per cent to 90 per cent from new year (2006) to the end of march (high season)' (phuket gazette, 2005: 3c). another example of crisis recovery is provided in the maldives. like all low-lying islands, the maldives are particularly susceptible to this form of disaster; thousands of local inhabitants lost their homes and 82 were killed in the tragedy. however, only two tourists lost their lives, and although most resorts were damaged their 'higher construction standards (meant they) withstood the waves much better than local housing did' (travel wire news, 2005: 2). consequently, it did not take most resorts long to rebuild and re-open, but in the process local businesses and government wanted to be better prepared for the future. five months after the tsunami swept across these islands in the indian ocean, the tourism sector and government agencies are cooperating to ensure that low-lying resorts and the nation's airport are better equipped to handle any type of emergency. (travel wire news, 2005: 2) among the changes proposed are improving communications through the installation of satellite telephones on each island and a centralized emergency information command. new resort regulations will require evacuation plans and emergency supplies. a higher seawall around the airport and safeguards for electrical power supplies are also being considered. these and other accounts of the indian ocean tsunami indicate the challenges facing the resort sector with today's concerns over global warming and the negative impact of news coverage for such disasters.
major tsunamis are fortunately rare events, but this case has demonstrated the need for some international warning system, so that regional and local contingency plans can be put into operation to minimize the impact. this will clearly require coordination at government levels and the will to maintain vigilance and training over long time periods between natural disaster events, something that will test human nature to the full. one also has to ask, if future tsunamis are associated with the rising sea levels of global warming, will such improvements be enough? this is the type of question that some academics and researchers are asking us to consider, and should certainly be examined in terms of the sustainability and risk management of many resorts and their relevance in an era of possible climatic shifts. this case is based on an article by barbara higgins (2005), who was director of operations integration for walt disney world resort when four hurricanes impacted the resort's operations in 2004, providing an invaluable learning opportunity for them and other resort operations. walt disney world's hurricane plans, as part of its general emergency planning, had definite priorities and procedures. priorities included (higgins, 2005: 41):

■ keep guests safe;
■ keep employees safe;
■ have a thoughtful plan for tie-down, ride-out and recovery; and
■ provide the ability to get our parks open and operating as soon as possible after the storm.

the procedures were designed to account for varying hurricane strengths, and whether the hurricane involved a direct hit or a near miss in terms of its path across central florida. to prepare for this walt disney world has instituted a five-phase approach to its hurricane preparedness, with each phase being selected in consultation with the national hurricane centre and local authorities. phase 1: reviewing hurricane plans and verifying contact numbers for employees.
phase 2: further review of plans and beginning of preparation for possible shutdown of long-lead-time operations. phase 3: predetermined emergency supplies are delivered, the site is cleared of loose materials and where relevant lightweight equipment and buildings are anchored to the ground, and managers evaluate moving to the next phase. phase 4: guests and essential staff take shelter in hurricane-proofed buildings or begin evacuation. phase 5: all activities closed down, with only essential ride-out crews remaining in designated shelters. despite these plans and the thoroughness of preparation, the sequence of four very different hurricanes revealed some additional factors and priorities. one major lesson from that summer's hurricane experience is that no two hurricanes are alike, so a resort can only prepare for hurricanes in general and not the specific one(s) that come its way. 'the first lesson we learned was that our rigorous plans were only guidelines that needed to be flexible enough to adjust to changes dictated by our circumstances' (higgins, 2005: 42). the most important elements in the general emergency plans turned out to be:

■ maintaining guest and employee communication, letting them know about the impending storm and providing the relevant information regarding each phase's action plan;
■ operating the food service, with the provision of hot meals being the biggest priority;
■ offering in-resort entertainment to guests who were room-bound for many hours;
■ preparing guests for confinement in their rooms over long periods, which is not what they came to the resort to do;
■ arranging for the ability to use news media to give (information on park closures and re-openings) and to get (weather details and various local government announcements regarding schools, police and emergency services).
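the five-phase escalation described above can be sketched as a simple state machine. the phase names and the escalate helper are hypothetical labels for illustration, not disney's actual terminology; in practice each move between phases is a management decision taken in consultation with the national hurricane centre:

```python
from enum import IntEnum

# the five phases paraphrase higgins (2005); names and ordering are
# illustrative labels, not disney's actual terminology.
class HurricanePhase(IntEnum):
    REVIEW = 1    # review plans, verify employee contact numbers
    PREPARE = 2   # further review; begin shutting down long-lead operations
    TIE_DOWN = 3  # deliver supplies, clear loose material, anchor equipment
    SHELTER = 4   # guests and essential staff shelter or begin evacuation
    RIDE_OUT = 5  # all activities closed; only ride-out crews in shelters

def escalate(current: HurricanePhase) -> HurricanePhase:
    """Advance one phase at a time; never skips a phase or passes the last."""
    return HurricanePhase(min(current + 1, HurricanePhase.RIDE_OUT))

phase = HurricanePhase.REVIEW
while phase < HurricanePhase.RIDE_OUT:
    phase = escalate(phase)
print(phase.name)  # RIDE_OUT
```

encoding the phases as an ordered type makes the key property of the plan explicit: escalation is monotonic and stepwise, which matches the text's point that each phase is entered deliberately rather than jumped to.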
one 'important lesson to be learned in the face of a crisis is to show compassion for your employees and the toll the situation has had upon them, their families and their loved ones' (higgins, 2005: 45). it is important to release non-essential staff from their duties as soon as possible so they can attend to the safety of their family and homes as the hurricane approaches. likewise, in the aftermath it is likely some employees will require shelter and hot meals due to the hurricane damage. 'one lesson many floridians learned in the wake of these storms was the high deductible (excess) associated with hurricane insurance claims . . . (as a consequence) we anticipate providing more than $8 million to as many as ninety-five hundred employees who desperately need the funds to recover from the damage to their homes' (higgins, 2005: 45). thus, in the end we have a reaffirmation that the business of resort management is to 'think globally but act locally'. although the driver is business and financial concerns, there needs to be an appreciation of the importance of the local environment and community to the long-term success of a resort. furthermore, if resorts are to continue to survive by adjusting to changing social and technical circumstances, they will need to become more proactive with regard to the current climate and cultural changes that face us all.
references

crisis management in the australian tourism industry: preparedness, personnel and postscript
six guilty in swiss canyoning trial
adventure tourism
legal liability and risk management in adventure tourism
the business of adventure tourism
risk management for tourism: origins and needs
crisis management for the tourism sector: issues in policy development
towards a framework for tourism disaster management
disaster planning: the preservation of life and property
resort development and management
the business of resort management
glaesser
asian tsunami/tiger waves
dive quadriplegic keeps his millions, plans to buy a house. the age
asian nations stage tsunami drill
security and tourism: towards a new understanding
profit drive blamed for swiss canyon tragedy
the storms of summer: lessons learned in the aftermath of the hurricanes of '04
tourism and civil disturbances: an evaluation of recovery strategies in fiji
accidents in the adventure tourism industry: causes, consequences and crisis management
topics geo - annual review, natural catastrophes
tourism and disaster planning
the deadliest tsunami in history
tourist safety in new zealand and scotland
crisis: it won't happen to
last resorts: the cost of tourism in the caribbean
chaos, crises and disasters: a strategic approach to crisis management in the tourism industry
crisis management and tourism: beyond the rhetoric
the 2001 foot and mouth crisis - rural economy and tourism policy implications: a comment
stern review on the economics of climate change. london: h.m. treasury. www.hmtreasury.gov.uk/independent_reviews/stern_review_economics_climate
disaster management: exploring ways to mitigate disasters before they occur
indian ocean tsunami warning system up and running
the price of sunshine: hurricanes and insurance. the economist
the heat is on: a survey of climate change
maldives takes steps to improve crisis response
risk management for scuba diving operators on australia's great barrier reef
the siesta is over: a rude awakening from sustainability myopia
impacts of the financial crisis on asia's tourism sector. madrid: world tourism organisation

key: cord-016405-86kghmzf
authors: lai, allen yu-hung; tan, seck l.
title: impact of disasters and disaster risk management in singapore: a case study of singapore's experience in fighting the sars epidemic
date: 2014-06-13
journal: resilience and recovery in asian disasters
doi: 10.1007/978-4-431-55022-8_15
sha: doc_id: 16405 cord_uid: 86kghmzf

singapore is vulnerable to both natural and man-made disasters alongside its remarkable economic growth. one of the most significant disasters in recent history was the severe acute respiratory syndrome (sars) epidemic in 2003. the sars outbreak was eventually contained through a series of risk mitigating measures introduced by the singapore government. this would not have been possible without the engagement and responsiveness of the general public. this chapter begins with a description of singapore's historical disaster profiles, and the policy and legal framework of the all-hazard management approach. we use a case study to highlight the disaster impacts and insights drawn from singapore's risk management experience, with specific references to the sars epidemic. the implications from sars focus on four areas: staying vigilant at the community level, remaining flexible in a national command structure, the demand for surge capacity, and collaborative governance at the regional level. this chapter concludes with a discussion of the flexible command structure, both the way and the extent to which it was utilized.
situated in southeast asia yet outside the pacific ring of fire, singapore is fortunate enough to have been spared from major natural disasters such as typhoons, floods, volcanic eruptions, and earthquakes. however, this does not imply that singapore is safe, or immune from being affected by disasters. singapore houses a population of 5.2 million, giving it the third highest population density in the world. about 80 % of singapore's population resides in high-rise buildings (asian disaster reduction center 2005). a major disaster of any sort could inflict mass casualties and extensive destruction to properties in singapore. clearly, like its neighboring countries, singapore is also vulnerable to both natural and man-made disasters alongside its remarkable economic growth. the potential risks may result from its dense population, intricate transportation network, or a transnational communicable disease. moreover, singapore can be affected by the situations in surrounding countries. for example, flooding in thailand and vietnam may affect the price of rice sold in singapore. indeed, singapore in her short history of 47 years has experienced a small number of disasters. chief among these, the severe acute respiratory syndrome (sars) epidemic in 2003 was the most devastating. the sars outbreak brought about far-reaching public health and economic consequences for the country as a whole. fortunately, the outbreak was eventually contained through a series of risk mitigating measures introduced by the singapore government and the responsiveness of all singaporeans. it is important to point out that these risk mitigating measures, along with the public's compliance, were swiftly adjusted to address volatile conditions, such as when more epidemiological cases were uncovered. in this chapter, we introduce singapore's all-hazard management framework as well as the insights drawn from singapore's risk management experience with specific references to the sars epidemic.
to achieve our research objective, we utilized a triangulation strategy of various research methodologies. to understand the principles and practices of singapore's approach to disaster risk management, we carry out an historical analysis of official documents obtained from the relevant singapore government agencies as well as international organizations, literature reviews, quantitative analysis of economic impacts, qualitative interviews with key informants (e.g. public health professionals and decision-makers), and email communications with frontline managers from the public sector (e.g. the singapore civil defense force, the communicable disease centre) and non-governmental organizations. the authors also employed the 'cultural insider' approach by participating in epidemic control procedures against sars. 1 in particular, we use the method of case study to illuminate singapore's approach to disaster risk management. the rationale of doing a case study of sars along with singapore's all-hazard approach is that the case study can best showcase the contextual differences, those being political, economic, and social. this case study aims to highlight the lessons drawn from past experiences in a specific context and timeframe, through which we are able to focus more on the nature of the risks, and the processes and the impacts of the disaster risk management and policy intervention. we also examined relevant literature on risk mitigating measures against communicable diseases in order to establish our conclusions. we evaluated oral accounts provided by key health policy decision-makers and experts for valuable insights. this chapter offers empirical evidence on the role of the whole-of-government approach to risk mitigation of the sars epidemic. applying the approach to a case study, our research enriches the vocabulary of risk management, adding to the body of knowledge on disaster management specific to the region of southeast asia. 
indeed, the dominant perspective in this field holds that the state must be able to exercise brute force and impose its will on the population (lai and tan 2012). however, as shown in this chapter, this dominant perspective is incomplete, as the exercise of authority and power by the government is not necessarily sufficient to contain the transmission of transnational communicable diseases. success in fighting epidemics, as most would agree, is also contingent on a concerted partnership between governmental authorities and the population at large. as discussed in the first section of this volume, community and family ties along with government responses can mitigate disasters. this chapter has four main sections. following this introduction, we first provide an overview of singapore's historical disaster profiles. second, we introduce the policy and legal framework, and budgetary allocations for risk mitigation in singapore. third, we detail a case study of singapore's experience in fighting sars, as well as the impact of sars on singapore in its economic, healthcare, and psychosocial aspects. in the fourth section, we discuss the implications for practice and future research in disaster risk management, followed by conclusions. singapore has experienced a small number of disasters since it was founded in 1965. in this section, we briefly provide an historical account of singapore's disaster risk profiles, including earthquakes, floods, epidemics, civil emergencies, and haze. singapore has a low risk of earthquakes and tsunamis. geographically, singapore is located in a low seismic-hazard region. however, high-rise buildings built on soft soil in singapore are still vulnerable to earthquakes from far afield (asian disaster reduction center 2005).
this is because singapore lies, at its nearest, 600 km from the sumatran subduction zone and 400 km from the sumatra fault, both of which have the potential to generate large magnitude earthquakes. this geographic vicinity may produce a resonance-like situation within high-rise buildings on soft soil. recent tremors from the september 2009 sumatra offshore earthquake were experienced in 234 buildings located mainly in the central, northern and western parts of singapore. on the front of potential tsunamis, singapore has developed a national tsunami response plan, a multi-agency government effort comprising an early warning system, tsunami mitigation and emergency response plans, and public education. though singapore does not suffer from flood disasters, thanks to continuous drainage improvement works by the local authorities, the country has a risk of local flooding in some low-lying parts. the floods take place due to heavy rainfall that aggregates over short periods of time. the worst floods in singapore's history took place on 2 december 1978. the floods claimed seven lives, forced more than 1,000 people to be evacuated, and caused total damages of sgd10 million (tan 1978). the swift and sudden floods in 1978 were caused by a combination of factors including torrential monsoon rains, drainage problems, and high incoming tides. over the following years, singapore saw a series of flash floods hit various parts of the city-state. for example, the 2006-2007 southeast asian floods hit singapore on 18 december 2006 as a result of 366 mm of rainfall in 24 h. from 2010 onwards, singapore has experienced a series of flash floods due to higher-than-average rainfall. one severe episode occurred on 16 june 2010, flooding shopping malls and basement car parks in the most famous shopping area, orchard road.
as per the reported historical disaster data from the cred international disaster database, singapore has suffered only two disaster events caused by epidemics. in 2000, singapore experienced its largest known outbreak of hand-foot-mouth disease (hfmd), which affected more than 3,000 young children, causing three deaths. later, in 2003, sars hit singapore; it was singapore's most devastating disaster to date. the sars virus infected around 8,500 people worldwide and caused around 800 deaths. in singapore, sars infected 238 people, 33 of whom died of this contagious communicable disease. in 2009, the novel influenza a (h1n1) virus struck singapore, affecting 1,348 people with 18 deaths. civil emergencies are defined as sudden incidents involving the loss of lives or damage to property on a large scale. they include (1) civil incidents such as bomb explosions, aircraft hijacks, terrorist hostage-taking, chemical, biological, radiological and explosive (cbre) agents and the release of radioactive materials by warships, and (2) civil emergencies, for example major fires, structural collapses, air crashes outside the airport boundary, and hazardous material incidents. in singapore, the singapore civil defense force (scdf) is responsible for civil emergencies. since 1965, singapore has experienced several episodes of civil emergencies. for example, the greek tanker spyros explosion at the jurong shipyard in 1978 was singapore's worst industrial disaster in terms of lives lost (ministry of labor, singapore 1979). in 1986, the six-storey hotel new world collapse was singapore's deadliest civil disaster, claiming 33 lives. the collapse was due to structural faults. the scdf, together with other rescue forces, spent 7 days on the whole relief operation. after the collapse, the government introduced more stringent regulations on construction building codes, and the scdf went through a series of upgrades in training and equipment (goh 2004).
singapore experienced its first haze in the period from the end of august to the first week of november 1997 as a result of prevailing winds. the haze in 1997, called the southeast asian haze, was caused by slash-and-burn techniques adopted by farmers in indonesia. the smoke haze carried particulate matter that caused an increase in acute health effects, including increased hospital visits due to respiratory distress such as asthma and pulmonary infection, as well as eye and skin irritation. the haze also severely affected visibility in addition to increasing health problems. as a result, singapore's health surveillance showed a 30 % increase in outpatient attendance for haze-related conditions (emmanuel 2000). apart from healthcare costs, other costs associated with the haze included short-term tourism and production losses. a study by environmental economists of the 1997 southeast asian haze indicated a total of usd 74.1 million in economic losses in singapore alone. singapore is actively involved in various regional meetings to deal with transboundary smoke haze pollution in order to reduce the risk (singapore institute of international affairs 2006). the singapore government adopts a cross-ministerial policy framework, whole-of-government integrated risk management (wog-irm), for disaster risk mitigation and disaster management (asia pacific economic cooperation 2011). this is a framework that aims to improve the risk awareness of all government agencies and the public, and helps to identify the full range of risks systematically. in addition, the framework identifies cross-agency risks that may have fallen through gaps in the system. this framework also includes medical response systems during emergencies, mass casualty management, risk reduction legislation for fire safety and hazardous materials, police operations, information and media management during crises, and public-private partnerships in emergency preparedness.
the wog-irm policy framework in singapore functions in peacetime and in times of crisis. it refers to an approach in which all relevant agencies work together in an established framework, with seamless communication and coordination to manage risk (pereira 2008). in peacetime, the home team comprises four core agencies at the central government level. these four agencies are the strategic planning office, the home front crisis ministerial committee (hcmc), the national security coordination secretariat, and the ministry of finance at the policy layer. among them, the strategic planning office provides oversight and guidance as the main platform to steer and review the overall progress of the wog-irm framework. during peacetime, the strategic planning office convenes quarterly meetings for the permanent secretaries from the various ministries across government. in a crisis, the home front crisis management system provides a "ministerial committee" responsible for all crisis situations in singapore. in the wog-irm structure, the hcmc is led by the ministry of home affairs (mha). in peacetime, mha is the principal policy-making governmental body for safety and security in singapore. in the event of a national disaster, the mha leads at the strategic level of incident management. the incident management system in singapore is known as the home front crisis management system (hcms). under the hcms, the scdf is appointed as the incident manager, taking charge of managing the consequences of disasters and civil emergencies. reporting to the hcmc is an executive group known as the home front crisis executive group (hceg), which is chaired by the permanent secretary for mha. the hceg is in charge of planning and managing all types of disasters in singapore. within the operational layer, there are various functional inter-agency crisis management groups with specific responsibilities, integrated by the various governmental crisis-management units.
at the tactical layer, there are the crisis and incident managers who supervise service delivery and coordination. the singapore government holds relevant ministries accountable according to the nature and scope of the disaster. among those ministries and government agencies, the scdf is the major player in risk mitigation and management for civil emergencies. now, let us look into the scdf in more detail. for civil security and civil incidents, the singapore civil defense force (scdf) 2 is singapore's leading operational authority, the incident manager for the management of civil emergencies. the scdf is responsible for leading and coordinating the multi-agency response under the home front crisis management committee. the scdf operates a three-tier command structure, with headquarters (hq) scdf at the apex commanding four land divisions. these divisions are supported by a network of fire stations and fire posts strategically located around the island. the scdf also serves the following pivotal functions. the scdf provides effective 24-h fire fighting, rescue and emergency ambulance services. the scdf developed the operations civil emergency (ops ce) plan, a national contingency plan. when ops ce is activated, the scdf is vested with the authority to direct all response forces under a unified command structure, thus enabling all required resources to be pooled. however, the wog-irm policy framework only came into existence when singapore encountered sars. the sars epidemic in 2003 was an institutional watershed for singapore's approach to risk mitigation and disaster management (pereira 2008). prior to the sars epidemic, singapore's executive group 3 mainly focused on crises or disasters that were civil defense in nature. these emergencies were thought to be manageable by a solitary incident manager, supported by other relevant agencies. a specific multi-sectoral governance structure was not considered necessary to handle such crises.
the sars epidemic challenged the prevailing home front crisis management structure, as the epidemic transcended the management of civil defense incidents. the policymakers realized the necessity of adopting a comprehensive disaster management framework, an all-hazard approach that includes a mechanism for seamless integration at both the strategic and operational levels among various government agencies. to this end, singapore revamped its home front crisis management framework to produce the current inter-agency structure. the main legislation supporting emergency preparedness and disaster management activities in singapore comprises the civil defense act of 1986, the fire safety act of 1993, and the civil defense shelter act of 1997. the civil defense act provides the legal framework for, amongst other things, the declaration of a state of emergency and the mobilization and deployment of operationally-ready national service rescuers. the fire safety act provides the legal framework to impose fire safety requirements on commercial and industrial premises, as well as the involvement of the management and owners of such premises in emergency preparedness against fires; and the civil defense shelter act provides the legal framework for buildings to be provided with civil defense shelters for use by persons taking refuge during a state of emergency. to tackle disease outbreaks, singapore had earlier promulgated the infectious disease act in 1977. this legislation is jointly administered by the moh and the national environment agency (nea). unlike most governments that make regular national budgetary provision for potential disaster relief and early recovery purposes, the government of singapore makes no annual budgetary allocations for disaster response because the risks of a disaster are low (global facility for disaster reduction and recovery 2011, p. 24).
however, the singapore government can swiftly activate budgetary mechanisms or funding lines in the event of a disaster and ensure these lines are sufficiently resourced with adequate financial capacity. to illuminate singapore's approach to disaster management, we now use a case study of singapore's fight against sars to highlight policy learning and lesson-drawing in a specific context and timeframe. this case study has three sections. we first introduce the epidemiology of sars in singapore. in the second section, we describe the impact of the sars epidemic on singapore in its economic, healthcare, and psychosocial aspects. in the third section, we demonstrate singapore's risk mitigating management, and detail the government's risk mitigating measures to contain the epidemic. singapore is a small open economy. external shocks can result in high levels of volatility resonating across the domestic economy. these shocks in turn bring about higher levels of risk and uncertainty in singapore. at the beginning of 2003, singapore's economic outlook was clouded by the iraq war and its impact on oil prices (attorney-general's chambers 2003). the unexpected outbreak of sars led to greater uncertainty in the singapore economy. singapore's financial markets were severely affected due to the loss of public confidence and reduced floor trading. the impact of sars on the stock market was reflected in the straits times index (sti) (see fig. 15.1). the market did not react well to the sars epidemic. in the first fortnight of the epidemic, the sti closed down 76 points. even though more cases were reported, the sti climbed progressively up 86 points over the next fortnight, eclipsing the earlier falls. this could be attributed to the strict measures which the singapore government introduced. the sti remained relatively stable over the subsequent fortnight as new cases were reported.
however, it started a downward plunge over the following fortnight as the number of cases peaked once more. the sti plunged 96 points. the resilience of the sti was shown when it climbed back up, surpassing the level reported at the beginning of the sars period. the volatility of the sti demonstrates the vulnerability of a small open economy to exogenous forces, in this case the sars epidemic. sars was the single event that contributed most to the volatility of singapore's gross domestic product (gdp) in 2003. the ministry of trade and industry (mti) revised the forecast for singapore's annual gdp growth down from 3 to 0.5 %. this forecast was later revised upwards to 2.5 %. there were a number of channels through which the sars epidemic affected the economy. the economic impacts will be discussed in terms of demand and supply shocks. the main economic impact of the sars outbreak was on the demand side, as consumption and the demand for services declined (henderson 2003). the epidemic caused fear and anxiety among singaporeans and potential tourists to singapore, with economic consequences. the hardest and most directly hit were the tourism, retail, hospitality and transport-related industries, for example airline, cruise, hotel, restaurant, travel agent, retail and taxi services, and their auxiliary industries (see fig. 15.3). this had a direct impact on hotel occupancy rates, which declined sharply to 30 % in late april 2003. cancellations or postponements of tourism events increased by about 30-40 %. revenues of restaurants dropped by 50 %, while revenues of travel agents decreased by 70 %. sars had an uneven impact on various sectors of the economy. a four-tiered framework to assess the impact on the respective sectors showed that tier 1 industries, such as the tourism and travel-related industries, were most severely hit. tier 1 industries account for 3.5 % of gdp.
the tier 2 industries, such as the restaurant, retail and land transport industries, were significantly hit; they account for 7.5 % of gdp. the next two tiers were less directly affected by the sars outbreak. tier 3 industries include real estate and stock broking, which account for close to 19 % of gdp. the remaining 70 % of the domestic economy in tier 4 includes manufacturing, construction and communications. these industries were not directly impacted by the outbreak of sars. all in all, the estimated decline in gdp directly attributable to sars was 1 %, equaling sgd875 million. singapore experienced a significant drop in tourist arrivals, where visitors usually stay for up to 3 days and transit onto their next destination. visitor inflows fell sharply; this was especially true in the case of singapore, as visitor stays tended to be shorter and high-end visitors stayed away. as a result, tourism and other related industries were nearly crippled due to a significant reduction in both leisure and business travel. visitors from around the world cancelled or postponed their trips to singapore, causing a drastic decrease in total expenditure by visitors (see table 15.2). plummeting visitor arrivals directly impacted hotel occupancy rates, which declined sharply to 30 % in late april (see table 15.3). the hotel occupancy rate plummeted from 72 to 42 %, compared to the normal level of 70 % or above. the annual averages for hotel occupancy rates were 74.4 % in 2002, 67.3 % in 2003, and 80.6 % in 2004. singapore's national carrier, singapore airlines (sia), faced a record-breaking low passenger capacity of 29 % in april and may 2003. sia cancelled approximately 30 % of its weekly schedules (henderson 2003). sia laid off 414 employees, of which 129 were ground staff, as a consequence of a usd200 million loss in june 2003.
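the four-tier framework above can be checked with a line of arithmetic: the quoted gdp shares of the tiers should, and do, cover the entire domestic economy. a minimal sketch, using only the shares quoted in the text:

```python
# gdp shares (in %) of the four tiers quoted in the text:
# tier 1: tourism and travel-related; tier 2: restaurants, retail, land transport;
# tier 3: real estate, stock broking; tier 4: manufacturing, construction, communications.
tier_shares_pct = {"tier 1": 3.5, "tier 2": 7.5, "tier 3": 19.0, "tier 4": 70.0}

# the shares sum to 100 %, i.e. the tiers partition the whole economy
total = sum(tier_shares_pct.values())
print(total)  # 100.0
```

this confirms the tiers are exhaustive, so the framework apportions the full gdp impact of sars across sectors without overlap or gaps.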
the hospitality industry had to resort to cutting budgets, which led to a steep plunge in the number employed in the service sector. out of a total of 12,100 made unemployed, hotels and restaurants went through the biggest cut, at 5,800 employees. the breakdown of total job losses showed 47 % in the service sector, 28 % in construction, and 25 % in manufacturing. additionally, transactions in the retail sector dropped by 50 %. the volume of private property transactions for condominiums and the private property price index are also good proxies for the impact of sars on the economy. based on quarterly figures between 2002 and 2004, transaction volumes dipped to a low in the first quarter of 2003. also, there was a corresponding decline in the price index. transactions recovered steadily by the third quarter, boosted by confidence in market sentiments (see fig. 15.3). the sti and the private property price index seemed to display fairly similar trends, albeit with some observed lag. note also that there was a lagged effect of consumers' deferred purchases after the outbreak of sars in singapore. in this sense, demand drives supply: a fall in the demand for goods and services is likely to bring about a fall in their supply. also, the loss of consumer and business confidence reduced the level of aggregate demand. these effects were observed as the manufacturing industry experienced supply chain disruptions while the singaporean economy and employment market continued to weaken. singapore was taken off the who's list of sars affected countries on 31 may 2003, one of the first countries to be removed from the list. with the "fear-factor" managed, normal daily activities slowly resumed. sars-affected industries and sectors started to show signs of recovery towards the end of the second quarter of 2003.
a more comprehensive analysis of the economic costs of sars needs to consider the direct impact on consumer spending and the indirect repercussions of the shock on trade and investment (asian development bank outlook 2003). the economic costs of a global disease, such as sars, go beyond the immediate impacts incurred in the affected sectors of disease-inflicted countries. this is not just because the disease spreads quickly across countries through networks related to global travel, but also because any economic shock to one country spreads quickly to other countries through the increased trade and financial linkages associated with globalization. however, just calculating the number of cancelled tourist trips, the declines in retail trade, and some of the factors discussed earlier does not provide a complete picture of the impact of sars. this is because there are close linkages within economies, across sectors, and across economies in both international trade and international capital flows. thus, analyzing the tourism sector alone may not be sufficient for assessing the overall financial impact of sars. sars inflicted a heavy toll on businesses and immediately and severely impacted their viability. businesses lost employees for long periods of time due to factors such as illness, the need to care for family members, fear of infection at work, or retrenchment. as the workforce shrank due to absenteeism, business operations, for example supply chains, the flow of goods worldwide and the provision of services, were all affected both locally and internationally. in terms of retrenchment, the job prospects of employees in affected companies appeared bleak. a survey performed during the sars period showed that the jobless rate rose above 5.5 %, the highest in singapore for the preceding decade (ministry of manpower, singapore 2003).
in absolute numbers, overall employment diminished by 25,963 in the second quarter of 2003, the largest quarterly decline since the mid-1980s recession. unlike previous retrenchments, which affected mainly blue-collar labor, sars also affected white-collar employees. the implementation of workplace sars control measures added to operational and administrative costs. for example, the policy of temperature taking was implemented at workplaces in the private sector. numerous private establishments installed thermal scanners at their entrances from day one. however, such precautionary measures were necessary to contain the disease. this helped to restore business confidence and investment potential (a lower level of investment will lead to slower capital growth). but the reduction in an economy's capacity may linger for a few quarters before it is restored to pre-sars levels. the loss of productive working days from quarantine, and the implementation costs incurred to monitor the movements of employees, contributed to the reduction on the aggregate supply front. some of these economic effects might have worsened the public health situation if strategic planning had not been in place. sars reduced levels of service and care in singapore's healthcare system as the system mobilized its medical resources to deal with the sars epidemic. the influx of patients with flu-like symptoms to hospitals and clinics crowded out many other patients with less urgent medical problems. this particularly affected those seeking elective operations, which had to be postponed until the epidemic ended in singapore. sars also severely impacted singapore's healthcare manpower. during the peak of sars, from mid-march 2003 to early april 2003, there was a shortage of medical and nursing professionals because (1) the demand for care of patients with flu-like symptoms substantially increased, and (2) the supply of healthcare manpower decreased as some were also affected by the epidemic.
like other business sectors, hospitals, clinics and other public health providers also faced a high staff-absenteeism rate and encountered difficulties in maintaining normal operations. this resulted in a further reduction in the level of service capacity. the psychosocial impact of sars was mainly caused by the limited medical knowledge of sars when it began its insidious spread in singapore. such uncertainty about contracting a highly contagious disease exacerbated fears of security breaches and the panic of overexposure (tan 2006). responding to the uncertainty of disease transmission, the singapore government instituted many draconian public policies, such as social distancing, quarantine and isolation, as risk mitigating measures. all of these control measures created an instinctive withdrawal from society among the general population, with the public avoiding crowds and public places with human interaction. on 24 march 2003, the moh invoked the infectious disease act (ida) to isolate all those who had been exposed to sars patients. after the ida was invoked, on 25 march 2003, schools and non-essential public places were closed. public events were cancelled to prevent close contact in crowds. singaporeans with a contact history were asked to stay home for a period of time to prevent transmission. harsh penalties, such as hefty fines of more than usd4,000 or imprisonment, were imposed on those who defied quarantine orders. in a drastic move reminiscent of a police state, closed-circuit cameras were installed in the houses of those ordered to stay home to monitor their compliance with the quarantine order (abc news online 2003). at the height of sars, 12,194 suspected cases were ordered to stay home, all of whom were monitored either by cameras or, in less severe cases, by telephone calls.
quarantine, regardless of its effectiveness, received strong criticism from the general public during the outbreak of sars due to the invasive nature of that measure (duncanson 2003). the impact of social distancing remains unclear, but the who has recommended such control measures depending on the severity of the epidemic, the risk groups affected and the epidemiology of transmission (world health organization 2005). singapore's moh advocated the practice of social distancing during the outbreak of sars. the sole intention of social distancing was to limit physical interactions and close contact in public areas to slow the rate of disease transmission. additionally, social distancing measures in particular have a psychological impact. the practice of social distancing also set back businesses, which suffered economic losses as a result (duncanson 2003). the psychological impact of sars is longer lasting. the most immediate and tragic impact was the loss of loved ones. in this section, we detail singapore's command structure and legal framework in fighting sars, as well as risk mitigating measures from economic, healthcare, and psychosocial perspectives. one of the most important lessons the singapore government learned from the sars epidemic was the crucial role played by the bureaucracy in disaster management. the bureaucratic structure in place then was severely inadequate in terms of handling a situation that was both fluid and unprecedented; indeed, fighting sars required more than a medical approach because resources had to be drawn from agencies other than the moh. accordingly, a three-tiered national control structure was created in response to sars; these tiers were represented respectively by the inter-ministerial committee (imc), the core executive group (ceg) and the inter-ministry sars operations committee (imoc) (tay and mui 2004).
the nine-member imc was chaired by the minister of home affairs (mha) and fulfilled three major functions: (1) to develop strategic decisions, (2) to approve these major decisions, and (3) to implement control measures. 4 notably, the imc also played the role of an interagency coordinator overseeing the activities of other ministries and their subsidiaries. on 7 april 2003 (5 weeks after the first case of sars was reported), the ceg and a ministerial committee were formed. the ceg was chaired by the permanent secretary of home affairs and consisted of elements from three other ministries: the moh, the ministry of defense (mod) and the ministry of foreign affairs (mfa). in particular, the role of the ceg was to manage the sars epidemic by directing valuable resources to key areas. the imoc, meanwhile, was instrumental in carrying out the health control measures issued by the imc (see fig. 15.4 below). the moh, at the operational layer, formed an operations group responsible for the planning and coordination of health services and operations in peacetime. during sars, it commanded and controlled all medical resources and served as the main operational linkage between the moh and all the healthcare providers. on 15 march 2003, when the epidemiological nature of sars was still unclear, the moh initiated a sars taskforce to look into the mysterious strain. only 2 days later, after more sars cases were reported and a better epidemiological understanding of the strain was developed, the singapore government swiftly declared sars a notifiable disease under the infectious disease act (ida) (ministry of health, singapore 2003a). in the case of a broad outbreak, the ida made it legally permissible to enforce mandatory health examination and treatment, the exchange of medical information and cooperation between healthcare providers and the moh, and the quarantine and isolation of sars patients (infectious disease act 2003).
in particular, the government amended the ida on 24 april 2003, requiring all those who had come into contact with sars patients to remain indoors or report immediately to designated medical institutions for quarantine (ministry of health, singapore 2003b). as a legacy of singapore's british colonial past, the singapore legislature is unique and well-known for passing laws in a swift and efficient manner. this uniqueness in singapore's legal framework allowed singapore to swiftly amend the ida during the health crisis to suit volatile conditions, for instance when more epidemiological cases were uncovered and the virus was better understood (tan 2006). all in all, the ida played an adaptive role in terms of facilitating a swift response to the outbreak of this particular epidemic. on 22 march 2003, the ceg designated the restructured public hospital, tan tock seng hospital (ttsh), as the sars hospital (james et al. 2006; tan 2006). that is, once a suspected sars patient was detected at a local clinic or emergency department, he or she would be transferred to ttsh immediately for further evaluation and monitoring. the national healthcare system prioritized life-saving resources such as medicine and medical equipment, allocating manpower and protective equipment to the ttsh. to ease the influx of flu-like patients into the ttsh, the government diverted non-flu patients away from ttsh so that the sudden surge in the number of flu cases at ttsh did not paralyze its service delivery. the full impact of sars on the economy by and large depended on how quickly sars was contained, as well as on the course of the sars outbreak in the region and beyond. to mitigate the impact of sars on singapore's economy, the government took every precaution and spared no effort to contain the sars outbreak in singapore. two aspects of sars warranted government intervention to mitigate the economic impact.
first, the information that needs to be collected and disseminated to effectively assess sars displays the characteristics of a public good. second, there are negative externalities related to contagious diseases in the sense that they affect third parties in market transactions. public goods and negative externalities are typical areas where government action is needed (fan 2003). there are three major factors which can explain why some economies are more vulnerable and susceptible to the effects of sars than others (asian development bank outlook 2003). these factors are structural issues (e.g. the share of tourism in gdp and the composition of consumer spending), initial consumer sentiment, and government responses. as the research shows, the singapore government implemented a usd 132 million (sgd 231 million in 2003) sars relief package to reduce the costs for tourism operators and their auxiliary services. separately, an economic relief package worth usd 131 million (sgd 230 million) was created to aid businesses hit by sars. 5 in addition, the government incurred usd 109 million (sgd 192 million) in direct operating expenditure related to sars, and committed another usd 60 million (sgd 105 million) in development expenditure for hospitals to add isolation rooms and medical facilities to treat sars and other infectious diseases. the government's economic incentives worked when seeking the cooperation of other healthcare providers (such as public hospitals and local clinics) so that they would absorb additional cases of non-flu illnesses. to help sars-affected firms weather the crisis and minimize job losses, singapore's national wage council widely consulted the private sector and recommended that sars-struck companies adopt temporary cost-cutting measures to save jobs.
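the currency figures above are internally consistent with an exchange rate of roughly 1.75 sgd per usd. the rate is not stated in the text; it is inferred here from the source's own sgd 231 million / usd 132 million pair, so the check below is illustrative only:

```python
# illustrative sanity check of the sgd -> usd figures quoted in the text.
# the exchange rate is NOT given in the source; it is inferred from the
# sgd 231m / usd 132m pair (~1.75 sgd per usd, plausible for 2003).
SGD_PER_USD = 231 / 132  # ~1.75

packages_sgd_millions = {
    "tourism relief": 231,
    "business relief": 230,
    "direct operating expenditure": 192,
    "hospital development": 105,
}
for name, sgd in packages_sgd_millions.items():
    usd = sgd / SGD_PER_USD
    print(f"{name}: sgd {sgd}m ~= usd {usd:.0f}m")
```

the converted values land within a million of the usd figures quoted in the text, which suggests all four amounts were translated at the same 2003 rate.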
6 the measures adopted by the private sector included the implementation of a shorter working week, temporary lay-offs, and arrangements for workers to take leave or undergo skills training and upgrading provided by the ministry of manpower and associated agencies. when these measures failed to preserve jobs, the last resort was temporary wage cuts. surveillance and reporting are critical in combating pandemics because they provide early warning and even detection of impending outbreaks. the surveillance process involves looking out for possible virulent strains and disease patterns within a country's borders as well as at major border crossings (jebara 2004; ansell et al. 2010; narain and bhatia 2010). when sars first surfaced, the nature of this virus was largely unknown. as a consequence, health authorities worldwide were mostly unable to detect and monitor suspected cases. health authorities in singapore encountered this same problem, but with the aid of who technical advisors, singapore managed to establish identification and reporting procedures in a timely manner. furthermore, the moh also expanded the who's definition of suspected sars cases (to include any healthcare workers with fever and/or respiratory symptoms) in order to widen the surveillance net (goh et al. 2006). as the pace of sars transmission quickened, the singapore parliament amended the ida on 25 april 2003, requiring all suspected sars cases to be reported to the moh within 24 hours of diagnosis. although these control measures were laudable, sars also exposed the weaknesses of singapore's fragmented epidemiological surveillance and reporting systems (goh et al. 2006). as a major part of lesson-drawing in the post-sars era, a number of novel surveillance measures were introduced to integrate epidemiological data and to identify the emergence of a new virulent strain faster.
one of the most notable was the establishment of an infectious disease alert and clinical database system to integrate critical clinical, laboratory and contact tracing information. today, the surveillance system has five major operational components: community surveillance, laboratory surveillance, veterinary surveillance, external surveillance, and hospital surveillance. to limit the risk of transmission in healthcare institutions once the sars epidemic had broken out, the moh implemented a series of stringent infection-control measures that all healthcare workers (hcws) and visitors to hospitals had to adhere to. the use of personal protective equipment (ppe) 7 was made compulsory. visitors to public hospitals were barred from those areas where transmission and contraction were most likely. the movements of hcws in public hospitals were also heavily restricted. unfortunately, except at ttsh, these critical measures were not enforced across all healthcare sectors until 8 april 2003, and this oversight resulted in a number of intra-hospital infections (goh et al. 2006). in addition, the policy of restricting the movements of hcws and visitors was taken further: their movements between hospitals were now restricted, and patient movement between hospitals was strictly limited to medical transfers. the number of visitors to hospitals was also limited and their particulars recorded during each visit. it is also important to point out that these somewhat draconian control measures required strong public support and cooperation; their implementation would not have been successful had these two elements been missing. public education and communication are two indispensable components in health crisis management (reynolds and seeger 2005; reddy et al. 2009).
communication difficulties are prone to complicate the challenge, especially when there is no established, high-status organization that can act as a hub for information collation and dissemination. therefore, it is necessary to disseminate essential information to the targeted population in a transparent manner. during the sars outbreak, the moh practiced a high degree of transparency when it shared information with the public. indeed, the clear and distinct messages from the moh contributed significantly to lowering the risk of public panic. the moh worked closely with the media to provide regular, timely updates and health advisories. this information was communicated to the public through every possible medium. in addition to the media (e.g. tv and radio), information pamphlets were distributed to every household and the moh website provided constant updates and health advisories to the general public. notably, a government information channel dedicated to providing timely updates was created on 13 march 2003, the same day the who issued a global alert. a dedicated tv channel called the sars channel was launched to broadcast information on the symptoms and transmission mechanisms of the virus (james et al. 2006). the importance of social responsibility and personal hygiene was a frequent message heard throughout the sars epidemic. as an example, when tan tock seng hospital was designated as the sars hospital at the peak of the epidemic, the government undertook many efforts in public communication and education to seek cooperation and support from other healthcare providers, such as public hospitals and local clinics, so that they would absorb the additional cases of non-flu illnesses. many organizations displayed prominent signs in front of their building entrances reminding their staff as well as visitors to be socially responsible. school children were instructed to wash their hands and take their body temperature regularly.
the public was told to wear masks and postpone non-essential travel to other countries. the moh advocated the practice of social distancing during the outbreak of sars. the intention of social distancing was, of course, to limit physical interactions and close contact in public areas, thereby slowing the rate of transmission. as a result, all pre-school centers, after-school centers, primary and secondary schools, and junior colleges were closed from 27 march to 6 april 2003. school children who had stricken siblings were advised to stay home for at least 10 days. moreover, students who showed flu-like symptoms or had travelled to other affected countries were automatically granted a 7-day leave of absence, and home-based learning programs were instituted for those affected. extracurricular activities were also scaled down to minimize social contact. meanwhile, the moh also advised businesses to adopt social distancing measures such as allowing staff to work from home and using split-team arrangements. those at higher risk of developing complications if stricken were removed from frontline work to other areas where they were less likely to contract the virus. as mentioned earlier, the practice of social distancing also drew strong criticisms from those businesses that suffered economic losses as a result. apart from providing economic compensation, measures to mitigate psychosocial impacts are also important. the government's public health control measures, as mentioned above, drew strong criticisms from businesses and the public during the outbreak of sars due to the invasive nature of those actions. besides this, the economic slowdown affected overall employment and personal income, and some households required financial assistance.
in response to the public complaints, authorities in singapore provided economic assistance to those individuals and businesses who had been affected by home quarantine orders through a "home quarantine order allowance scheme" (tay and mui 2004; teo et al. 2005). at the same time, the moh worked together with various ministerial authorities to provide essential social services to those affected by the quarantine order. for example, housing was offered to those who were unable to stay in their own homes (because of the presence of family members) during their quarantine; ambulance services were freely provided by the singapore civil defense force so that those undergoing quarantine at home could visit their doctors; and high-tech communication gadgets such as webcams were supplied so that those undergoing quarantine could stay in touch with relatives and friends. impacts on social welfare in large part relate to the economic outlook, especially in the area of consumption patterns. all these risk mitigating measures were not only effective in containing the epidemic but also carry implications for disaster risk management. in this section, we draw on the lessons learnt from singapore's experience in fighting the sars epidemic, and discuss implications for future practice and research in disaster risk management. the implications are explained in four aspects: staying vigilant at the community level, remaining flexible in a national command structure, demand for surge capacity, and collaborative governance at the regional level. it remains questionable whether singapore's draconian health control measures, for example installing cameras to monitor the public's compliance during home quarantine, are applicable or replicable in other countries.
the evidence suggests that draconian government measures, such as quarantine and travel restrictions, are less effective than voluntary measures (such as good personal hygiene and the voluntary wearing of respiratory masks), especially over the long term. however, reminding the public to maintain a high level of vigilance and advocating individual social responsibility can serve as persuasion tactics: an authority influences and pressures, but does not force, individuals or groups into compliance with a policy. therefore, promoting social responsibility is crucial in terms of slowing the pace of infection through good personal hygiene and respiratory etiquette in all settings. to achieve this goal, public education and risk communication are two indispensable components in health crisis management (reddy et al. 2009; reynolds and seeger 2005). the community must be aware of the nature and scope of disasters. they have to be educated on the importance of emergency preparedness and involvement in exercises, training and physical preparations. at the community level, institutions and capacities are developed and strengthened which in turn systematically contribute to vigilance against potential risks. this is best illustrated in the singapore government's communication strategy to manage public fear and panic during the outbreak of sars (menon and goh 2005). throughout the epidemic, the singapore government relentlessly raised the level of vigilance regarding personal hygiene and awareness of social responsibility. this relied, in large part, on public education and risk communication. to effectively disseminate the idea of vigilance across the public, political leaders were seen initiating a series of countermeasures to reassure the public. by showing the people that government leaders practiced what they preached, these examples served to naturalize and legitimize the public discourse of social responsibility for all singaporean citizens (lai 2010).
the need to stay vigilant can never be overemphasized, but vigilance alone is no panacea that ensures all government agencies work together. to be well prepared for the unexpected, we need a clear and swift national command structure that can flexibly respond to the changing situation, ideally faster than the disease itself can spread. all local agencies responding to an emergency must work within a unified national command structure to coordinate multi-agency efforts in emergency response and management of disasters. on top of facilitating close inter-agency coordination, the strength of this flexible structure is its ability to ensure a swift response to an epidemic outbreak by implementing risk mitigating measures more effectively and efficiently. structural flexibility involves swift deployment of forces to mitigate the incident at the tactical level, and the provision of expert advice at the operational level, in order to minimize damage to lives and property. among other things, the flexibility inherent in this command structure facilitates the building of trust between the state and its people (lai 2009). this in turn ensures that government measures are quickly accepted by the general public. as shown in this chapter, the moh has been entrusted by the singapore government and pre-designated to be the incident manager for public health emergencies. when a sudden incident involves public health or the loss of lives on a large scale, the moh is responsible for planning, coordinating and implementing an assortment of disease control programs and activities. during the outbreak of sars, the singapore government established a national command and control structure that was able to adapt to the rapidly changing circumstances that stemmed from the outbreak. specifically, the moh set up a taskforce within the ministry even while the definition of sars remained unclear.
as more sars cases were uncovered and better epidemiological information became available, the government quickly created the inter-ministerial committee (imc) and core executive group (ceg), both of which were instrumental in the design and implementation of all risk mitigating measures, to coordinate the operation to combat the outbreak (pereira 2008). while this overarching governance structure is more or less standard worldwide ('t hart et al. 1993; laporte 2007), the case of singapore is unique in that the city-state was able to overcome bureaucratic inertia and adapt this governance structure. from singapore's experiences during the sars crisis, we have learnt that the strength of a national command structure lies in its flexibility to link relevant ministries on the same platform. these linkages ensure a timely, coordinated response and service delivery. having a flexible structure was not the only reason behind the successful defeat of sars. in singapore's case, we also notice that the success of containing an uncertain, high-impact disaster relies on surge capacity. in the context of this paper, surge capacity refers to the ability to mobilize resources (such as ppe, vaccines and hcws) to combat the outbreak of a pandemic. singapore's response to sars in 2003 illustrates the importance of being able to increase surge capacity swiftly to deal with an infectious disease outbreak. in the asia pacific region, this problem continues to hamper many countries' ability to combat infectious diseases (putthasri et al. 2009). for many public health organizations in asia, the plain fact is that they are unable to deal with pandemics because the resources to do so are simply absent (balkhy 2008; hanvoravongchai et al. 2010; lai 2012b; oshitani et al. 2008). meanwhile, there is evidence to suggest that surge capacity alone is not the full answer.
for example, during the sars outbreak, abundant resources constituted an important but not all-encompassing element in the fight against the pandemic. as it turned out, when different stakeholders brought to the task at hand their unique skill sets and resources, they actually complicated the fight due to their lack of synergy. in fact, abundant resources without synergy might even undermine collaborative efforts. therefore, it is essential that the ability to link up various stakeholders be complemented by some type of synergy between them. such ability can be enhanced through close collaboration. this brings us to the fourth implication for disaster management: collaborative governance at the regional level. the trans-boundary nature of disasters calls for a planned and coordinated approach towards disaster response for efficient rescue and relief operations (lai 2012a). combating epidemics requires multiple states and government agencies to work together in close collaboration (webby and webster 2003). therefore, it is clear that the collaborative capacity of various stakeholders is central to the fight against transboundary communicable diseases (lai 2011; lai 2012b; leung and nicoll 2010; voo and capps 2010). while member states with advanced economies typically lead such efforts, the inclusion of other developing countries, non-traditional agencies, and organizations (including non-governmental ones) is necessary and, ultimately, inevitable. indeed, major countermeasures such as border control and surveillance are often made possible with the aid of regional collaboration. take the association of southeast asian nations (asean) as an example. asean countries take regional, national and sub-national approaches to disaster risk management. the asean committee on disaster risk management (acdm) was established in 2003 and tasked with the coordination and implementation of regional activities on disaster management.
the committee has cooperated with united nations bodies such as the united nations international strategy for disaster reduction (unisdr) and the united nations office for the coordination of humanitarian affairs (unocha). the asean agreement on disaster management and emergency response (aadmer) provides a comprehensive regional framework to strengthen preventive, monitoring and mitigation measures to reduce disaster losses in the region. in recent years, singapore has been active in providing training and education for disaster managers from neighboring countries. singapore has ongoing exchange programs with a number of asia pacific nations and with europe. for example, in partnership with apec to increase emergency preparedness in the asia-pacific region, singapore's scdf provides short-term courses on disaster management at the civil defense academy (asia pacific economic cooperation 2011). the world today is far more inter-connected than ever before. international travel, transnational trade, and cross-border migration have drastically increased as a consequence of globalization. no country is spared from being influenced directly or indirectly by disasters, and singapore is no exception: it is vulnerable to both natural and man-made disasters alongside its remarkable economic growth. in response, the singapore government adopts an approach of whole-of-government integrated risk management, a concerted, coordinated effort based on a total national response. we have witnessed in this case study singapore's all-hazard management framework with specific reference to the sars epidemic. in fighting sars, singapore's health authority was responsive enough to swing into action when it realized that the existing bureaucratic structure was inadequate in terms of facilitating close cooperation between the various key government agencies needed to tackle the health crisis at hand. therefore, a command structure was swiftly established.
the presence of a flexible command structure, and the way and extent to which it was utilized, explains how well the epidemic was contained. flexibility actually enhanced organizational capacities by making organizations more efficient under certain conditions. epidemic control measures such as surveillance, social distancing, and quarantine require widespread support from the general public for them to be effective. singapore's experiences with sars strongly suggest that risk mitigating measures can be effective only when a range of partners and stakeholders (such as government ministries, non-profit organizations, and grass-roots communities) become adequately involved. this is also critical to disaster risk management. whether all of these aspects are transferrable elsewhere needs to be assessed in future research. nonetheless, this unique discipline certainly has helped singapore come out of public health crises on a regular basis. singapore's response to the outbreak of sars offers valuable insights into the kinds of approaches needed to combat future pandemics, especially in southeast asia.
references:
singapore imposes quarantine to stop sars spreading. abc news
managing transboundary crises: identifying the building blocks of an effective response system
apec partners with singapore on disaster management
hfa implementation review for acdr 2010
asian development outlook 2003 update accessed 29
adrc country report
impact of sars on the economy, singapore government
avian influenza: the tip of the iceberg
severe acute respiratory syndrome - singapore
how singapore avoided who advisory, toronto star
impact to lung health of haze from forest fires: the singapore experience
sars: economic impacts and implications, erd policy brief no. 15. manila: asian development bank
advancing disaster risk financing and insurance in asean countries: framework and options for implementation, global facility for disaster reduction and recovery
a new world now after hotel collapse, the straits times
epidemiology and control of sars in singapore
pandemic influenza preparedness and health systems challenges in asia: results from rapid analyses in 6 asian countries
crisis decision making: the centralization thesis revisited
managing a health-related crisis: sars in singapore
singapore government. agc
public health measures implemented during the sars outbreak in singapore
surveillance, detection, and response: managing emerging diseases at national and international levels
shaping the crisis perception of decision makers and its application of singapore's voluntary contribution to post-tsunami reconstruction efforts
organizational collaborative capacities in post disaster society
organizational collaborative capacity in fighting pandemic crises: a literature review from the public management perspective
toward a collaborative cross-border disaster management: a comparative analysis of voluntary organizations in taiwan and singapore
a proposed asean disaster response, training and logistic centre: enhancing regional governance in disaster management
combating sars and h1n1: insights and lessons from singapore's public health control measures
critical infrastructure in the face of a predatory future: preparing for untoward surprise
reflections on pandemic (h1n1) 2009 and the international response
managerial strategies and behavior in networks: a model with evidence from u.s. public education
transparency and trust: risk communications and the singapore experience in managing sars
daily distribution of sars cases statistics
the explosion and fire on board s.t. spyros
manpower research and statistics, singapore government
the challenge of communicable diseases in the who south-east asia region
major issues and challenges of influenza pandemic preparedness in developing countries
crisis management in the homefront, presentation at network government and homeland security workshop
capacity of thailand to contain an emerging influenza pandemic
challenges to effective crisis management: using information and communication technologies to coordinate emergency medical services and emergency department teams
crisis and emergency risk communication as an integrative model
collaboration in the fight against infectious diseases
economic survey of singapore
singapore's efforts in transboundary haze prevention
singapore real estate, and property price
annual report on tourism statistics
annual report on tourism statistics
annual report on tourism statistics
singapore floods
sars in singapore - key lessons from an epidemic
an architecture for network centric operations in unconventional crisis: lessons learnt from singapore's sars experience. california: thesis of naval postgraduate school
sars in singapore: surveillance strategies in a globalizing city
influenza pandemic and the duties of healthcare professionals
are we ready for pandemic influenza
who guidelines on the use of vaccines and antivirals during influenza pandemics. geneva: world health organization
acknowledgement: the authors would like to thank the economic research institute for asean and east asia (eria) for initiating this meaningful research project, and the four commentators (professor yasuyuki sawada, tokyo university; professor chan ngaiweng, universiti sains malaysia; dr. sothea oum, eria; mr. zhou yansheng, scdf) and all participants in eria's two workshops for their insightful comments on an earlier draft of this chapter.
key: cord-257467-b8o5ghvi authors: smith, barbara a.
title: anesthesia as a risk for health care acquired infections date: 2010-12-31 journal: perioperative nursing clinics doi: 10.1016/j.cpen.2010.07.005 sha: doc_id: 257467 cord_uid: b8o5ghvi anesthesia is delivered in a variety of modalities including general, regional, or local. patients are most vulnerable when receiving anesthesia, as they must depend on the anesthesia team to provide this care without untoward effects. it is expected that patients will be protected from health care acquired infections (hais) by appropriate use of infection prevention measures. in addition, the anesthesia team may be at risk of hais because of their intimate contact with the patient's blood and respiratory system. adequate adherence to infection prevention methods should reduce the risk of occupational exposure and infection to the anesthesia team members. health care associated infections involving anesthesia have been transmitted from health care worker to patient, patient to patient, and patient to the anesthesia provider. this article further discusses the risks for hais apparent in intravascular cannulation, endotracheal intubation, and the development of surgical site infections, and examines occupational measures to prevent infections in the health care worker. regardless of the health care setting or the level of provider, the standard of care for infection prevention and managerial oversight of this care should remain the same. the anesthesia team has direct and indirect responsibility for the prevention of infections, which may manifest most commonly as bloodstream infections, local injection site infections, abscesses, meningitis, respiratory tract infections, and surgical site infections (ssis). bacterial and viral infections may also be attributed to anesthetic care.
because anesthesia care interrupts two of the body's significant defense mechanisms, it creates a risk of infection for the patient. first, the body's intact skin functions as a barrier to pathogenic organisms. however, intravascular cannulation for conscious sedation disrupts the skin integrity and permits a local or systemic infection to occur. these risks also apply to central venous vascular catheters inserted by the anesthesiologist, whether in the operating room or as part of a critical care team. insertion of a needle, cannula, or implantable device into the spinal column for anesthesia or analgesia disrupts the skin integrity and provides a direct portal of entry for organisms. second, surgery will often be performed with general anesthesia that involves the insertion of an endotracheal tube to maintain an open airway and permit artificial ventilation during the procedure. although endotracheal intubation during surgery is generally a controlled safe procedure, this artificial airway predisposes the body to exposure to respiratory pathogens whether from the health care provider, the environment, or equipment. the same risk arises when intubation is performed during an emergency situation such as cardiopulmonary resuscitation. measures to prevent this exposure, including the role of the environment and equipment, are discussed in this article. third, although not physically involved at the sterile operative field, the anesthesia team can influence the development of ssis through their collaboration with the surgical team in achieving normothermia, glycemic control, and appropriate antibiotic prophylaxis for the patient. 6 finally, the author examines occupational measures to prevent infections in the health care worker. access to the spinal column is used to provide regional anesthesia, for example with epidural anesthesia, or to deliver medication such as analgesics or steroids.
other examples of procedures that enter the spinal column include diagnostic procedures such as lumbar puncture and myelography. the overall risk of infection from these procedures appears to be low. miller 7 cites a rate of less than 1 per 10,000 cases of serious infection (ie, meningitis or spinal abscess). he notes 2 factors that had a relation to infection: the duration of epidural anesthesia and the patient's medical conditions. in a review conducted through 2005, schulz-stübner and colleagues 8 noted rates of 3.7 to 7.2 spinal abscesses per 100,000 cases and 0.2 to 83 epidural abscesses per 100,000 procedures. baer 9 lists 179 cases of postdural meningitis occurring after spinal or epidural anesthesia and other types of instrumentation. spinal or epidural anesthesia accounted for 65% of the cases (table 1). some investigators have argued that the true incidence is unknown because there is no uniform reporting mechanism in the united states. nonetheless, ruppen and colleagues 10 attempt to define the risk in obstetric epidural anesthesia and cite a rate of 1 deep epidural infection per 145,000 procedures, an admittedly low incidence. in an american society of anesthesiology newsletter, hughes 11 urges his obstetric colleagues to lower the risk to their patients. evidence of infection transmission is also documented in descriptions of outbreaks. five women in two separate states (new york and ohio) developed bacterial meningitis after intrapartum anesthesia (table 2). 1 the article reveals that the causative organism was recovered from a nasal swab of one of the anesthesiologists linked to the 2 cases in ohio. in each outbreak, unmasked personnel (including the anesthesiologist delivering the spinal anesthesia in ohio) were present in the room during the procedures.
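the cited rates use different denominators (per 10,000, per 100,000, per 145,000), which makes them hard to compare at a glance. a minimal sketch, using only the figures quoted above, puts them on a common per-100,000 scale:

```python
# normalize the infection rates cited in the text to a common
# per-100,000 scale for easier comparison (figures from the text)
cited_rates = {
    "serious infection (miller)": (1, 10_000),
    "spinal abscess, low end (schulz-stubner)": (3.7, 100_000),
    "spinal abscess, high end (schulz-stubner)": (7.2, 100_000),
    "deep epidural infection, obstetric (ruppen)": (1, 145_000),
}
for name, (cases, denominator) in cited_rates.items():
    per_100k = cases / denominator * 100_000
    print(f"{name}: {per_100k:.2f} per 100,000")
```

on this common scale, miller's upper bound (10 per 100,000) is an order of magnitude above ruppen's obstetric estimate (about 0.69 per 100,000), which is consistent with the text's characterization of the latter as an admittedly low incidence.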
the centers for disease control and prevention (cdc) reported eight cases of meningitis following myelography that appear to be related to contamination with mouth flora from the clinicians who performed the procedure. 12 as part of the investigation, the cdc eliminated equipment and fluids as a potential source for these infections, and confirmed that adequate aseptic technique had been followed. as described by baer, 9 the mechanism of infection in these cases is droplet transmission of aerosolized mouth organisms, contamination from skin bacteria, or hematogenous or direct spread from an endogenous site of infection. regarding the first mechanism, the likelihood is that some of these infections are related to the clinician who performed the procedure. this assumption is supported by the observations that several cases clustered among specific operators, or were linked by dna testing of nasal swabs from the operator and positive cerebrospinal fluid cultures, as in the ohio case cited. these data suggest that the clinician can implement preventive practices related to anesthesia or analgesia given via the spinal route to reduce the likelihood of infection. these practices include skin disinfection with an appropriate antiseptic, sterile gloves and sterile drapes, and aseptic technique. because of the growing evidence for droplet transmission of oropharyngeal flora during procedures that puncture the spinal column, the cdc's guidelines for isolation precautions recommend the use of a surgical mask by personnel placing a catheter or injecting material into the spinal canal or subdural space.
12 a recent practice advisory prepared by the american society of anesthesiologists (asa) concurs with the implementation of aseptic technique when handling neuraxial needles and catheters, and states it should include "hand washing, wearing of sterile gloves, wearing of caps, wearing of masks covering both the mouth and nose, use of individual packets of skin preparation, and sterile draping of the patient." the same advisory does not make a specific recommendation regarding the type of skin antisepsis to use. 13 intravascular catheters, including central and peripheral venous catheters and arterial catheters, play an integral part in the delivery of anesthesia or analgesia. once again, these devices provide an opportunity for organisms to enter the normally sterile vasculature. the insertion and care of these catheters should remain the same regardless of whether they are inserted in an operating room, a critical care unit, or a free-standing practice. there are 2 key mechanisms by which a catheter can lead to an infection. 14 the first occurs with colonization of the device and is referred to as catheter-associated infection. these infections arise when skin organisms gain entry via the puncture site at the time of insertion or shortly thereafter, when the catheter hub becomes contaminated during use, or when organisms spread hematogenously from another site of infection in the body. catheter-associated infections can lead to local site infections or systemic infections including bacteremia, sepsis, or endocarditis. the second mechanism occurs with contamination of the medication or substance being injected, which is referred to as infusate-associated infection. although intrinsic contamination of intravenous fluids or medications from the manufacturer is rare in the united states, improper procedures by the anesthetist or technician during medication preparation and administration can lead to infusate-associated infections.
in this situation the bacteria, virus, or fungus is directly infused into the patient's bloodstream. the rate of catheter-associated infections varies by the type of device used. for peripherally inserted, short-term catheters the risk is low. lee and colleagues 15 report a local site infection rate of 2.1% to 2.6% among 3165 patients with short peripheral intravenous catheters. no patients in this group developed a bloodstream infection. rates of arterial line catheter infections are similarly low, with lucet and colleagues 16 and koh and colleagues 17 reporting rates of 1.0 and 0.92 bacteremias per 1000 catheter days, respectively. these investigators report low rates of bacteremias related to central venous lines as well. the risk of central line-associated bloodstream infections in critical care patients ranges from 1.3 infections per 1000 catheter days in pediatric medical units to 5.5 in burn units. 18 table 3 lists the central line-associated bloodstream infection rates in a sample of units in which the anesthetist may have involvement. although it is possible for individual solutions to become contaminated and subsequently infused into patients, it is difficult to attribute an individual infection to a specific medication, vial, or infusion bag without direct causal evidence. hence, data on infusate-associated infections derive mainly from experiences with outbreaks. for example, blossom and colleagues 19 describe an outbreak of 162 serratia marcescens bacteremias in 9 states related to a manufacturer's contamination of prefilled heparin and saline syringes. these circumstances are clearly beyond the control of the anesthesia team, although the team must respond promptly to alerts regarding potential contamination. there are, however, reports of medication contamination occurring under the control of anesthesia personnel. in 2003, morbidity and mortality weekly report described hepatitis b and c virus transmission occurring in 3 separate locations.
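the per-1000-catheter-day figures quoted above are simple surveillance rates that can be reproduced from raw counts. a minimal sketch follows; the counts in the example are illustrative only, not data from the cited studies or from nhsn:

```python
def rate_per_1000_device_days(infections: int, device_days: int) -> float:
    """Surveillance rate: infections per 1000 device (e.g. catheter) days."""
    if device_days <= 0:
        raise ValueError("device_days must be positive")
    return infections / device_days * 1000

# illustrative counts only (not from lucet, koh, or nhsn):
print(rate_per_1000_device_days(12, 12000))  # -> 1.0 bacteremia per 1000 catheter days
print(rate_per_1000_device_days(23, 25000))  # -> ~0.92
```

the same denominator convention (device-days rather than patient counts) underlies the ventilator-associated pneumonia rates discussed later in the article.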
3 in each of these practices, reuse of needles and syringes and contamination of multidose medication vials led to patient-to-patient transmission of viral infections. as depicted in table 4 , more than 200 people were affected. despite this significant number of patient-to-patient transmissions, practitioners have continued to demonstrate unsafe medication practices. as recently as 2008, 6 patients were infected with hepatitis c due to unsafe injection practices used during sedation for endoscopic procedures. 2 the investigation revealed that the anesthesia provider contaminated vials of propofol by repeated aspiration into the vial with a syringe contaminated with hepatitis c from backflow of the index patient's blood. although the vial was labeled for single-patient use, the practitioner reused the vial on subsequent patients. a graphical example is shown in fig. 1 . the cdc has published extensive guidelines for the prevention of intravascular infections. 20 key elements intended to reduce catheter-associated infections that apply to peripheral, central, or arterial catheters handled by the anesthesia team include: skin disinfection of the intravenous insertion site with an appropriate disinfectant (a chlorhexidine-based preparation is preferred), aseptic technique during insertion and care, and decontamination of ports and stopcocks with a disinfectant such as 70% alcohol before accessing the device. the cdc guidelines and the asa recommend additional specifications for the insertion and maintenance of central venous catheters because of their higher risk of infection. 21 many facilities have adopted a bundle approach to the insertion of central line catheters both in the operating room and elsewhere. the elements of the bundle are:
1. hand hygiene before insertion
2. full barrier precautions: sterile gown, gloves, masks, and large sterile drapes
3. skin antisepsis with chlorhexidine
4. subclavian vein as preferred anatomic site versus internal jugular or femoral
5.
daily review of line necessity. from the anesthetist's perspective, the elimination of infusate-associated infections demands preventing contamination of medications and infusions. most practitioners presumably would report adherence to safe medication handling. yet a survey conducted by the american association of nurse anesthetists (aana) revealed that 1% to 3% of clinicians reuse needles or syringes on multiple patients. 22 the aana has joined with the cdc, two state medical societies, the association for practitioners in infection control, and other advocacy groups in the safe injection practices and awareness campaign to further educate health care providers and the public about the importance of these practices. the campaign poster is displayed in fig. 2 . 23 because of the aforementioned outbreaks of hepatitis transmission, the cdc's 2007 guidelines for isolation precautions 12 highlight safe injection practices that are outlined in fig. 3 . the asa supports these practices 21 and makes further recommendations: cleanse the rubber septum of vials and the neck of glass ampoules with a disinfectant; draw up medications as close as possible to the time of use; discard medications in a syringe within 24 hours unless otherwise specified by the manufacturer or pharmacy; and follow expiration times for medications, especially the time limits for the use of lipid formulations such as propofol. according to the national healthcare safety network, the rate of postprocedure pneumonias varies greatly by procedure. 18 for example, among those procedures reporting more than 1000 cases, patients undergoing knee prosthesis had a rate of 0.06 postoperative pneumonias per 100 procedures compared with cardiac surgery patients, who had a rate of 1.19 pneumonias per 100 procedures. some additional information is gleaned from examining rates of ventilator-associated pneumonias (vap).
among surgical-type critical care units, the pooled mean rate per 1000 ventilator days ranged from a low of 0.6 in pediatric cardiothoracic units to a high of 8.1 in trauma units. however, it is difficult to distinguish how much of a direct impact intubation and anesthesia had on the development of these pneumonias. one study by rello and colleagues 24 examined the development of pneumonia within the first 48 hours of intubation. eighteen of 250 intensive care unit (icu) patients developed pneumonia within the first 24 hours. there were 65 surgical patients included in this study. the 2 most important risk factors for pneumonia were undergoing cardiopulmonary resuscitation and receiving conscious sedation. the investigators conclude that variables directly related to the intubation had less of an impact on the occurrence of pneumonia. nonetheless, intubation places the patient at risk of infection for several reasons. because intubation interrupts the defense mechanisms of the upper airway, it increases the risk of aspiration. aspiration of oropharyngeal secretions is a prime cause of health care-acquired pneumonia. this condition may be further aggravated by mechanical damage to the larynx or trachea from the endotracheal tube or stylet. 7 furthermore, mechanical ventilation increases the risk of infection. measures to reduce the infection risk associated with intubation and mechanical ventilation deal with technique and equipment. cheung and colleagues 25 reviewed the literature to determine the impact of sterile handling of the endotracheal tube on the incidence of pneumonia. of note, the investigators found very few data on the topic, yet noted that intubations performed under unsterile conditions do occur; they do not provide a recommendation. it is prudent for intubation to be performed as aseptically as possible, with personal protective equipment worn for the safety of the health care worker.
oral intubation is preferred over nasal intubation because the latter is more likely to lead to sinusitis, thereby increasing the risk of aspiration of infected secretions. 26 care should be taken to drain condensate in the ventilator tubing away from the patient. although there are other measures to reduce vap, such as mouth care and semirecumbent positioning of the patient, these apply after the intraoperative period. equipment utilized by the anesthesia team includes endotracheal tubes, laryngoscope handles and blades, fiberoptic endoscopes, and anesthesia circuits, machines, and carts. there are also ancillary devices used by the team such as pulse oximetry, invasive temperature probes, and airways. this equipment may become contaminated from contact with the patient's skin, blood, or secretions, splashes from the operative field, or contact with contaminated hands of the health care worker. the cdc, asa, and aana each have comparable standards for cleaning and disinfection of these items. these standards are based on the spaulding classification method 27 that stratifies items based on their likely contact with a sterile body site, mucous membrane, or intact skin, as noted in table 5 . neither the cdc 12 nor the asa 21 recommends the routine use of a bacterial filter for the breathing circuits or anesthesia ventilators. conversely, the aana states that the "use of bacterial filters is recommended," although it acknowledges that this use is controversial. 28 each of these organizations supports the use of a bacterial filter when caring for an infectious tuberculosis patient. another debated topic is the disinfection of laryngoscope handles. because they do not enter sterile tissue or touch mucous membranes, the spaulding classification would indicate cleaning and low-level disinfection. there is, however, the risk of contamination with body fluids.
call and colleagues 29 challenged the common practice of wiping laryngoscope handles with a low-level disinfectant between operative cases. after culturing 40 handles that had been cleaned according to the facilities' standard practice, they found 75% had positive bacterial cultures. most importantly, standard protocols should be developed that outline the correct cleaning, disinfection, or sterilization process for each item used by the anesthesia team. the manufacturer of the equipment should be consulted for their recommendations. an oversight mechanism should be included in the policy to ensure adherence to the correct practice. adequate training must be provided. overall, the documented transmission of infection from anesthesia equipment appears to be low. yet loftus and colleagues 30 raise the issue of transmission of bacteria in the anesthesia work area. these investigators cultured 2 specific areas of the anesthesia machine and the sterile stopcock just before the beginning of the case and again at the end of the case. their results showed that 32% of the stopcocks were contaminated by the end of the case. the work area showed a significant increase in bacterial contamination as well. two cases of methicillin-resistant staphylococcus aureus (mrsa) were transmitted to the work area intraoperatively. one case of vancomycin-resistant enterococcus transmission was documented between the anesthesia work area and the stopcock. the investigators also noted a trend toward increased hais among patients with contaminated stopcocks. the machine and stopcocks appear to have become contaminated by contact with providers' hands or lapses in aseptic technique. these results reinforce the need for rigorous attention to hand hygiene not only before the start of surgery but also intraoperatively. most studies indicate that adherence to hand hygiene by health care workers needs to be improved.
mcguckin and colleagues 31 report only modest improvement, to 51% compliance among non-icu staff, after a year-long program of observation and feedback. a 2004 report by pittet and colleagues 32 found a 23% compliance rate among anesthesiologists. hand hygiene is clearly a challenge in the operating room because of the multiple functions being performed, and limited access to hand hygiene products within the operating room undoubtedly contributes to poor compliance. koff and colleagues 33 address this latter challenge through a study utilizing a portable device that dispenses alcohol-based hand rub. the device has the added benefit of tracking the frequency of use and providing a reminder if too long a time has elapsed between hand hygiene events. the period after the introduction of the device was considered the study phase. during the study phase, hand hygiene events increased among attending anesthesiologists and other caregivers by 6.9 and 8.3 times per hour, respectively. the investigators also monitored the frequency of stopcock contamination and the occurrence of hais, and demonstrated decreases in both of these indicators, as noted in table 6 . while the reduction in hais is promising, koff and colleagues caution that additional research is needed to confirm these results. ssis are the second most frequent hai. 34 control measures for the prevention of ssis include preoperative preparation of the patient, sterile attire and draping, surgical hand preparation, skin antisepsis, air handling, and sterile surgical instrumentation. there are additional conditions that can influence the occurrence of ssis in which the anesthesiologist may be involved. one measure aimed at reducing bacteria at the surgical site is the delivery of antibiotic prophylaxis. the national surgical infection prevention project (sip) proposes a 25% reduction in national surgical complication rates by adherence to 3 indicators 35 :
1. administration of an appropriate antibiotic as described by the sip.
the antibiotic is selected based on the organisms most likely to cause infection and varies by the type of surgery.
2. timely administration of the antibiotic. to reach adequate blood and tissue concentrations, the antibiotic should be administered within the 60 minutes prior to the surgical incision. (vancomycin may be given up to 120 minutes prior.)
3. discontinuation of prophylactic antibiotics within 24 hours (48 hours for cardiac surgery).
a second intervention to reduce ssis is the maintenance of normothermia. hypothermia is thought to contribute to infection because of a decrease in subcutaneous tissue perfusion. 6 lastly, glycemic control has been shown to reduce the rate of infections. anesthesia providers are at risk of occupational infections from direct contact with blood and respiratory secretions. in addition, they may be exposed to microorganisms via the airborne or droplet route. diseases transmitted through the airborne route include tuberculosis, measles, and varicella. most clinicians should be immune to measles and varicella because of effective vaccines. surgery should be delayed for patients with these active infections. if the case cannot be postponed, the air handling in the operating room should ideally have negative pressure relative to the corridor. as previously mentioned, a bacterial filter should be placed on the anesthesia breathing circuit for patients with active tuberculosis. the health care provider should wear an n95 respirator approved by the national institute for occupational safety and health. when called to intubate patients on airborne isolation, again the n95 respirator is indicated. a large number of sars cases in canada were occupationally acquired. fowler and colleagues 5 determined in one small series that physicians and nurses involved in intubation had relative risks of 3.82 and 13.29, respectively, of developing sars. this result underscores the need for adequate respiratory protection.
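a relative risk like those reported by fowler and colleagues compares the attack rate among exposed staff (those involved in intubation) with the rate among unexposed staff. a minimal sketch of that cohort calculation follows; the counts are made up for illustration and are not the raw data from the cited study:

```python
def relative_risk(cases_exposed: int, total_exposed: int,
                  cases_unexposed: int, total_unexposed: int) -> float:
    """Cohort relative risk: attack rate in the exposed group
    divided by attack rate in the unexposed group."""
    return (cases_exposed / total_exposed) / (cases_unexposed / total_unexposed)

# illustrative counts only: 8/32 exposed vs 8/128 unexposed staff infected
print(relative_risk(8, 32, 8, 128))  # -> 4.0
```

a relative risk of 4.0 in this toy example would mean the exposed group's infection risk is four times that of the unexposed group; the values of 3.82 and 13.29 quoted above are interpreted the same way.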
infections spread through the droplet route include pertussis, mumps, and influenza. as most of these patients are unlikely to undergo elective surgery or procedures while symptomatic, exposure is more likely to occur from undiagnosed cases or from people who are shedding organisms in the few days prior to symptoms. respiratory protection is indicated for known or suspect cases. immunization is strongly encouraged. (for the 2009-2010 influenza season, the cdc recommended use of an n95 for contact with patients with influenza-like symptoms. current recommendations can be found at http://www.cdc.gov/flu/professionals.) hand hygiene is indicated. because of their contact with blood and other body fluids, anesthesia providers may be exposed to viral pathogens such as hepatitis b or c and human immunodeficiency virus (hiv). it is difficult to determine the actual number of occupationally acquired blood-borne infections in the discipline. in a 1998 study among anesthesia personnel, the estimated average 30-year risks of hiv or hepatitis c virus infection per full-time equivalent were 0.049% and 0.45%, respectively. 36 in addition, there may be exposure to bacterial pathogens such as mrsa and clostridium difficile. the aana supports the cdc's recommendations to use standard precautions in the care of all patients. 28 in summary, standard precautions entail: consider all blood and body fluids as potentially infectious; use personal protective equipment (ppe) (gloves, gowns, protective eye wear, and masks) when anticipating contact with blood or body fluids, with the ppe worn depending on the task being performed and the possibility that splash or aerosolization can occur; and handle and dispose of all needles and syringes properly. the practitioner should be aware of his or her facility's protocol for managing occupational exposure to blood and body fluid. body fluid exposures should be evaluated promptly to determine the need for antiviral or other prophylaxis.
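the 30-year cumulative risks cited above can be translated into an approximate annual risk if one assumes a constant, independent risk each year. that constant-hazard assumption is ours for illustration, not part of the cited study:

```python
def annual_risk(cumulative_risk: float, years: int) -> float:
    """Annual risk implied by a cumulative risk over `years`,
    assuming the same independent risk every year:
    cumulative = 1 - (1 - annual) ** years."""
    return 1 - (1 - cumulative_risk) ** (1 / years)

# 0.45% estimated 30-year hepatitis c risk per full-time equivalent:
print(annual_risk(0.0045, 30))  # about 0.00015, i.e. roughly 0.015% per year
```

the small annual figure illustrates why cumulative career-long risk, rather than single-year risk, is the more informative measure for occupational blood-borne exposure.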
transmission-based precautions may be added for particular diseases that are highly transmissible or of epidemiologic importance. there are 3 categories: airborne isolation, droplet precautions, and contact precautions. the documented risk of infection related to anesthesia is low, yet the potential exists for serious infectious outcomes, including death. the risk of infection can be minimized by adherence to hand hygiene, aseptic technique, safe injection practices, equipment decontamination, and use of ppe by all members of the anesthesia team.
bacterial meningitis after intrapartum spinal anesthesia-new york and ohio
acute hepatitis c virus infections attributed to unsafe injection practices at an endoscopic clinic-nevada
transmission of hepatitis b and c viruses in outpatient settings
clinical anesthesia. philadelphia: lippincott, williams and wilkins
transmission of severe acute respiratory syndrome during intubation and mechanical ventilation
the anesthesiologist's role in the prevention of surgical site infection
nosocomial infections and infection control in regional anesthesia
post dural puncture meningitis
incidence of epidural hematoma, infection and neurologic injury in obstetric patients with epidural analgesia/anesthesia
neuraxial blockade in obstetrics and complications related to infection: can we lower the risk?
park ridge (il): american society of anesthesiology newsletter
guideline for isolation precautions: preventing transmission of infectious agents in healthcare settings
practice advisory for the prevention, diagnosis, and management of infectious complications associated with neuraxial techniques: a report by the american society of anesthesiologists task force on infectious complications associated with neuraxial techniques
intravascular device infection
risk factors for peripheral intravenous catheter infection in hospitalized patients: a prospective study of 3165 patients
infectious risk associated with arterial catheters compared to central venous catheters
prospective study of peripheral arterial catheter infection and comparison with concurrently sited central venous catheters
national healthcare safety network report (nhsn): data summary for
multistate outbreak of serratia marcescens bloodstream infections caused by contamination of prefilled heparin and isotonic sodium chloride solution syringes
guidelines for prevention of intravascular catheter-related infections
american society of anesthesiologists. recommendations for infection control for the practice of anesthesiology
aana condemns unsafe injection practice. press release
risk factors for developing pneumonia within 48 hours of intubation
endotracheal intubation: the role of sterility
apic text of infection control and epidemiology
guideline for disinfection and sterilization in healthcare facilities
american association of nurse anesthetists. aana infection control guide for certified nurse anesthetists.
park ridge (il): aana
nosocomial contamination of laryngoscope handles: challenging current guidelines
transmission of pathogenic bacterial organisms in the anesthesia work area
hand hygiene compliance rates in the united states-a one year multicenter collaboration using product/volume usage measurement and feedback
hand hygiene among physicians: performance, beliefs and perceptions
reduction in intraoperative bacterial contamination of peripheral intravenous tubing through the use of a novel device
apic text of infection control and epidemiology
multicenter study of contaminated percutaneous injuries in anesthesia personnel
key: cord-265595-55s19mr1
authors: brug, johannes; aro, arja r.; richardus, jan hendrik
title: risk perceptions and behaviour: towards pandemic control of emerging infectious diseases: international research on risk perception in the control of emerging infectious diseases
date: 2009-01-06
journal: int j behav med
doi: 10.1007/s12529-008-9000-x
sha:
doc_id: 265595 cord_uid: 55s19mr1
in the beginning of 2003, the world was alarmed by the emergence of a new and apparently fatal infectious disease. the disease was labelled sars. thanks to enormous efforts made by national and international organisations, the epidemic was brought under control by the summer of that year. in recent years, the world has also been confronted with outbreaks or threats of outbreaks of other emerging infectious diseases such as avian influenza. controlling new infectious diseases may require identification of the organisms and their infectivity, development of vaccines and therapies, contact tracing, isolation, and screening. many of these issues are partly dependent on human behaviours. for example, the success of prevention of infectivity (e.g.
engaging in precautionary behaviours such as wearing masks, hand hygiene, isolation, etc.), vaccination, contact tracing and population screening are all more or less dependent on whether people at risk comply with behavioural recommendations. especially in the early phases of a possible epidemic, compliance with precautionary behaviours among the populations at risk is often the only means of preventing a further spread of the disease. however, very little research has been conducted to explore the determinants of behavioural responses to infectious disease outbreaks [1, 2] . the present special series of the international journal of behavioral medicine is dedicated to such research. one of the six papers in this special series, by vartti et al. [3] , originated from an international collaboration of behavioural scientists to study risk perceptions around sars during the sars outbreak. the aro et al. [4] paper represents early work related to risk perceptions among travellers during the avian influenza outbreak. three papers [5] [6] [7] were the result of a european commission-funded project, called sarscontrol, that was partly dedicated to exploring risk perceptions and risk communications related to sars and other emerging infectious diseases. severe acute respiratory syndrome (sars) was a new infectious disease caused by infection with a novel coronavirus, which was provisionally termed sars-associated coronavirus (sars-cov) [8] [9] [10] . the earliest cases of sars are known to have occurred in mid-november 2002 in guangdong province, china. sars was first recognised in late february 2003, when cases of an atypical pneumonia of unknown cause began appearing among staff at hospitals in guangdong, china and hanoi, vietnam [11] .
within 2 weeks, similar outbreaks occurred in various hospitals in hong kong, singapore and toronto, and the number of worldwide cases exceeded 4,000 within 2 months, and 7,000 a few weeks later, with cases being reported from 30 countries. during the peak of the global outbreak, near the start of may 2003, more than 200 new cases were being reported each day. more than 900 people died from sars [12] . china was hit hardest, with over 5,000 patients and approximately 350 deaths. after july, sars appeared to be under control. although sars did not have the disastrous health impact that many at first feared, the panic caused by sars had an enormous economic impact in many countries because of the health fears and related control measures. the global travel, tourism and related industries in particular faced a significant, although mostly temporary, downturn in income. the global macroeconomic impact has been estimated at 30 to 100 billion us dollars. although the european union was not afflicted heavily by the sars epidemic in terms of patient numbers, there was large public concern about the disease. while the dissemination of sars was prevented in europe and controlled in all affected areas within a few months, this may not be the case for other emerging infectious diseases. for instance, the west nile virus was introduced in north america in 1999 and has diffused widely since then despite very aggressive control efforts. in the usa in 2002, 4,156 cases were notified, of whom 284 died [13] . severe infection can emerge in europe too. the h7n7 influenza episode among workers of the dutch poultry industry in 2003 [14] has shown that the potential exists for pandemic influenza to start within europe. had a human strain circulated in the poultry worker population at the same time as the zoonotic (h7n7) strains, it could have precipitated the emergence of a new human-adapted strain with fast secondary diffusion.
already in april 2003, an international psychosocial sars research consortium was formed, initiated by professor george bishop at singapore university, which developed a survey instrument in several languages to probe awareness, knowledge, risk perceptions and precautionary behaviours related to sars. in 2004, a european union-sponsored 3-year research programme was started with collaborators from europe and china (partly building on the methods of the psychosocial sars research consortium), titled "effective and acceptable strategies for the control of sars and new emerging infections in china and europe" (sarscontrol). risk perception and risk communication were themes in two of the nine work packages of the sarscontrol project. effective management of new epidemic infectious disease risks, in the phase when no treatment or vaccination is yet possible, is largely dependent on precautionary behaviour of the population. implementation of precautionary behaviour is largely dependent on effective risk communication, i.e. communication that induces realistic risk perceptions, correct knowledge and skills to promote and enable precautionary practices. scientific knowledge about these topics in the area of infectious disease control is scarce. nor is it known whether the theories and measures developed for risk perception research on, for example, chronic diseases can be applied in the area of infectious diseases. however, such knowledge is vital for effective control of newly emerging infectious diseases, because our ability to promote health-protective behavioural change depends on our knowledge of important determinants of such behaviour [15] . for people to voluntarily engage in precautionary actions, they first of all need to be aware of the risk. risk perception is a central feature in many health behaviour theories.
according to protection motivation theory, for example [16] , protection motivation is the result of threat appraisal and coping appraisal. threat appraisal consists of estimates of the chance of contracting a disease (perceived vulnerability or susceptibility) and estimates of the seriousness of a disease (perceived severity). risk perceptions thus are important for precautionary actions, but risk perceptions are often biased [17] . unrealistic optimism about health risks is often observed for familiar risks that are perceived to be largely under volitional control. such optimism may result in lack of precautions and false feelings of security. a pessimistic bias is more likely for new, unfamiliar risks that are perceived as uncontrollable. such unwarranted high-risk perceptions may lead to unnecessary mass scares, and are often combined with stigmatisation of specific risk groups. perceptions of risk are a necessary but often not sufficient condition for engagement in such behaviours. therefore, higher risk perceptions may only predict protective behaviour when people believe that effective protective actions are available (response efficacy) and when they are confident that they have the abilities to engage in such protective actions (self-efficacy). preliminary research on sars as well as avian influenza risk perceptions supports these theorised associations and shows inverse associations between risk perceptions and efficacy beliefs [2, 18] . furthermore, risk perceptions as well as efficacy beliefs in the early stages of a possible pandemic are dependent on communications with and between the members of the groups at risk. risk communication messages that are not comprehended by the public at risk, or communication of conflicting risk messages, will result in a lack of precautionary actions. communications that are perceived as coming from a non-trustworthy source may have the same results.
however, risk communication messages are sometimes very quickly adopted by the media, possibly leading to an 'amplification' of risk information, unnecessary mass scares, and unnecessary or ineffective precautionary actions. in the first paper of this special series, leppin and aro [19] provide an overview of the theoretical frameworks on which risk perception and infectious disease research is founded. leppin and aro first of all make a distinction between a more sociological and a primarily psychological approach to risk perception research. they conclude that current risk perception research in infectious disease epidemics is seldom theory based or conceptually clear. this is understandable for first surveys in the early phases of newly emerging epidemics, but there certainly is a need to consolidate the theoretical and methodological research base. we also need to find out empirically whether the theories and methods developed mostly for chronic diseases under the volitional control of individuals can be applied directly to emerging epidemics. four of the papers present empirical, mostly explorative, original research on risk perceptions, knowledge, beliefs and other issues related to sars during or after the sars outbreak in 2003. de zwart and colleagues, in their eight-country survey in 2005, almost two years after the sars outbreak, found that the perceived threat of sars, in case of an outbreak in one's country, was higher than that of other diseases [7]. perceived vulnerability to sars was at an intermediate level compared to other diseases, while perceived severity was high. the perceived threat of sars varied between countries in europe and asia, but these differences did not appear to be associated with proximity to the 2003 sars outbreak. vartti et al.
[3], in their study during the sars outbreak, found that although both finland and the netherlands were unaffected by the outbreak, the finns were more likely to be knowledgeable and worried about sars, as well as to have low perceived comparative sars risk and poor personal efficacy beliefs about preventing sars. the finns were also more likely than the dutch to have high confidence in physicians on sars issues, and less likely to have received information from the internet and to have confidence in internet information. voeten et al. [5] and jiang et al. [6] studied the chinese communities in the netherlands and the uk because of their close communication and travel contacts with china, where the outbreak was most severe. jiang and colleagues, in their qualitative study, revealed that information from affected asia influenced the perceived threat from sars and protective behaviour among the chinese in europe when more relevant local information was absent. when a high perceived threat was combined with low efficacy regarding precautionary measures, avoidance-based precautionary action appeared to dominate responses to sars. these actions may have contributed to the adverse impact of sars on the community. the voeten et al. [5] study results indicate that the chinese community members relied more on information from friends and chinese media, and had less confidence in their doctor, government agencies and consumer interest groups. while their knowledge of sars was high, they reported a lower perceived threat and higher self-efficacy than the general populations with regard to sars and avian flu, due to a lower perceived severity. the aro et al. [4] study, from the early phase of the avian influenza outbreak, found that younger travellers and those on holiday were willing to take more health risks than older travellers or those on business trips.
the overall results indicate that people across europe and east asia do regard recently emerging infectious diseases as serious potential health threats, based on information they receive from a range of different sources, with clear differences between countries and regions. these differences do not appear to be necessarily associated with proximity to an outbreak. it remains unclear whether cultural differences or experience with an outbreak may explain these differences in risk perceptions and beliefs. given the clear and present danger of newly emerging infectious disease outbreaks in the near future, and the importance of the public response and precautionary actions in controlling their spread, additional research on risk perceptions and other behavioural determinants is warranted. the present series of papers offers a first qualitative and social-epidemiological exploration. more theory-driven and more strongly designed longitudinal and experimental studies are needed to test some of the hypotheses touched upon in this issue.
references (titles as cited in the text above):
- responding to global infectious disease outbreaks: lessons from sars on the role of risk perception, communication and management
- sars risk perception, knowledge, precautions, and information sources, the netherlands
- sars knowledge, perceptions and behaviors: a comparison between finns and the dutch during the sars outbreak in 2003
- willingness to take travel-related health risks: a study among finnish tourists in asia during the avian influenza outbreak
- sources of information and health beliefs related to sars and avian influenza among chinese communities in the united kingdom and the netherlands, as compared to the general population in these countries
- the perceived threat of sars and its impact on precautionary actions and adverse consequences: a qualitative study among chinese communities in the united kingdom and the netherlands
- perceived threat, risk perception and efficacy beliefs related to sars and other (emerging) infectious diseases: results of an international survey
- coronavirus as a possible cause of severe acute respiratory syndrome
- a novel coronavirus associated with severe acute respiratory syndrome
- identification of a novel coronavirus in patients with severe acute respiratory syndrome
- cumulative number of reported cases of severe acute respiratory syndrome (sars)
- comparative susceptibility of selected avian and mammalian species to a hong kong-origin h5n1 high-pathogenicity avian influenza virus
- transmission of avian influenza viruses to and between humans
- theory, evidence and intervention mapping to improve behavior nutrition and physical activity interventions
- cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation
- the precaution adoption process
- avian flu risk perception: europe and asia
- risk perception related to sars and avian influenza: theoretical foundations of current behavioral research

key: cord-102776-2upbx2lp authors: niu, zhibin; cheng, dawei; zhang, liqing; zhang, jiawan
title: visual analytics for networked-guarantee loans risk management date: 2017-04-06 doi: 10.1109/pacificvis.2018.00028 doc_id: 102776 cord_uid: 2upbx2lp groups of enterprises guarantee each other and form complex guarantee networks when they try to obtain loans from banks. such secured loans can enhance solvency and promote rapid growth during economic upturns, but potential systemic risk may arise within the risk-bound community. during an economic downturn in particular, a crisis may spread through the guarantee network like falling dominoes. monitoring financial status and preventing or reducing systemic risk when a crisis happens are major concerns of the regulatory commission and banks. we propose a visual analytics approach for loan guarantee network risk management, and consolidate five analysis tasks with financial experts: i) visual analytics for enterprise default risk, whereby a hybrid representation is devised to predict default risk and an interface is developed to visualize key indicators; ii) visual analytics for high-default groups, whereby a community detection based interactive approach is presented; iii) visual analytics for high-default patterns, whereby a motif detection based interactive approach is described and a shneiderman mantra strategy is adopted to reduce the computational complexity; iv) visual analytics for the evolving guarantee network, whereby animation is used to help users understand the guarantee dynamics; v) visual analytics for the default diffusion path. the temporal diffusion path analysis can be useful for the government and banks to monitor the default spread status. it also provides insight for taking precautionary measures to prevent and dissolve systemic financial risk. we implement the system with case studies on a real-world guarantee network. two financial experts were consulted and endorsed the developed tool.
to the best of our knowledge, this is the first visual analytics tool to explore guarantee network risks in a systematic manner. financial safety is a main concern of the government and banks. most small and medium enterprises (smes) find it difficult to obtain loans from banks because of their limited credit qualifications, so they often need to seek loan guarantees. in fact, guaranteed loans are already an important way to raise money in addition to seeking a public listing. in some developed economies such as the us and the uk, special government-backed banks are established to provide guarantee credit [22, 27, 30, 40, 55]; in emerging economies such as korea [19] and china [31], it is more common for corporations to guarantee each other when they are trying to secure loans from lending institutions. it is reported that in 2014 a quarter of the $13 trillion in total outstanding loans in china were guaranteed loans [40], with an 18% year-on-year increase [36]. this has led to a noticeable new phenomenon: a large number of corporations back each other and form complex guarantee networks. appropriate guarantee unions may reduce default risk, but contagious damage across the networked enterprises can happen in practice. during an economic downturn, large-scale breaches of contract can seriously deteriorate banking asset quality and cause a systemic crisis. loan guarantee networks have existed for less than twenty years and are still not well understood. the financial academic community has published some qualitative analyses of small guarantee networks, but there is little quantitative research. in the banking industry, credit assessors evaluate an enterprise mostly on the basis of classic credit rating approaches, which are not well suited to these complex benefit relationships.
the risk management of loan guarantee networks is challenging. firstly, a loan guarantee network may consist of thousands of enterprises with complex guarantee relationships and intertwined risk factors, making it very difficult to analyze. fig. 1 illustrates a real guarantee network we constructed using ten years of bank loan records; it consists of more than 1000 enterprises, each of which has more than 3000 financial entries. monitoring financial status is so difficult that regulators can usually study a case in depth only after the capital chain has ruptured. secondly, the inadequate transparency of small and medium enterprise business operations (for example, loan officers have no access to enterprise net asset information) makes loan risk evaluation more difficult. some borrowers fraudulently obtain loans by exploiting flaws in banks' lending risk management. understanding of risky loan guarantees, especially malicious guarantees, is still relatively limited. thirdly, thousands of guarantee networks of different complexities coexist over a long period and evolve over time, which requires adaptive strategies for preventing, identifying, and dismantling systemic crises. against the complex background of a growth period, a painful period of structural adjustment, and the early stage of a stimulus period, structural and deep-level contradictions have emerged in economic development; all kinds of risk factors accelerate risk transmission and amplification along the guarantee network, and a guarantee network may degenerate from a "mutual aid group" into a "breach-of-contract chain". in this paper, we propose a visual analytics approach for loan guarantee network risk management. it includes visual analytics for i) enterprise default risk; ii) high-default groups; iii) high-default patterns; iv) the evolving guarantee network; and v) the default diffusion path. in a nutshell, the main contributions are: 1.
we consolidated, together with financial experts, five key research problems for loan guarantee network risk management, driven by emerging finance industry demands; we believe this is an important research problem for the visual analytics science and technology community; 2. we propose intuitive visual analytics approaches for the tasks of i) enterprise default risk; ii) high-default groups; iii) high-default patterns; iv) the evolving guarantee network; and v) the default diffusion path. 3. we construct a real loan guarantee network and perform an empirical study on ten years of bank loan records. we highlight three high-default patterns which are difficult to discover without a visual analytics approach. we conducted interviews with two banking loan experts and received their endorsement. the rest of the paper is organized as follows: section 2 describes work on different aspects related to our problem; section 3 details the five visual analytics tasks and our approaches; section 4 describes the data and case studies; and we report user study results in section 5. conclusions and future work are described in section 6. to the best of our knowledge, this is the first work on visual analysis for loan guarantee network risk management. we therefore introduce relevant work on network analytics in the financial domain; anomalous and significant subgraph detection in attributed networks; and financial security visualization. credit risk evaluation. consumer credit risk evaluation is often addressed technically in a data-driven fashion and has been extensively investigated [5, 24]. since the seminal "partial credit" model [39], numerous statistical approaches have been introduced for credit scoring, including logistic regression [60], k-nn [26], neural networks [18], and support vector machines [28]. more recently, [4] presents an in-depth analysis of how to interpret and visualize the learned knowledge embedded in neural networks using explanatory rules.
the authors in [32] combine the debt-to-income ratio with consumer banking transactions and use a linear regression model with a time-windowed data set to predict default rates in the near future. they claim an 85% default prediction accuracy and cost savings between 6% and 25%. financial network analytics. financial crises and systemic risk have always been a major concern [9, 21]. networks or graphs are a natural representation of financial systems, as these often contain complex interdependences and connections [2]. the relationship between network structure and financial system risk has been carefully studied, and several insights have been drawn: network structure has little impact on system welfare, but plays an important role in determining systemic risk and welfare in short-term debt [3]. after the 2008 global financial crisis, network theory attracted more attention: the crisis brought about by lehman brothers spread over connected corporations in a way similar to the epidemic of severe acute respiratory syndrome (sars) in 2002; both are small damages that hit a networked system and cause serious events [8, 13]. the journal nature physics organized a special issue on understanding fundamental economic issues using network theory [1]. these publications suggest the applicability of network-based financial models. for example, the dynamic network produced by banks' overnight fund loans may give an early warning of a crisis [13]. contrary to the conventional stereotype that large institutions are "too big to fail", the truth is that the position of an institution in the network is equally, and sometimes more, important than its size [6]. the more central a vertex is in the graph, the more influential it is to the whole economic network when a default occurs [13]. moreover, research that aims to understand individual behavior and interactions in social networks has also attracted extensive attention [7, 20, 46, 47, 61, 62, 67].
although preliminary efforts have been made to understand fundamental problems in financial systems using network theory [12, 17, 64], there is little work on systemic risk analysis in loan guarantee networks except for the preliminary work in [41]. perhaps the most important of these uses k-shell decomposition to predict the default rate; a positive correlation between the k-shell decomposition values of the network and default rates was reported [41]. anomalous and significant subgraph detection in networks. anomalous and significant subgraph detection has been applied in many domains, such as societal events in social media, new business discovery, auction fraud, fake reviews, email spam, and false advertising [42, 54]. classic anomalous and significant subgraphs are subgraphs in which the behaviors (attributes) of the nodes or edges differ significantly from the behaviors of those outside the subgraph [48]. anomalous and significant subgraphs in social networks can be used for early detection of emerging events, such as civil unrest prediction, rare disease outbreak detection, and early detection of human rights events. the heterogeneous social network is modeled as a sensor network in which each node senses its local neighborhood, computes multiple features, and reports an overall degree of anomalousness. p-values of the subgraphs represent their significance, and iterative subgraph expansion addresses the scaling problem [15]. emerging events such as crimes or disease cases are detected from spatial networks [34, 44]. a common challenge for subgraph detection is complexity: since many of these algorithms reduce to the subgraph isomorphism problem, which is np-complete, naive search is computationally infeasible, and algorithms are designed to optimize performance. readers are referred to [43, 58, 59, 68] for more details.
visualization in financial systems. financial risk is a major concern of the government and the banks. visual analysis can enhance the understanding and communication of risk, and help to analyze risks and prevent systemic risk. this is done by developing interpretable models and coupling them with visual, interactive interfaces. as business in the modern banking industry becomes more and more complex, risk assessment and risky loan pattern detection have attracted major attention. animation is used to visually analyze large amounts of time-dependent data [63]. in [29], a 3d treemap is introduced to monitor real-time stock market performance and to identify particular stocks that produce unusual trading patterns. an interactive exploratory tool is designed to help the casual decision-maker quickly choose between various financial portfolios [50]. coordinated visualization of specific keywords within wire transactions is used to detect suspicious behaviors [14]. the self-organizing map (som), a neural network based visualization tool, is often used in financial risk visualization analysis: for monitoring the occurrence of sovereign defaults in less developed countries [52], for visual analysis of the evolution of currency crises by comparing clusters of crises between decades [51], and for discovering imbalances in financial networks [53]. the self-organizing time map (sotm) is used to decompose and identify temporal structural changes in macro-financial data around the global financial crisis of 2007-2009. readers are referred to [37] for more references on financial visualization. we consulted with financial experts and consolidated five analysis tasks. in this section, we give a brief introduction before describing the detailed algorithms, strategies, and interactions. fig. 2 gives an overview of the system and tasks.
we first construct real loan guarantee networks from bank records, perform statistical analysis, and employ a machine learning based approach to predict enterprise default risk. all these data are fed into the interface to accomplish the tasks proposed by the financial experts. specifically, the tasks are: t1: visual analytics for enterprise default risk. the current internal loan credit rating system is based purely on the financial status of the individual borrower. credit assessors can usually access only the first layer of the guarantee chain and cannot reliably evaluate the entire guarantee network. to avoid inadequate risk assessment, it is necessary to carry out a systematic analysis of the enterprise. t2: visual analytics for high-default groups. identifying the high-default groups helps banking experts single out and tackle the principal default problems. visual analytics tools should be developed for thoroughly analyzing the network and recognizing high-default enterprises. t3: visual analytics for high-default patterns. some known guarantee patterns may lead to default and diffusion, but there may exist more complex patterns which are difficult to discover. this task requires visualizing the known risky guarantee patterns and being able to explore other, more complex ones. t4: visual analytics for the evolving guarantee network. as in many other real networks, competitive decision making takes place in the guarantee network. understanding the network dynamics helps financial experts understand how firms become connected over time. this task requires visualizing the guarantee network's evolution based on historical data. t5: visual analytics for the default propagation path.
before a crisis, forecasting the default diffusion path and monitoring the default spread status help the government and banks take precautionary measures, conduct research, and take effective measures to prevent and dissolve risks, so that no regional or systemic financial risk occurs. default risk prediction. the loan records reveal that the guarantee network and default rates are both increasing, and that the network structures show strong correlation with defaults. we construct a feature vector consisting of hybrid information and employ a supervised learning approach to train the prediction model. in what follows, we discuss the hybrid features used in our model. to build a highly representative feature vector which can reliably reflect the statistical relationships between customer information and repayment ability, we clean the data and construct the features as follows. basic profile: the essential company registration information, which reflects character, capital, collateral, capability, condition and stability [41]; we use business nature, registered capital, enterprise scale, employee number and others as the corporation's basic profile. most banks require a company to update its basic information when it applies for a loan, and we use the latest information as the basic profile features of the loan. credit behavior: historical behavior, e.g. credit history, default records, default amount, total loan amount and loan count, total loan frequency (if any), and total default rate, calculated from all loan records before the active loan contract. active loan: the loan contract in its execution period, including the active loan amount, the number of active loans, the type of capital and interest repayment, etc. network structure: network features such as centralities, extracted as ns.
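a minimal sketch of assembling the four feature groups above into one vector, plus the quarterly rolling split used for training and prediction (all field names, quarter labels, and the edge-list form of the guarantee network are illustrative assumptions, not the paper's schema; any gradient boosted tree model can be plugged in per window):

```python
# sketch: hybrid feature assembly and quarterly rolling splits. the
# guarantee network is given as (guarantor, borrower) pairs; simple degree
# counts stand in for the richer centrality features mentioned above.

def build_features(firm, loans, guarantee_edges):
    past = [l for l in loans if l["status"] in ("repaid", "default")]
    active = [l for l in loans if l["status"] == "active"]
    fid = firm["id"]
    return {
        # basic profile (from registration info)
        "registered_capital": firm["registered_capital"],
        "employees": firm["employees"],
        # credit behavior (history before the active contract)
        "loan_count": len(past),
        "default_rate": sum(l["status"] == "default" for l in past) / max(len(past), 1),
        # active loan
        "active_amount": sum(l["amount"] for l in active),
        # network structure
        "guarantees_given": sum(1 for g, _ in guarantee_edges if g == fid),
        "guarantees_received": sum(1 for _, b in guarantee_edges if b == fid),
    }

def rolling_splits(records, quarters):
    """yield (train, test) record pairs, sliding one quarter at a time."""
    for train_q, test_q in zip(quarters, quarters[1:]):
        train = [r for r in records if r["quarter"] == train_q]
        test = [r for r in records if r["quarter"] == test_q]
        yield train, test
```

each (train, test) pair would then feed the gradient boosted tree classifier: fit on one quarter's feature vectors and labels, predict defaults for the next quarter.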
note that, as discussed above, the basic profile may not be completely trustworthy, since smes may provide out-of-date or even fake information to the bank. the guarantee network, however, is trustworthy information, as the bank can build it from its own record systems. the prediction of default for a customer's guaranteed loan can be modeled as a supervised learning problem. we use logistic regression based on gradient boosted trees [23] for the prediction. the tree ensemble model, which uses k additive functions to predict the output, can be represented as ŷ_i = Σ_{k=1}^{K} f_k(x_i) (eq. 1), where f_k is the k-th decision tree, x_i is the training feature vector, and ŷ_i is the prediction result. finding the parameters of the tree model is turned into the problem of minimizing the objective function obj = Σ_i l(ŷ_i, y_i) + Σ_k Ω(f_k) (eq. 2), which can be trained in an additive manner [16], where Σ_i l(ŷ_i, y_i) is a training loss function that measures the difference between the prediction and the target, and Ω(f) is a smoothing regularization term to avoid overfitting. specifically, we use three-month windows for training, observation, prediction, and evaluation; as fig. 3 shows, in the training stage this covers all customers who obtained bank loans from 2013 q1 (the first quarter) onward. this rolling setup is motivated by two considerations: 1. prediction should be adapted to a dynamic setting with regularly updated forecasting results; in fact, a sliding window is a typical way to perform rolling prediction, as commonly adopted in event prediction practice [65, 66]. 2. the business often runs on a quarterly basis; thus, from a business demand perspective, it is helpful to know which borrowers may default on a quarterly basis. default risk visualization. we designed and implemented a visual interface for viewing the network under multiple measurements. fig. 4 gives the interface, by which users can adjust the node size by the predicted default risk and by the following network centrality measurements: hub score and authority score, k-shell decomposition score, pagerank, eigenvector centrality, betweenness centrality, and closeness centrality. fig. 5 gives a partial visualization of a real guarantee network. in the graph, all defaulted enterprises are highlighted by red circles, with node size proportional to predicted risk (a), k-shell value (b), and authority score (c). through the interface, users can also observe the rolling prediction risk of an enterprise over the months and highlight it on the whole network by choosing it on the heatmap. recognizing high-default groups narrows down the search scope for risky guarantee relationships and enables financial experts to focus on firms in high-default crowds. usually, community detection divides the guarantee network into groups (communities) based on how the nodes are connected. theoretically, a community in a graph is defined as a node set whose members interact with each other more frequently than with those outside it. identifying such substructures provides insight into the structure of complex networks (functions and topology affect each other) [57]. based on the conjecture that defaults occur in clusters, we first divide the whole network into several disjoint sets by community detection. fig. 6 (a) shows the results on a typical independent subgraph we constructed from the bank loan records. the communities are marked with separate background colors and labeled with their average default rates. there are 30 communities, but defaults occur in only four of them, with average default rates ranging from 38% down to 8.6%; the other 9 communities had no defaults during the guarantee network's existence. similar phenomena are observed with random walk, edge betweenness, and spinglass community detection.
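the defaults-occur-in-clusters conjecture can be sketched in a few lines: partition a toy network, then rank the communities by their default rate (the toy graph and default labels are assumptions; the paper uses a random-walk method, for which networkx's modularity-based detector serves as a readily available stand-in here):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# toy guarantee network: two tight groups joined by a single bridge edge
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"),   # group 1
              ("d", "e"), ("e", "f"), ("d", "f"),   # group 2
              ("c", "d")])                          # bridge
defaulted = {"a", "b"}  # toy default labels

# detect communities, then rank them by their default rate
communities = greedy_modularity_communities(G)
rates = sorted(((len(set(c) & defaulted) / len(c), sorted(c))
                for c in communities), reverse=True)
# highest-rate community first; zero-default communities sink to the bottom
```

in the tool, the treemap blocks would be sized and labeled from exactly this kind of per-community default-rate ranking.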
in practice, we first use a random walk algorithm [45, 49] to divide the whole guarantee network into groups. we use a revised treemap interface to visualize the community detection results: the community labels and default rates are displayed on flat colored blocks. the treemap chart is used for navigation here, so the block areas do not need to sum to one. the larger blocks make the high-default communities salient. however, the evaluation of community detection is still an open question [35], and since community detection algorithms consider only link information and neglect node attribute information, the partition may not be consonant with actual conditions. the basic rule of community detection is to minimize the number of links between communities, using pure network structure information. in financial practice, each node in the network carries rich information such as enterprise sector, changes in deposits, assets, loan amount, etc. discarding such attributes when dividing the network would be unreliable. through interaction, we enable users to edit the communities into coherent ones by referring to the relevant financial metrics. we allow users to interactively perform the following manipulation actions. interactive community editing. we enable users to explore the financial information and interactively edit the communities by merging strongly associated communities, reassigning the community labels of structural hole spanners (a key role in information diffusion [11]), or splitting a community into several disjoint smaller groups. the generated subgraphs are denoted groups of interest (goi); the high-risk guarantee patterns are often hidden in the goi. reassign. the reassign operation allows changing the community label of a structural hole spanner, i.e. a bridge node which connects different communities in a network. fig.
7 is reproduced from [25] and illustrates a network with three communities and six structural hole spanners. empirical studies suggest that individuals would benefit from filling the "holes" between people or groups that are otherwise disconnected [10]; such individuals are called structural hole spanners. a principled methodology for detecting structural hole spanners in a given social network is still not clear [38]. in fact, we observed high default rates among structural hole spanners and their neighbouring internal nodes. we let users investigate the financial metrics and reassign the community labels of the structural hole spanners. specifically, when the user wishes to reassign between two adjacent communities, he or she first double-clicks one block on the treemap, and all the other connected communities are highlighted; single-clicking a structural hole spanner node reassigns it to the opposite community. for example, when communities c 1 and c 2 are chosen, single-clicking node a moves it into c 1, and vice versa. merge. neighbouring communities can be merged. as community detection divides a graph purely based on its links, the algorithm may generate too many communities, some of which share a common sector category or similar network structures. merging communities with reference to the financial metrics can produce medium-sized, more tractable subgraphs. specifically, when the user wishes to merge two adjacent communities, he or she first double-clicks one "tile" on the treemap, and all the other connected communities are highlighted; double-clicking the structural hole spanner node merges the two communities, which are then labeled as the clicked community. split. sometimes we need to split a community into several parts. this happens when the defaults are unevenly distributed. we can cut off the stable parts, which may reduce the motif computation complexity.
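the spanner-centred editing above first needs the spanners identified; a minimal stdlib sketch (the toy edge list and community assignment are assumptions), where a node is flagged when its neighbours fall into more than one community:

```python
# flag candidate structural hole spanners: nodes whose neighbours span
# more than one community, given a node -> community assignment.
def hole_spanners(edges, community_of):
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    return {node for node, nbrs in neighbours.items()
            if len({community_of[n] for n in nbrs}) > 1}

# toy chain bridging two communities: a-b-c-d-e
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
community_of = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2}
spanners = hole_spanners(edges, community_of)  # c and d bridge the groups
```

the flagged nodes are exactly the candidates the reassign operation acts on; clicking one would flip its entry in the community mapping.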
specifically, when the user wishes to split a community, he or she first double-clicks one "tile" on the treemap, and all the other connected communities are highlighted; double-clicking an edge splits the two opposite parts of the subgraph into two communities. financial information is useful here. we use a financial radar chart under the treemap to encode the key financial status. specifically, the key indices are: defaults: historical default behavior. la/rc: the ratio of loan amount to registered capital; the ratio of loan amount to enterprise net assets would be more insightful, but as the latter is not always available, we use registered capital instead. deposit loss: the percentage of deposits lost; a shortage of money and a rapid decrease in deposits should not be ignored. sector: the enterprise sector is also an important clue when editing communities. ga/rc: the ratio of guarantee amount to registered capital; as a loan guarantee is an obligation that falls due if the borrower defaults, the ratio of guarantee amount to enterprise net assets is a crucial factor for systemic financial stability, and similarly we use registered capital instead. credit rating: the review rating of the bank's experts, which is also a key clue when editing communities. usually, high-default pattern discovery is not possible by observation, as a practical loan guarantee network may consist of several tens of thousands of nodes; nor is it feasible purely via algorithms, since naive subgraph mining on the network leads to the subgraph isomorphism problem, which is proved to be np-complete. we adopt a shneiderman mantra strategy to reduce the computational complexity. guarantee circle visualization. small and medium firms improve their borrowing capacity through third-party guarantors. empirical studies by bank risk control specialists suggest that the guarantee circle is a source of default risk.
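the radar chart indices above are simple ratios; a sketch of computing them for one firm (all field names and values are illustrative assumptions):

```python
# compute the radar-chart indices for one firm (illustrative field names;
# la/rc and ga/rc use registered capital as the denominator, since
# enterprise net assets are not always available, as noted above).
def radar_indices(firm):
    rc = firm["registered_capital"]
    return {
        "defaults": firm["past_defaults"],
        "la_rc": firm["loan_amount"] / rc,
        "deposit_loss": 1 - firm["deposit_now"] / firm["deposit_prev"],
        "ga_rc": firm["guarantee_amount"] / rc,
        "credit_rating": firm["credit_rating"],
    }

firm = {"registered_capital": 1000, "past_defaults": 1, "loan_amount": 500,
        "deposit_now": 80, "deposit_prev": 100, "guarantee_amount": 300,
        "credit_rating": "bb"}
indices = radar_indices(firm)
```

these six values are what the radar chart would plot for each community member during interactive editing.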
The most frequently used guarantee circle patterns are mutual guarantee, joint-liability guarantee, star-shaped guarantee loans, and revolving guarantee (see Fig. 8). Such arrangements are currently legal in China. They can enhance solvency to some extent but may also induce the occurrence and transmission of risk, as pointed out by financial regulatory documents. Often the specialists in a bank's risk-control department have only SQL query capability and can find only relatively simple guarantee patterns. In this work we enable automatic guarantee circle detection and visualization: the commonly recognized risky loan patterns, including mutual guarantee, co-guarantee, and revolving guarantee, are highlighted on the network. Fig. 9 gives an example of a revolving loan guarantee detected in a real-world loan guarantee network. Users can focus on the relevant firms and explore further details. Moreover, five of the eleven firms in the three revolving structures have defaulted, informing the banking experts to pay closer attention to firms involved in such patterns. New risk pattern discovery. As mentioned above, guarantee circles are relatively well understood by banking experts. However, they still cannot tell whether more complicated guarantee patterns exist that have implicit connections with high-default phenomena. We develop a visual analytics tool to help the experts discover and understand what has happened. The task is challenging: an arbitrary guarantee pattern with a high default rate can lie beneath the complex network structure, and it is impossible to exhaustively compare all network patterns to determine whether each is high-default. Based on the conjecture that defaults occur in clusters, we propose an interactive Shneiderman-mantra strategy [56] to narrow down the search space of risky guarantee patterns. Fig. 2 gives the processing flow.
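Detecting mutual (2-node) and revolving (longer) guarantee circles amounts to enumerating directed cycles. Below is an illustrative naive DFS over a guarantor -> borrower edge list, not the system's implementation; a production tool would use a dedicated algorithm such as Johnson's for large networks.

```python
def guarantee_cycles(edges):
    """Enumerate directed cycles (mutual / revolving guarantees) in a
    small guarantor -> borrower edge list via DFS. Cycles are returned
    canonically rotated so duplicates from different start nodes merge."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    cycles = set()

    def dfs(start, node, path):
        for nxt in adj.get(node, []):
            if nxt == start and len(path) > 1:
                i = path.index(min(path))          # rotate: smallest node first
                cycles.add(tuple(path[i:] + path[:i]))
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])

    for n in list(adj):
        dfs(n, n, [n])
    return cycles
```

A 2-tuple in the result is a mutual guarantee; longer tuples correspond to revolving structures like the one in Fig. 9.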
Because the GOI are groups with high default rates, they may contain guarantee patterns that are prone to default. Motifs are the most basic building blocks of a network, their number of structures is limited, and they may reflect functional properties and provide deep insight into the network. A complex guarantee network is always composed of several smaller subgraphs bridged by the structural hole spanners, and the subgraphs inside the communities may reveal risky or even fraudulent patterns. In this work we obtain a set of motifs by first detecting motifs in the GOI. The motifs are ranked by their default rates (Eq. (4), where m denotes a motif); the high-default-rate motifs are noted as patterns of interest (POI) and should be investigated by banking experts with priority. All motifs are possible risky loan guarantee patterns; however, it is still computationally challenging to obtain all POIs by the approach above, for two reasons. First, the number of motif structures increases rapidly with the number of nodes; for example, there are over 3,000 possible 4-node motifs, so it is impossible to enumerate all motif structures. Second, motif matching exhaustively searches the query graph against the large network, which is in essence the subgraph isomorphism problem, and matching motifs with more nodes still takes too much time. In this work we therefore propose an interactive motif-editing approach: users can further explore the financial information of adjacent nodes, add them to the motifs, and generate POIs. Network evolution over time is also observed in the guarantee network. The topology keeps changing: some nodes are connected to the network or removed from it, and some communities become connected through the guarantee of a structural hole spanner.
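A sketch of the default-rate ranking behind the POI selection, assuming each detected motif comes with the node sets of its matched instances (the exact form of Eq. (4) is not reproduced here, and the helper name is hypothetical):

```python
def rank_motifs_by_default(instances, defaulted):
    """instances: motif id -> list of node sets matched in the network.
    defaulted: set of firms that defaulted. Rank motifs by the fraction
    of defaulted firms among the firms their instances cover."""
    rates = {}
    for m, matches in instances.items():
        nodes = set().union(*matches)
        rates[m] = len(nodes & defaulted) / len(nodes)
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
```

The top of the ranking corresponds to the POIs handed to banking experts for priority investigation.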
Like many other real networks, competitive decision-making takes place in the guarantee network: when a firm lacks security to obtain a loan from the bank, it may resort to a guarantee corporation or to third-party firms. To some extent the new guarantors may improve overall system rationality, but they may also introduce instability as the network becomes even more complex. Understanding the network dynamics helps financial experts understand how firms become connected over time. In this work we use animation to visualize the evolution of the guarantee network. Users can drag the time bar to backtrack how the network evolved, and they can hover the mouse cursor over a node to view the company's financial information, which helps the financial experts understand what has happened historically. Fig. 10 gives an example of how a real network evolved from July 2013 to April 2014; combining enterprise financial status at different times, financial experts can carry out their analysis. Systemic financial risk is a top concern for the government and the banks; however, as a new phenomenon, the systemic risk of the loan guarantee network is still not sufficiently understood. Sophisticated guarantee relationships tend to cause credit to be granted by multiple lenders and to be excessive. In a loan guarantee, the guarantor assumes the debt obligation if the borrower defaults; if the guarantor cannot pay the bank back, it may in turn resort to its own guarantors. In this way default may propagate like a virus, and this contagion increases the possibility of both the occurrence and the transmission of risk. Especially in an economic downturn, some enterprises face operational difficulties and a financial crisis can have a domino effect: defaults may spread rapidly through the network, dragging a large number of enterprises into an unfavourable situation.
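The time-bar backtracking described above boils down to filtering guarantee edges by their start time. A minimal sketch, assuming (this is not stated in the text) that each edge carries a zero-padded month stamp:

```python
def snapshot(edges, month):
    """edges: (guarantor, borrower, start_month) triples, with months as
    zero-padded 'YYYY-MM' strings so lexicographic order is time order.
    Returns the network as the animation would show it at `month`."""
    return [(u, v) for u, v, m in edges if m <= month]
```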
The government and the banks wish to monitor the spread of defaults and understand the complexity of current risks before they take precautionary measures, conduct research, and act effectively to prevent and dissolve risk, so that no regional or systemic financial risk occurs. Based on the relevant knowledge and experience, we developed the visual analytics tool to aid default-path discovery through visualization. A principle of default diffusion is that the vulnerable nodes are the guarantors. Fig. 11 illustrates a diffusion path: (a) is a guarantee network with eight nodes, where node e provides guarantees to five adjacent nodes and c, d provide guarantees to b and then to a; (b) is the possible diffusion path, where the default of node a may lead to the default of b, c, d, and even e. Note that nodes g, f, h are not connected to node e in the diffusion path, as the default of e will not affect the repayment status of g, f, and h. In practice there may be multiple possible propagation paths, since each node can serve as a guarantor or be guaranteed, and it is difficult to outline the main propagation path from the whole network. We make the following assumption: a node lying on multiple propagation paths is key to preventing large-scale default diffusion and should therefore be highlighted. We compute all propagation paths, count each node's occurrences, and highlight the nodes on the network, using colour to indicate propagation-risk importance. The visual analytics tool lets financial experts take several factors into account when judging defaults, namely the corporation's financial information and the guarantee contract amounts. The former is listed in plain form when the user hovers the mouse pointer over a node, while a Sankey diagram represents the guarantee flow, with band widths directly proportional to the guarantee amounts. Fig.
12(a) gives results on a real guarantee network: when we choose one node, for example node 32, the whole potential propagation path is highlighted in (b), and (c) is the corresponding Sankey diagram. It can be seen that upstream companies usually provide more guarantees than they receive; for example, node 18 provides much more guarantee than it receives. The imbalance between guarantee amount and collateral amount provides a clue for credit-line assessment. The real situation is even more complex: default may diffuse like a virus infection, where the virion must identify and bind to its receptor (the guarantor). As mentioned earlier, each enterprise has more than 3,000 financial entries, so it is difficult to quantify each enterprise's "anti-infective" ability. We therefore let users look up multiple financial statuses and cut off a propagation path. We also note that the propagation model offers further insight to end users, and we plan to study the topic in depth and provide a simulation interface in the future. We collected loan records spanning ten years from a major commercial bank in China. The customer names in the records are encrypted and replaced by IDs; we can access basic profiles such as enterprise scale, and loan information such as the guarantor ID and loan credit. We first introduce the loan process and then explain how the information was extracted and cleaned. The bank needs to collect as much fine-grained information as possible concerning an enterprise's repayment ability. The information falls into four categories: transaction information, customer information, asset information such as mortgage status, and the bank's historical loan-approval records. The nine data tables most relevant to loan guarantees are: customer profile, loan account information, repayment status, guarantee profile, customer credit, loan contract, guarantee relationship, guarantee contract, and default status.
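The path-counting heuristic described above (enumerate all propagation paths from the defaulting node up through its guarantors, then count how often each node appears) can be sketched as follows; names are illustrative and this is not the system's implementation.

```python
def propagation_importance(guarantees, defaulting):
    """guarantees: (guarantor, borrower) pairs; default propagates from a
    borrower up to its guarantors. Returns node -> number of propagation
    paths it lies on, the highlighting weight described above."""
    up = {}  # borrower -> its guarantors
    for g, b in guarantees:
        up.setdefault(b, []).append(g)

    paths = []

    def walk(node, path):
        nxt = [g for g in up.get(node, []) if g not in path]
        if not nxt:
            paths.append(path)
        for g in nxt:
            walk(g, path + [g])

    walk(defaulting, [defaulting])
    counts = {}
    for p in paths:
        for n in p:
            counts[n] = counts.get(n, 0) + 1
    return counts
```

On the Fig. 11 example (b guarantees a; c and d guarantee b; e guarantees c and d), the default of a yields two paths, a-b-c-e and a-b-d-e, so e lies on both and would be highlighted.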
There is often more than one guarantor for a loan transaction, and there may be several loan transactions for a single guarantor within a period. Once a loan is approved, the SME usually obtains the full loan amount immediately and starts to repay the bank regularly under an installment plan until the end of the loan contract. In the preprocessing phase, by joining the nine tables we obtain records keyed by corporation ID and loan contract; we then construct the guarantee network and compute the network-related measurements. We now report the observations derived from the data. Overall statistics: there are 11,000 loan customers spanning 60,948 mutual guarantee relationships derived from 36,618 loan contracts. There were 5,911 defaults during the past ten years out of the total 87,307 repayments, giving an overall default rate of 6.77%. Centrality indicators are helpful for identifying the relative importance of nodes in the network. Fig. 13 gives histograms, for several of the most complex subgraphs, of how defaults are distributed across different centrality values. Defaults happen more often on nodes with large authority values and small hub values. This is consistent with intuition: an enterprise acting as a hub backs a large number of other corporations and is presumably relatively stable and operating in good condition, whereas an enterprise acting as an authority accepts guarantees from many other corporations, which means it lacks funding security and carries a higher risk of getting into trouble. The statistics suggest that the lender should watch the status of the high-"authority" nodes in the guarantee network. Although the underlying assumption of PageRank is quite similar to the authority score, we did not observe a similar correlation between PageRank values and default rates (see Fig. 13). In general, the larger the centrality, the higher the default rate.
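The hub/authority distinction discussed above comes from the HITS algorithm. A plain power-iteration sketch on guarantor -> borrower edges (not the paper's implementation) shows how a heavily guaranteed borrower acquires a high authority score:

```python
def hits(edges, iters=50):
    """Power-iteration HITS on a directed edge list. In the guarantee
    network, guarantor -> borrower edges make heavy borrowers the
    high-authority nodes singled out by the statistics above."""
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return hub, auth
```

A firm guaranteed by many others ends up with a high authority score and, per the statistics above, warrants closer monitoring by the lender.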
The tasks were as follows: (1) visual analytics for high-default groups; and (2) visual analytics for high-default patterns. The first case study is to find high-default groups. The random-walk community detection algorithm divides the guarantee network into 36 communities; the statistics are given in Table 1. We edit the communities following basic guidelines: (1) consider default status, loan amount, and the other financial statistics comprehensively; (2) small communities can be either merged with neighbouring large communities or pruned (for example, communities 35 and 34 each have 4 nodes whose firms have never defaulted, so there is little possibility they will become high-default groups in the future, while community 23 can be merged with its neighbouring communities); (3) structural hole spanner nodes deserve special attention, since defaults usually happen on the structural hole spanners, and the adjacent communities can then be merged. Finally we obtain ten communities, seven of which have relatively high default rates (Table 2); these seven medium-sized groups of subgraphs can be processed efficiently in the subsequent tasks. Note that the merge and reassign operations are based on user expertise; since users may choose various criteria, the final tree map can show different combinations and default rates. In this subsection we explore high-default patterns beyond guarantee circles. The procedure is: (1) automatic motif detection in the high-default groups, specifically using the gtrieScanner approach (http://www.dcc.fc.up.pt/gtries/); (2) matching the motifs against the entire network and calculating the ratio of defaulting firms; (3) ranking the motifs in descending order of defaults, the top ones being the high-default patterns; (4) interactive editing of the high-default patterns by the user, who adds more nodes, after which the system automatically matches the new subgraph against the entire network and recomputes the ratio of defaulting firms.
Theoretically, there are 199 and 9,364 possible 4- and 5-vertex motifs, respectively, for a directed network [33]. Matching all of these motifs against the whole network would be time-consuming, so interactive motif editing by the user makes exploring new patterns more efficient. In practice we chose to analyze community 3, which consists of 103 enterprises; 36% of them defaulted on 85% of the loans from the bank, as Table 2 shows. Fig. 15 gives the twenty 4-vertex motifs that the automatic algorithm detected in community 3, and Table 3 shows the statistical information. Although there are nearly 200 possible 4-vertex motif shapes, only 20 occur in the high-default group, so we analyze these 20 motifs rather than every shape. The detailed motif shapes are given in Fig. 15. Most of them have rather complex structures, but some are known to banking experts; for example, motif 6 is a joint-liability loan, and others can be understood as combinations of smaller guarantee patterns (motif 5, for instance, is a joint liability combined with a single guarantee). Three of the motifs, motifs 15, 16, and 17, attracted our attention: (1) they have high default rates (ranging from 61% to 90% in the ratio of defaulting firms and 55% to 100% in the ratio of defaulted amount); (2) relatively few instances (4 or 5) are detected in the whole network; and (3) the top five risky motifs show single-input, single-output, feed-forward structures. Fig. 16 gives all instances of pattern 15 detected in the entire network; some motif instances coincide. These three patterns are interesting: pattern 15, for example, recurs five times in one group, and the bank lost all the money lent to the enterprises with this guarantee structure (see Table 3). There is a high possibility that fraudulent loan guarantees happened several times and that the local bank failed to recognize the fraud pattern.
Similar analysis implies that patterns 16 and 17 may also be high-default guarantee patterns. We then conducted interviews with two banking loan experts. The first comes from the financial regulator; he has more than five years of guarantee-network research experience and has published several important investigation reports and books on the status of Chinese loan guarantee networks. The second comes from the credit department of a major commercial bank and has ten years of loan-approval experience. Both experts were immediately drawn to, and understood, the visualization of guarantee relationships. The first expert was particularly interested in the community editing. He said that when they try to resolve financial risks in a guarantee network, a major operation is to split the network into smaller ones with the risks isolated, so that healthy enterprises are not affected by financially risky ones; the editing function of our tool gives them a powerful weapon to achieve this goal. He was also interested in the risky guarantee pattern discovery module and agreed on the significant value of finding such patterns: benefits might be conveyed illegally beneath the suggested high-default patterns, and he would dive into the financial disclosures of the risky guarantee enterprises to examine whether fraudulent guarantees were happening. The second expert said he had never grasped the whole set of interrelations between enterprises so clearly when assessing a loan, and that the tree map gives an intuitive understanding of the guarantee groups. We present a visual analytics approach for loan guarantee network risk management in this paper. To the best of our knowledge, this is the first work using visual analytics to address the default risk in guarantee networks.
We design and implement an interactive interface to analyze individual enterprises' default risk, high-default groups, patterns within the groups, network evolution, and default diffusion paths. The analysis can help the government and banks monitor the spread of defaults and provides insight for taking precautionary measures to prevent and dissolve systemic financial risk. Future work will include computational modeling of default diffusion and visual analytics for taking precautionary measures.

References:
- Net gains
- Networks in finance
- Financial connections and systemic risk
- Using neural network rule extraction and decision tables for credit-risk evaluation
- Benchmarking state-of-the-art classification algorithms for credit scoring
- DebtRank: too central to fail? Financial networks, the FED and systemic risk
- Network analysis in the social sciences
- Complex financial networks and systemic risk: a review
- Bubbles, financial crises, and systemic risk
- Structural holes and good ideas
- Secondhand brokerage: evidence on the importance of local structure for managers, bankers, and analysts
- The making of a transnational capitalist class: corporate power in the twenty-first century (Zed Books)
- Network opportunity
- WireVis: visualization of categorical, time-varying data from financial transactions
- Non-parametric scan statistics for event detection and forecasting in heterogeneous social media graphs
- XGBoost: a scalable tree boosting system
- Social network, social trust and shared goals in organizational knowledge sharing (Information & Management)
- A comparison of neural networks and linear scoring models in the credit union environment
- Analysis of loan guarantees among the Korean chaebol affiliates
- Social network sites: definition, history, and scholarship
- Rasch models: foundations, recent developments, and applications
- The effect of credit scoring on small-business lending
- Greedy function approximation: a gradient boosting machine (Annals of Statistics)
- Statistical classification methods in consumer credit scoring: a review
- Joint community and structural hole spanner detection via harmonic modularity
- A k-nearest-neighbour classifier for assessing consumer credit risk (The Statistician)
- HMRC, Department for Business, Innovation & Skills: 2010 to 2015 government policy: business enterprise
- Credit scoring with a data mining approach based on support vector machines (Expert Systems with Applications)
- A visualization approach for frauds detection in financial market
- Determinants of the guarantee circles: the case of Chinese listed firms
- Consumer credit-risk models via machine-learning algorithms
- Network motif detection: algorithms, parallel and cloud computing, and related tools
- A spatial scan statistic
- Benchmark graphs for testing community detection algorithms
- China faces default chain reaction as credit guarantees backfire
- Modelling dependence with copulas and applications to risk management
- Mining structural hole spanners through information diffusion in social networks
- A Rasch model for partial credit scoring
- Loan 'guarantee chains' in China prove flimsy
- Credit risk evaluation for loan guarantee chain in China
- Efficient anomaly detection in dynamic, attributed graphs: emerging phenomena and big data
- Fast subset scan for spatial pattern detection
- Detection of emerging space-time clusters
- Finding and evaluating community structure in networks
- Complex networks in the study of financial and social systems
- The lifecycle and cascade of WeChat social messaging groups
- Anomaly detection in dynamic networks: a survey
- Maps of random walks on complex networks reveal community structure
- FinVis: applied visual analytics for personal financial planning
- Clustering the changing nature of currency crises in emerging markets: an exploration with self-organising maps
- Sovereign debt monitor: a visual self-organizing maps approach
- Chance discovery with self-organizing maps: discovering imbalances in financial networks
- Anomaly detection in online social networks
- The eyes have it: a task by data type taxonomy for information visualizations
- General optimization technique for high-quality community detection in complex networks
- Scalable detection of anomalous patterns with connectivity constraints
- Penalized fast subset scanning
- A credit scoring model for personal loans
- Relational learning via latent social dimensions
- Community detection and mining in social media
- Applying animation to the visual analysis of financial time-dependent data
- Using social network knowledge for detecting spider constructions in social security fraud
- Sales pipeline win propensity prediction: a regression approach
- Towards effective prioritizing water pipe replacement and rehabilitation
- Evaluation without ground truth in social media research
- Graph-structured sparse optimization for connected subgraph detection

Figure 14: high default groups after interactive editing.

key: cord-204125-fvd6d44c authors: chowdhury, muhammad e. h.; rahman, tawsifur; khandakar, amith; al-madeed, somaya; zughaier, susu m.; doi, suhail a. r.; hassen, hanadi; islam, mohammad t. title: an early warning tool for predicting mortality risk of covid-19 patients using machine learning date: 2020-07-29 journal: nan doi: nan sha: doc_id: 204125 cord_uid: fvd6d44c

The COVID-19 pandemic has put extreme pressure on global healthcare services. Fast, reliable, and early clinical assessment of disease severity can help in allocating and prioritizing resources to reduce mortality. To study the blood biomarkers important for predicting disease mortality, a retrospective study was conducted on 375 COVID-19-positive patients admitted to Tongji Hospital (China) from January 10 to February 18, 2020.
Demographic and clinical characteristics and patient outcomes were investigated with machine learning tools to identify key biomarkers for predicting the mortality of individual patients, and a nomogram was developed for predicting mortality risk among COVID-19 patients. Lactate dehydrogenase, neutrophils (%), lymphocytes (%), high-sensitivity C-reactive protein, and age, acquired at hospital admission, were identified as key predictors of death by a multi-tree XGBoost model. The area under the curve (AUC) of the nomogram was 0.961 for the derivation cohort and 0.991 for the validation cohort. An integrated score (LNLCA) was calculated together with its corresponding death probability, and COVID-19 patients were divided into three subgroups, low-, moderate-, and high-risk, using LNLCA cut-off values of 10.4 and 12.65, with death probabilities below 5%, from 5% to 50%, and above 50%, respectively. The prognostic model, nomogram, and LNLCA score can help in the early detection of COVID-19 patients at high mortality risk, which will help doctors improve the management of patient stratification. The novel coronavirus disease (COVID-19) has spread rapidly throughout the world from Wuhan (Hubei, China) since December 2019 [1-5]. Since the outbreak, the number of reported cases has surpassed 12 million, with more than 550 thousand deaths worldwide as of 12 July 2020 [6]. COVID-19 is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a member of the coronavirus family. On 11 March 2020, COVID-19 was declared a pandemic by the World Health Organization (WHO) [7]. Due to the pandemic, hospital capacity is being exceeded in many places, with shortages of medical staff, personal protective equipment, life-support equipment, and other resources [8, 9].
Symptoms of COVID-19 are nonspecific; infected individuals may develop fever (83-99%), cough (59-82%), loss of appetite (40-84%), fatigue (44-70%), shortness of breath (31-40%), sputum production (28-33%), or muscle aches (11-35%) [10]. The disease can progress further into severe pneumonia, acute respiratory distress syndrome (ARDS), myocardial injury, sepsis, septic shock, and even death [11]. Although most COVID-19 patients have a mild illness, some show rapid deterioration (particularly within 7-14 days of symptom onset) into severe COVID-19 with or without ARDS [12, 13]. Current epidemiological data suggest that the mortality rate of patients with severe COVID-19 is higher than that of patients with non-severe disease [14, 15]. It has been reported that 26.1-32.0% of infected patients are prone to progressing to critical illness [16]. Recent studies have confirmed a high fatality rate of 61.5% for critical cases, which increases with age and other medical comorbidities [16]. A large cohort study of 2,449 patients reported that during this pandemic the healthcare system can be overwhelmed by hospitalization (20-31%) and intensive care unit (ICU) admission rates (4.9-11.5%) [17]. This can be avoided by prioritizing hospital treatment for patients at high risk of deterioration and death, and by treating low-risk patients in ambulatory settings or through home-based self-quarantine. An effective tool is required to predict the disease trajectory so that resources can be allocated efficiently and patients' conditions improved. Given the great potential of this approach, it is important to identify key patient variables that can help predict the course of the disease at diagnosis. In other words, early identification of patients at high risk of progression to severe COVID-19 will allow efficient utilization of healthcare resources via patient prioritization and reduce the mortality rate.
Several studies indicate that biomarkers can help classify COVID-19 patients at elevated risk of serious disease and mortality by providing crucial information about the patients' health status. Al Youha et al. [18] proposed a prognostic model called the Kuwait Progression Indicator (KPI) score for predicting progression to severity in COVID-19. The KPI model is based on quantifiable laboratory readings, unlike scoring systems based on self-reported symptoms and other subjective parameters. The KPI score categorizes patients as low risk below -7 and high risk above 16; however, the progression risk in the intermediate group (scores from -6 to 15) was deemed uncertain by the authors, although such an intermediate category exists in many prognostic systems. Weng et al. [19] reported an early prediction score called ANDC for the mortality risk of COVID-19 patients using data from 301 adult patients. LASSO regression identified age, neutrophil-to-lymphocyte ratio (NLR), D-dimer, and C-reactive protein recorded at admission as mortality predictors [19]. They developed a nomogram with good performance and derived an integrated score, ANDC, with a corresponding death probability, along with cut-off ANDC values classifying COVID-19 patients into low-, moderate-, and high-risk groups with death probabilities of less than 5%, 5% to 50%, and more than 50%, respectively. Using a cohort of 444 patients, Xie et al. [20] proposed a prognostic model using lactate dehydrogenase, lymphocyte count, age, and SpO2 as key predictors of COVID-19-related death. The model showed good discrimination in internal and external validation, with C-statistics of 0.89 and 0.98, respectively.
Although the model showed promising internal calibration, external validation showed over-prediction for low-risk patients and under-prediction for high-risk patients. Yan et al. [21] reported a machine learning approach that selects three biomarkers (lactate dehydrogenase (LDH), lymphocytes, and high-sensitivity C-reactive protein (hs-CRP)) and uses them to predict individual patients' mortality 10 days in advance with more than 90 percent accuracy. In particular, high levels of LDH alone were found to play a crucial role in identifying the vast majority of cases requiring immediate medical attention. However, no scoring system was reported in that work that could help clinicians identify at-risk patients quantitatively. Another clinical study of 82 COVID-19 patients showed that respiratory, cardiac, hemorrhagic, hepatic, and renal injury had contributed to the deaths of 100%, 89%, 80.5%, 78.0%, and 31.7% of patients, respectively; most of the patients had increased CRP (100%) and D-dimer (97.1%) [22]. The value of D-dimer as a prognostic factor was also shown: an amount greater than 1 μg ml−1 at admission significantly increased the odds of death [23, 24]. Although several predictive prognostic models have been proposed for the early detection of individuals at high risk of COVID-19 mortality, a major gap remains in the design of state-of-the-art interpretable machine learning algorithms and high-performance quantitative scoring systems built on the most selective biomarkers predictive of patient death. Identifying and prioritizing those at severe risk is important for both resource planning and treatment. Moreover, it should be possible to monitor high-risk patients continuously with a reliable scoring tool during their hospital stay.
Likewise, reducing admissions of patients at very low risk of complications, who can be handled safely by self-quarantine, will help minimize the pressure on healthcare facilities. Therefore, using state-of-the-art machine learning algorithms, an early prediction scoring system was developed and implemented to classify the most discriminatory biomarkers of patient mortality. The problem was initially framed as a classification problem for determining the most appropriate biomarkers at the end of the test period with the aid of the corresponding survival or death outcomes. The top-ranked features with the best classification performance were used to develop a multivariable logistic regression-based nomogram, which was validated for the prognosis of death and survival. The findings of this study provide a simple, easy-to-use, and reliable algorithm for the prognosis of high-risk individuals with potential for clinical application. Blood samples collected between 10 January and 18 February 2020 from 375 patients in Wuhan, China were retrospectively analyzed to identify reliable and relevant markers of mortality risk. Medical records were collected using standard case report forms, which included epidemiological, demographic, clinical, laboratory, and mortality-outcome information. Yan et al. [21] published the dataset along with their article, and the original study was approved by the Tongji Hospital ethics committee. Exclusion criteria for the study were: age under 18 years, pregnancy, breastfeeding, and missing data (>20%). Of the 375 patients, 187 (49.9%) had fever, while cough, fatigue, dyspnea, chest distress, and muscular soreness were present in 52 (13.9%), 14 ... Even though multiple blood samples per patient were available, only the data from the first sample were used as inputs for model training and validation to identify the key predictors of disease severity.
the model also helps in distinguishing patients who require immediate medical assistance. research using clinically captured data often suffers from missing values, which can introduce bias or otherwise degrade analytical outcomes. a simple approach to this challenge is deleting the affected rows from further analysis, but this discards valuable information and can itself lead to biased estimates [25]. missing values were therefore imputed, both with multivariate imputation by chained equations (mice) and with a constant fill of -1. a diagnosis nomogram was constructed with alexander zlotnik's nomolog [29], based on the multivariable logistic regression model, and an analysis was carried out in stata to identify the threshold values at which the nomogram is clinically useful. the parameters are drawn as numerated horizontal scales, and the patient's values are located on each scale. a vertical line is drawn from each parameter scale down to a score axis; the five scores are summed to give a total score, which is linked to a death probability. according to the nomogram, a higher score corresponds to a higher death probability. the model was designed using the patients' initial blood samples; however, it can also be applied to biomarkers collected later during the hospital stay, to predict death probability longitudinally using the lnlca score. of the 375 patients, 174 (46.4%) died, while 201 (53.6%) recovered from covid-19 and were discharged from hospital (figure 1). to determine the independent variables associated with death, univariate logistic regression analysis was performed with the top-1, top-2, and up to top-10 features identified using two different techniques.
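a minimal sketch of chained-equation (mice-style) imputation, which the text contrasts with simply deleting incomplete rows. this is just one common way to do it in python, using scikit-learn's IterativeImputer on a tiny synthetic matrix; it is not the authors' code.

```python
# Sketch: MICE-style multiple imputation via scikit-learn's IterativeImputer.
# Each feature with missing values is modelled as a function of the other
# features, iterating until the imputations stabilise.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Synthetic biomarker matrix with scattered missing entries (np.nan)
X = np.array([
    [450.0, 12.0, 40.0],
    [np.nan, 30.0, 2.0],
    [600.0, np.nan, 80.0],
    [200.0, 28.0, np.nan],
])

X_filled = IterativeImputer(random_state=0).fit_transform(X)
print(np.isnan(X_filled).any())  # -> False: no missing values remain
```

unlike row deletion, every patient record is retained, so no outcome information is lost before model fitting.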
it is clear from figure 3 that the top-ranked 5 features produced the highest auc of 0.97 for data imputed using the mice algorithm, while the top-ranked 3 features produced the highest auc of 0.95 for the data imputed with -1 (figure 3). table 2 shows the overall accuracies and the weighted average performance for the other metrics for models using the top 1 to 10 features under 5-fold cross-validation with the logistic regression classifier, along with the confusion matrices for each case. a multivariable logistic regression-based nomogram for predicting early covid-19 mortality was built using the top-ranked five biomarkers that were found important both statistically and by the ml-based classifier (tables 1 and 2 and figure 3). the relationship between the linear prediction of death and these biomarkers was evaluated using multivariable logistic regression, as reported in table 3. the regression coefficients, z-values, standard errors and their statistical significance, along with 95% confidence intervals, are shown in table 3. the z-value is the ratio of a regression coefficient to its standard error, and typically indicates the strong and weak contributors in a logistic regression. the corresponding probability of death for a given lnlca score was determined from the model. figure 7 shows an example of the nomogram-based scoring system for a covid-19 patient with variable values at admission: individual scores for each predictor were calculated and added to produce a total score, and the death probability was calculated as 80%. this could be done as early as 9 days before the patient's death. furthermore, we categorized the patients from the training and testing subgroups into three risk groups (low, moderate and high) by associating the actual outcome with the outcome predicted using the lnlca score (table 4), to help prioritize the moderate- and high-risk group patients.
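the step the nomogram performs on paper (sum the per-predictor scores, then map the total onto a death probability) is just a logistic transform of the multivariable linear predictor. a sketch of that mapping is below; the intercept and coefficients are hypothetical placeholders, not the values reported in table 3.

```python
# Sketch of how a nomogram turns a linear predictor into a death probability.
# All coefficients and the intercept are hypothetical placeholders.
import math

coef = {"ldh": 0.004, "neut_pct": -0.05, "lymph_pct": -0.09,
        "hs_crp": 0.02, "age": 0.06}
intercept = -3.0

def death_probability(patient):
    """Logistic transform of the multivariable linear predictor."""
    lp = intercept + sum(coef[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-lp))

p = death_probability({"ldh": 700, "neut_pct": 90, "lymph_pct": 4,
                       "hs_crp": 110, "age": 72})
print(f"predicted death probability: {p:.2f}")
```

because the transform is monotonic, a higher total score always corresponds to a higher death probability, which is exactly the property the text attributes to the nomogram.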
there were 52 patients in the test set whose outcome was death after different durations of hospital stay; some patients were hospitalized at very late stages, while others were admitted early.

for the training set (n = 262, 100.0%): the p-value among the three groups is less than 0.001; low-risk vs moderate-risk, p < 0.001; low-risk vs high-risk, p < 0.001; moderate-risk vs high-risk, p < 0.001. for the test set (n = 113, 100.0%): the p-value among the three groups is less than 0.001; low-risk vs moderate-risk, p = 0.0037; low-risk vs high-risk, p < 0.001; moderate-risk vs high-risk, p < 0.001.

age was identified as a key predictor of mortality in previous studies on the coronavirus family, such as sars [30], middle east respiratory syndrome (mers) [31] and covid-19 [32]. this study reached similar conclusions, because with older age, immunosenescence and/or multiple medical conditions tend to make patients more prone to critical covid-19 illness [19]. yan et al. [16] showed that in patients with severe pulmonary interstitial disease there is a significant increase in ldh, which can be associated with lung injury or idiopathic pulmonary fibrosis [33]. consistent with that previous research, critically ill covid-19 patients in this study had elevated levels of ldh, suggesting increased activity and severity of lung injury. ldh is an intracellular enzyme that leaks from cells damaged by infection and viral replication, leading to elevated levels in circulation. recently, liu et al. [34] proposed that an increased neutrophil-to-lymphocyte ratio (nlr) can aid in the early prediction of the severity of covid-19 illness.
both neutrophils and lymphocytes are critical components of the immune system and play very important roles in host defense and in clearing infections. lymphopenia, a condition in which the number of lymphocytes in the blood is low, is a typical feature in covid-19 patients and may be a key factor in disease severity and mortality [35]. in this study we used neutrophil and lymphocyte percentages and, similar to previous studies, found that lower percentages of these two quantities were associated with severe covid-19. according to previous research, patients with community-acquired pneumonia show significant immune system activation and/or immune dysfunction, leading to changes in these quantities [35]. in addition, in the event of immunosuppression and apoptosis of lymphocytes caused by specific anti-inflammatory cytokines, the bone marrow releases neutrophils into circulation [36], resulting in an increased nlr. however, in contrast to other models, we observed in this study that both parameters were low for high-risk patients. lu et al. [37] stated that crp tested upon admission may assist in predicting confirmed or suspected short-term mortality associated with covid-19. crp is an acute-phase protein produced by hepatocytes in response to leukocyte-derived cytokines induced by infection, inflammation or tissue damage [38] [39] [40]. similar findings were obtained in this study, where increased crp levels were measured at admission for covid-19 patients at high mortality risk. this indicates that these patients developed serious lung inflammation or possibly a secondary bacterial infection, and clinical antibiotic treatment might be appropriate for them [21]. non-survivors in our study had lower lymphocyte and neutrophil percentages, and higher age, hs-crp and ldh, than survivors.
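the nlr mentioned above is a simple ratio, computable directly from the white-cell differential. a minimal sketch follows; the severity cut-off used in the example is a hypothetical illustration, not a value taken from liu et al.

```python
# Sketch: neutrophil-to-lymphocyte ratio (NLR) from the differential count.
# The cut-off below is a hypothetical illustration only.
def nlr(neutrophil_pct, lymphocyte_pct):
    """The ratio is the same whether computed from percentages or absolute
    counts, since the total white-cell count cancels out."""
    return neutrophil_pct / lymphocyte_pct

HYPOTHETICAL_CUTOFF = 3.0
print(nlr(80.0, 10.0))  # -> 8.0
print("elevated" if nlr(80.0, 10.0) > HYPOTHETICAL_CUTOFF else "normal")
```

this also illustrates why lymphopenia inflates the ratio: shrinking the denominator raises the nlr even if the neutrophil count is unchanged.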
in addition to dysregulation of the coagulation and/or immune systems, covid-19 severity was significantly linked to the inflammatory response to the infection. this can lead to other severe medical consequences such as ards, septic shock and coagulopathy. such a prognostic model will therefore aid in the development of a rational and personalized therapeutic plan for patients with critical illness. weng et al. [19] recently suggested that age, nlr, d-dimer and crp were individual key predictors correlated with death probability, and used them to create a nomogram for predicting death due to covid-19. in our research, the five key predictors recorded at admission were chosen by xgboost feature selection to create a nomogram-based prognostic model that exhibits excellent calibration and discrimination in predicting the death probability of covid-19 patients. it was also validated on an unseen validation cohort, and verified with the multiple blood samples collected from patients during their hospital stay; the model holds for those cases as well. the auc values for the development and validation cohorts showed strong discrimination of 0.961 and 0.991, respectively, using the proposed nomogram, which, to the best of our knowledge, outperforms other nomogram-based models for covid-19 mortality prediction. in addition, the nomogram-derived lnlca score offers a simple, easy-to-understand and interpretable early detection tool for stratifying high-risk covid-19 patients at admission, thereby assisting their clinical management. covid-19 patients were categorized into three groups with varying risk of death using the lnlca score measured at admission. low-risk cases could be isolated and treated in an isolation center, while moderate-risk patients could be treated in an isolation ward in a specialized hospital.
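the three-way triage described here and in the following paragraph reduces to two cut-points on the lnlca score. a sketch is below; the cut-points are hypothetical placeholders, since the paper derives its own thresholds from the training cohort.

```python
# Sketch of stratifying patients into three risk groups by LNLCA score at
# admission. LOW_MAX and MODERATE_MAX are hypothetical cut-points.
LOW_MAX, MODERATE_MAX = 50, 120

def risk_group(lnlca_score):
    if lnlca_score <= LOW_MAX:
        return "low"        # candidate for isolation-centre care
    if lnlca_score <= MODERATE_MAX:
        return "moderate"   # isolation ward in a specialised hospital
    return "high"           # close monitoring / ICU escalation

print([risk_group(s) for s in (30, 90, 160)])  # -> ['low', 'moderate', 'high']
```

because the score can be recomputed from any later blood sample, the same two cut-points also support the longitudinal monitoring the paper describes.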
on the other hand, patients in the high-risk group should be under close monitoring and moved to critical medical services or the icu for urgent treatment if required. this study has scope for further improvement, which will be addressed in future work. firstly, the study demonstrates the possibility of early mortality prediction from covid-19 clinical data, but the proposed machine learning method is purely data-driven and may vary if trained on different datasets; the model can be further improved with a larger dataset. secondly, the modelling principle adopted here is to use a minimal number of features for accurate prediction in order to avoid overfitting; this can be revisited with other models to identify alternative sets of best features on multi-center and multi-country data, to produce a generalized model. in summary, based on multiple risk factors (lactate dehydrogenase, neutrophils (%), lymphocytes (%), high-sensitivity c-reactive protein, and age), our nomogram can predict the prognosis of patients with covid-19 with good discrimination and calibration. the model can predict the patient's outcome far ahead of the day of the primary clinical outcome, with very high accuracy. therefore, the application of lnlca would help clinicians make an efficient and optimized patient stratification and management plan without overloading healthcare resources, and also reduce deaths through an improved and planned response. the authors also plan to further improve the performance of the model with the help of a larger, multi-center and multi-country dataset. the authors declare that they have no conflict of interest.

• clinical features of patients infected with 2019 novel coronavirus in wuhan, china
the lancet
• clinical characteristics of coronavirus disease 2019 in china
• clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in wuhan, china
• characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: summary of a report of 72 314 cases from the chinese center for disease control and prevention
• coronavirus disease 2019 (covid-19): situation report-107
• coronavirus disease 2019 (covid-19) situation report-68
• critical care utilization for the covid-19 outbreak in lombardy, italy: early experience and forecast during an emergency response
• projecting hospital utilization during the covid-19 outbreaks in the united states
• estimating the asymptomatic proportion of coronavirus disease 2019 (covid-19) cases on board the diamond princess cruise ship
• rapid progression to acute respiratory distress syndrome: review of current understanding of critical illness from coronavirus disease 2019 (covid-19) infection
• prevalence of comorbidities in the novel wuhan coronavirus (covid-19) infection: a systematic review and meta-analysis. international journal of infectious diseases
• comorbidity and its impact on 1590 patients with covid-19 in china: a nationwide analysis
• a machine learning-based model for survival prediction in patients with severe covid-19 infection
• severe outcomes among patients with coronavirus disease 2019 (covid-19)-united states
• validation of the kuwait progression indicator score for predicting progression of severity in covid19.
medrxiv
• andc: an early warning score to predict mortality risk for patients with coronavirus disease
• development and external validation of a prognostic multivariable model on admission for hospitalized patients with covid-19
• an interpretable mortality prediction model for covid-19 patients
• clinical characteristics of 82 death cases with covid-19
• clinical decision support tool and rapid point-of-care platform for determining disease severity in patients with covid-19
• d-dimer levels on admission to predict in-hospital mortality in patients with covid-19
• mice vs ppca: missing data imputation in healthcare
• xgboost: extreme gradient boosting. r package version 0.4-2
• ridge estimators in logistic regression
• a general-purpose nomogram generator for predictive logistic regression models
• prognostication in severe acute respiratory syndrome: a retrospective time-course analysis of 1312 laboratory-confirmed patients in hong kong
• epidemiological, demographic, and clinical characteristics of 47 cases of middle east respiratory syndrome coronavirus disease from saudi arabia: a descriptive study.
the lancet infectious diseases
• risk factors of fatal outcome in hospitalized subjects with coronavirus disease 2019 from a nationwide analysis in china
• staging of acute exacerbation in patients with idiopathic pulmonary fibrosis
• neutrophil-to-lymphocyte ratio predicts severe illness patients with 2019 novel coronavirus in the early stage
• lymphopenia in severe coronavirus disease-2019 (covid-19): systematic review and meta-analysis
• an increased alveolar cd4+ cd25+ foxp3+ t-regulatory cell ratio in acute respiratory distress syndrome is associated with increased 30-day mortality
• acp risk grade: a simple mortality index for patients with confirmed or suspected severe acute respiratory syndrome coronavirus 2 disease (covid-19) during the early stage of outbreak in wuhan
• predictive factors for pneumonia development and progression to respiratory failure in mers-cov infected patients
• dynamic changes and diagnostic and prognostic significance of serum pct, hs-crp and s-100 protein in central nervous system infection. experimental and therapeutic medicine
• high sensitive c-reactive protein: a new marker for urinary tract infection, vur and renal scar

ethical approval: this article uses the clinical data which was made publicly available by yan et al. [21]. therefore, the authors of this study were not involved with human participants or animals. however, the original retrospective study carried out by yan et al. [21] was approved by the tongji hospital ethics committee.

key: cord-011407-4cjlolp6
authors: cotton-barratt, owen; daniel, max; sandberg, anders
title: defence in depth against human extinction: prevention, response, resilience, and why they all matter
date: 2020-01-24
journal: glob policy
doi: 10.1111/1758-5899.12786
sha:
doc_id: 11407
cord_uid: 4cjlolp6

we look at classifying extinction risks in three different ways, which affect how we can intervene to reduce risk. first, how does it start causing damage?
second, how does it reach the scale of a global catastrophe? third, how does it reach everyone? in all three phases there is a defence layer that blocks most risks: first, we can prevent catastrophes from occurring. second, we can respond to catastrophes before they reach a global scale. third, humanity is resilient against extinction even in the face of global catastrophes. the largest probability of extinction is posed when all of these defences are weak, that is, by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against. we find that it's usually best to invest significantly into strengthening all three defence layers. we also suggest ways to do so tailored to the classes of risk we identify. lastly, we discuss the importance of underlying risk factors – events or structural conditions that may weaken the defence layers even without posing a risk of immediate extinction themselves.

• future research should identify synergies between reducing extinction and other risks. for example, research on climate change adaptation and mitigation should assess how we can best preserve our ability to prevent, respond to, and be resilient against extinction risks.

our framework for discussing extinction risks

human extinction would be a tragedy. for many moral views it would be far worse than merely the deaths entailed, because it would curtail our potential by wiping out all future generations and all value they could have produced (bostrom, 2013; parfit, 1984; rees, 2003, 2018). human extinction is also possible, even this century. both the total risk of extinction by 2100 and the probabilities of specific potential causes have been estimated using a variety of methods, including trend extrapolation, mathematical modelling, and expert elicitation; see rowe and beard (2018) for a review, as well as tonn and stiefel (2013) for methodological recommendations.
for example, pamlin and armstrong (2015) give probabilities between 0.00003% and 5% for different scenarios that could eventually cause irreversible civilisational collapse. to guide research and policymaking in these areas, it may be important to understand what kind of processes could lead to our premature extinction. people have considered and studied possibilities such as asteroid impacts (matheny, 2007), nuclear war (turco et al., 1983), and engineered pandemics (millett and snyder-beattie, 2017). in this article we will consider three different ways of classifying such risks. the motivating question behind the classifications we present is 'how might this affect policy towards these risks?' we proceed by identifying three phases in an extinction process at which people may intervene. for each phase, we ask how people could stop the process, because the different failure modes may be best addressed in different ways. for this reason we do not try to classify risks by the kind of natural process they represent, or by which life support system they undermine (unlike e.g. avin et al., 2018). an event causing human extinction would be unprecedented, so it is likely to have some feature or combination of features that is without precedent in human history. now, we see events with some unprecedented property all of the time, whether they are natural, accidental, or deliberate, and many of these will be bad for people. however, a large majority of those pose essentially zero risk of causing our extinction. why is it that some damaging processes pose risks of extinction, but many do not? by understanding the key differences we may be better placed to identify new risks and to form risk management strategies that attack their causes as well as other factors behind their destructive potential. we suggest that much of the difference can usefully be explained by three broad defence layers (figure 1): 1. first layer: prevention.
processes, natural or human, which help people are liable to be recognised and scaled up (barring defeaters such as coordination problems). in contrast, processes which harm people tend to be avoided and dissuaded. in order to be bad for significant numbers of people, a process must either require minimal assistance from people, or otherwise bypass this avoidance mechanism. 2. second layer: response. 1 if a process is recognised to be causing great harm (and perhaps to pose a risk of extinction), people may cooperate to reduce or mitigate its impact. in order to cause large global damage, it must impede this response, or have enough momentum that there is nothing people can do. 3. third layer: resilience. people are scattered widely over the planet. some are isolated from external contact for months at a time, or have several years' worth of stored food. even if a process manages to kill most of humanity, a surviving few might be able to rebuild. in order to cause human extinction, a catastrophe must kill everybody, or prevent a long-term recovery. the boundaries between these different types of risk-reducing activity aren't crisp, and one activity may help at multiple stages. but it seems that often activities will help primarily at one stage. we characterise prevention as reducing the likelihood that catastrophe strikes at all; it is necessarily done in advance. we characterise response as reducing the likelihood that a catastrophe becomes a severe global catastrophe (at the level which might threaten the future of civilisation). this includes reducing the impact of the catastrophe after it is causing obvious and significant damage, but the response layer might also be bolstered by mitigation work which is done in advance. finally, we characterise resilience as reducing the likelihood that a severe global catastrophe eventually causes human extinction. 2 successfully avoiding extinction could happen at each of these defence layers.
in the rest of the article we explore two consequences of this. first, we can classify damaging processes by the way in which we could stop them at the defence layers. in section 2, we'll look at a classification of risks by their origin: understanding different ways in which we could succeed at the prevention layer. in section 3, we'll look at the features which may allow us to block them at the response layer. in section 4, we'll classify risks by the way in which we could stop them from finishing everybody. we conclude each section with policy implications. each risk will thus belong to three classes, one per defence layer. for example, consider a terrorist group releasing an engineered virus that grows into a pandemic and eventually kills everyone. in our classification, we'll call this prospect a malicious risk with respect to its origin; a cascading risk with respect to its scaling mechanism of becoming a global catastrophe; and a vector risk in the last phase, which we've called endgame. we'll present more examples at the end of section 4 and in table 1. second, we present implications of our framework distinguishing three layers. in section 5, we discuss how to allocate resources between the three defence layers, concluding that in most cases all of prevention, response, and resilience should receive substantial funding and attention. in section 6, we highlight that risk management, in addition to monitoring specific hazards, must protect its defence layers by fostering favourable structural conditions such as good global governance. avin et al. (2018) have recently presented a classification of risks to the lives of a significant proportion of the human population. they classify such risks based on 'critical systems affected, global spread mechanism, and prevention and mitigation failure'. our framework differs from theirs in two major ways. first, with extinction risks we focus on a narrower type of risk.
this allows us, in section 4, to discuss what might stop global catastrophes from causing extinction, a question specific to extinction risks. second, even where the classifications cover the same temporal phase of a global catastrophe, they are motivated by different questions. avin et al. attempt a comprehensive survey of the natural, technological, and social systems that may be affected by a disaster, for example listing 45 critical systems in their second section. by contrast, we ask why a risk might break through a defence layer, and look for answers that abstract away from the specific system affected. for instance, in section 2, we'll distinguish between unforeseen, expected but unintended, and intended harms. we believe the two classifications complement each other well. avin and colleagues' (2018) discussion of prevention and response failures is congenial to our section 6 on underlying risk factors. their extensive catalogues of critical systems, spread mechanisms and prevention failures highlight the wide range of relevant scientific disciplines and stakeholders, and can help identify fault points relevant to particularly many risks. conversely, we hope that our coarser typology can guide the search for additional critical systems and spread mechanisms. we believe that our classification also usefully highlights different ways of protecting the same systems. for example, the risks from natural and engineered pandemics might best be reduced by different policy levers even if both affected the same critical systems and spread by the same mechanisms. lastly, our classification can help identify risk management strategies that would reduce whole clusters of risks. for example, restricting access to dangerous information may prevent many risks from malicious groups, irrespective of the critical system that would be targeted. our classification also overlaps with the one by liu et al. 
(2018), for example when they distinguish intended from other vulnerabilities or emphasise the importance of resilience. while the classifications otherwise differ, we believe ours contributes to their goal to dig 'beyond hazards' and surface a variety of intervention points. both the risks discussed by avin et al. (2018) and extinction risks by definition involve risks of a massive loss of lives. this sets them apart from other risks where the adverse outcome would also have global scale but could be limited to less severe damage such as economic losses. such risks are being studied by a growing literature on 'global systemic risk' (centeno et al., 2015). rather than reviewing that literature here, we'll point out throughout the article where we believe it contains useful lessons for the study of extinction risks. finally, it's worth keeping in mind that extinction is not the only outcome that would permanently curtail humanity's potential; see bostrom (2013) for other ways in which this could happen. a classification of these other existential risks is beyond the scope of this article, as is a more comprehensive survey of the large literature on global risks (e.g. baum and barrett, 2018; baum and handoh, 2014; bostrom and ćirković, 2008; posner, 2004).

avoiding catastrophe altogether is the most desirable outcome. the origin of a risk determines how it passes through the prevention layer, and hence the kind of steps society can take to strengthen prevention (figure 2). the simplest explanation for a risk to bypass our background prevention of harm-creating activities is if the origin is outside of human control: a natural risk. examples include a large enough asteroid striking the earth, or a naturally occurring but particularly deadly pandemic. we can sometimes take steps to avoid natural risks. for example, we may be able to develop methods for deflecting asteroids.
preventing natural risks generally requires proactive understanding and perhaps detection, for instance scanning for asteroids on earth-intersecting orbits. such risks share important properties with anthropogenic risks, as any explanation for how they might materialise must include an explanation of why the human-controlled prevention layer failed. all non-natural risks are in some sense anthropogenic, but we can classify them further. some may have a localised origin, needing relatively small numbers of people to trigger them. others require large-scale and widespread activity. in each case there are at least a couple of ways that the risk could get through the prevention layer. note that there is a spectrum in terms of the number of people who are needed to produce different risks, so the division between 'few people' and 'many people' is not crisp. we might think of the boundary as being around one hundred thousand or one million people, and things close to this boundary will have properties of both classes. however, it appears to us that for many of the plausible risks the number required is either much smaller (e.g., an individual or a cohesive group of people such as a company or military unit) or much larger than this (e.g., the population of a major power or even the whole world), so the qualitative distinction between 'few people' and 'many people' (and the different implications of these for responding) seems to us a useful one. also potentially relevant are the knowledge and intentions of the people conducting the risky activity: they may or may not foresee the harm, and may or may not intend it.

anthropogenic risks from small groups

the case of a risk where relatively few people are involved in triggering it, and they are unaware of the potential harm, is an unseen risk.
4 this is likely to involve a new kind of activity; it is most plausible with the development of unprecedented technologies (gpp, 2015), such as perhaps advanced artificial intelligence (bostrom, 2014), nanotechnology (auplat, 2012, 2013; umbrello and baum, 2018), or high-energy physics experiments (ord et al., 2010). the case of a localised unintentional trigger which was foreseen as a possibility (and whose dynamics are somewhat understood) is an accident risk. this could include a nuclear war starting because of a fault in a system or human error, or the escape of an engineered pathogen from an experiment despite safety precautions. if the harm was known and intended, we have a malicious risk. this is a scenario where a small group of people wants to do widespread damage; 5 see torres (2016, 2018b) for a typology and examples. malicious risks tend to be extreme forms of terrorism, where there is a threat which could cause global damage. turning to scenarios where many people are involved, we ask why so many would pursue an activity which causes global damage. perhaps they do not know about the damage. this is a latent risk. for them to remain ignorant for long enough, it is likely that the damage is caused in an indirect or delayed manner. we have seen latent risks realised before, but not ones that threatened extinction. for example, asbestos was used in a widespread manner before it was realised that it caused health problems. and it was many decades after we scaled up the burning of fossil fuels that we realised this contributed to climate change. if our climate turns out to be more sensitive than expected (nordhaus, 2011; wagner and weitzman, 2015; weitzman, 2009), and continued fossil fuel use triggers a truly catastrophic shift in climate, then this could be a latent risk today. in some cases people may be aware of the damage and engage in the activity anyway.
this failure to internalise negative externalities is typified by 'tragedy of the commons' scenarios, so we can call this a commons risk. for example, failure to act together to tackle global warming may be a commons risk (but lack of understanding of the dynamics causes a blur with latent risk). in general, commons risks require some coordination failure. they are therefore more likely if features of the risk inhibit coordination; see for example barrett (2016) and sandler (2016) for a game-theoretic analysis of such features. finally, there are cases where a large number of people engage in an activity to cause deliberate harm: conflict risk. this could include wars and genocides. wars share some features with commons risk: there are solutions which are better for everybody but are not reached. in most conflicts, actors are intentionally causing harm, but only as an instrumental goal. in the above we classify risks according to who creates the risk and their state of knowledge. we have done this because if we want to prevent risk it will often be most effective to go to the source. but we could also ask who is in a position to take actions to avoid the risk. in many cases those creating it have the most leverage, but in principle almost any actor could take steps to reduce the occurrence rate. if risk prevention is underprovided, this is likely to be a tragedy of the commons scenario, and to share characteristics with commons risk. from a moral and legal standpoint, intentionality often matters. the possibility of being found culpable is an important incentive for avoiding risk-causing activities, and part of risk management in most societies. if creating or hiding potential catastrophic risks is made more blameworthy, prevention will likely be more effective. unfortunately, culpability also often motivates concealment that can create or aggravate risk; see chernov and sornette (2015) for case studies of how this misincentive can weaken prevention and response.
this shows the importance of making accountability effectively enforceable.
• to be able to prevent natural risks, we need research aimed at identifying potential hazards, understanding their dynamics, and eventually developing ways to reduce their rate of occurrence.
• to avoid unseen and latent risks, we can promote norms such as appropriate risk management principles at institutions that engage in plausibly risky activities; note that there is an extensive literature on rivalling risk management principles (e.g. foster et al., 2000; o'riordan and cameron, 1994; sandin, 1999; sunstein, 2005; wiener, 2011), especially in the face of catastrophic risks (baum, 2015; bostrom, 2013; buchholz and schymura, 2012; sunstein, 2007, 2009; tonn, 2009; tonn and stiefel, 2014); advocating for any particular principle is beyond the scope of this article. see also jebari (2015) for a discussion of how heuristics from engineering safety may help prevent unseen, latent, and accident risks. regular horizon scanning may identify previously unknown risks, enabling us to develop targeted prevention measures. organisations must be set up in such a way that warnings of newly discovered risks reach decision-makers (see clarke and eddy, 2017, for case studies where this failed).
• accidents may be prevented by general safety norms that also help reduce unseen risk. in addition, building on our understanding of specific accident scenarios, we can design failsafe systems or follow operational routines that minimise accident risk. in some cases, we may want to eschew an accident-prone technology altogether in favour of safer alternatives. accident prevention may benefit from research on high reliability organisations (roberts and bea, 2001) and lessons learnt from historical accidents. where effective prevention measures have been identified, it may be beneficial to codify them through norms and law at the national and international levels.
alternatively, if we can internalise the expected damages of accidents through mechanisms such as insurance, we can leverage market incentives (see note 6).
• solving the coordination problems at the heart of commons and conflict risks is sometimes possible by fostering national or international cooperation, be it through building dedicated institutions or through establishing beneficial customs (see note 7). one idea is to give a stronger political voice to future generations (jones et al., 2018; tonn, 1991, 2018).
• lastly, we can prevent malicious risks by combating extremism. technical (trask, 2017) as well as institutional (lewis, 2018) innovations may help with governance challenges in this area, a survey of which is beyond the scope of this article.
• note that our classification by origin is aimed at identifying policies that would, if successfully implemented, reduce a broad class of risks. developing policy solutions is, however, just one step toward effective prevention. we must then also actually implement them, which may not happen due to, for example, free-riding incentives. our classification does not speak to this implementation step. avin et al. (2018) congenially address just this challenge in their classification of prevention and mitigation failures.

classification by scaling mechanism: types of response failure

for a catastrophe to become a global catastrophe, it must eventually have large effects despite our response aimed at stopping it. to understand how this can happen, it's useful to look at the time when we could first react. effects must then either already be large or scale up by a large factor afterwards (figure 3). if the initial effects are large, we will simply say that the risk is large. if not, we can look at the scaling process. if massive scaling happens in a small number of steps, we say there is leverage in play. if scaling in all steps is moderate, there must be quite a lot of such steps; in this case we say that the risk is cascading.
paradigm examples of catastrophes of an immediately global scale are large sudden-onset natural disasters such as asteroid strikes. since we cannot respond to them at a smaller-scale stage, mitigation measures we can take in advance (part of the second defence layer, as they would reduce damage after it has started) and the other defence layers of prevention and resilience are particularly important for reducing such risks. prevention and mitigation may benefit from detecting a threat, say an asteroid, early, but in our classification this is different from responding after there has been some actual small-scale damage. leverage points for rapid one-step scaling can be located in natural systems, for example if the extinction of a key species caused an ecosystem to collapse. however, it seems to us that leverage points are more common in technological or social systems that were designed to concentrate power or control. risks of both natural and anthropogenic origin may interact with such systems. for instance, a tsunami triggered the 2011 disaster at the fukushima daiichi nuclear power plant. anthropogenic examples include nuclear war (possible for a few individuals to trigger when linked to a larger chain of command and control) or attacks on weak points in key global infrastructure. responding to leverage risks is challenging because there are only a few opportunities to intervene. on the other hand, blocking even one step of leveraged growth would be highly impactful. this suggests that response measures may be worthwhile if they can be targeted at the leverage points. with the major exception of escalating conflicts, cascading risks normally cascade in a way which does not rely on humans deciding to further the effects. a typical example is the self-propagating growth of an epidemic. as automation becomes more widespread, there will be larger systems without humans in the loop, and thus perhaps more opportunities for different kinds of cascading risk.
since cascading risks are those which have a substantial amount of growing effects after we're able to interact with them, it seems likely that they will typically give us more opportunities to respond, and that response will therefore be an important component of risk reduction. for risks which cascade exponentially (such as epidemics), an earlier response may be much more effective than a later one. reducing the rate of propagation is also effective if there exist other interventions that can eventually stop or revert the damage. however, there are a few secondary risk-enabling properties that can weaken the response layer and therefore help damage cascade to a global catastrophe which we could have stopped. for example, a cascading risk may:
• impede cooperation: by preventing a coordinated response, the likelihood of a global catastrophe is increased. cooperation is harder when communication is limited, when it is hard to observe defection, or when there is decreased trust.
• not obviously present a risk: the longer a cascading risk is under-recognised, the more it can develop before any real response. for example, long-incubation pathogens can spread further before their hazard becomes apparent.
• be on extreme timescales: if the risk presents and cascades very fast, there is little opportunity for any response. johnson et al. (2012) analyse such 'ultrafast' events, using rapid changes in stock prices driven by trading algorithms as an example (braun et al., 2018, however, find that most of these 'mini flash crashes' are dominated by a single large order rather than being the result of a cascade). note, however, that which timescales count as relevantly 'fast' depends on our response capabilities; technological and institutional progress may result in faster-cascading threats but also in opportunities to respond faster.
on the other hand, people may be bad at addressing problems that won't manifest for generations, as is the case for some impacts of global warming.

policy implications for responding to extinction risk

• by their nature, we cannot respond to large risks before they become a global catastrophe. of particular importance for such risks are therefore: mitigation that can be done in advance, and the defence layers of prevention and resilience.
• leverage risks provide us with the opportunity of a leveraged response: we can identify leverage points in advance and target our responses at them.
• while the details of responses to cascading risks must be tailored to each specific case, we can highlight three general recommendations. first, detect damage early, when a catastrophe is still easy to contain. second, reduce the time lag between detection and response, for example by continuously maintaining response capabilities and having rapidly executable contingency plans in place. third, ensure that planned responses won't be stymied by the cascading process itself; for example, don't store contingency plans for how to respond to a power outage on computers (see note 8).

for a global catastrophe to cause human extinction, it must in the end stop the continued survival of the species. this could be direct: killing everyone (see note 9); or indirect: removing our ability to continue flourishing over a longer period (figure 4). in order to kill everyone, the catastrophe must reach everyone. we can further classify direct risks by how they reach everyone. the simplest way this could happen is if it is everywhere that people are or could plausibly be: a ubiquity risk. if the entire planet is struck by a deadly gamma ray burst, or enough of a deadly toxin is dispersed through the atmosphere, this could plausibly kill everyone. if it doesn't reach everywhere people might be, a direct risk must at least reach everywhere that people in fact are.
this might occur when people have carried it along with them: a vector risk. this includes risk from pandemics (if they are sufficiently deadly and have a long enough incubation period that they are spread everywhere), or perhaps risks which are spread by memes (dawkins, 1976), or which come from some technological artefacts which we carry everywhere. note that to directly cause extinction, a vector would need to impact hard-to-reach populations including 'disaster shelters, people working on submarines, and isolated peoples' (beckstead, 2015a, p. 36). if not ubiquitous and not carried with the people, we would have to be extraordinarily unlucky for it to reach everyone by chance. setting this aside as too unlikely, we are left with agency risk: deliberate actors trying to reach everybody. the actors could be humans or nonhuman intelligence (perhaps machine intelligence or even aliens). agency risk probably means someone deliberately trying to ensure nobody survives, which may make it easier for the risk to get through the resilience layer by allowing anticipation of and response to possible survival plans. in principle agency risk includes cases where someone is deliberately trying to reach everyone, and only by accident does so in a way that kills them. if the risk threatens extinction without killing everyone, it must reduce our long-term ability to survive as a species. this could include a very broad range of effects, but we can break them up according to the kind of ability it impedes. habitat risks make long-term survival impossible by altering or destroying the environment we live in so that it cannot easily support human life. for example, a large enough asteroid impact might throw up dust which could prevent us from growing food for many years; if this lasted long enough, it could lead to human extinction. alternatively, an environmental change which lowered the average number of viable offspring to below replacement rates could pose a habitat risk.
capability risks knock us back in a way that permanently removes an important societal capability, leading in the long run to extinction. one example might be moving to a social structure which precluded the ability to adapt to new circumstances. we are gesturing towards a distinction between habitat risks and capability risks, rather than drawing a sharp line. habitat risks work through damage to an external environment, while capability risks work through damage to more internal social systems (or even biological or psychological factors). capability risks are also even less direct than habitat risks, perhaps taking hundreds or thousands of years to lead to extinction. indeed, there is not a clear line between capability risks and events which damage our capabilities but are not extinction risks (cf. section 6). nonetheless, when considering risks of human extinction it may be important to account for events which could cause the loss of fragile but important capabilities. an important type of capability risk may be civilisational collapse. it is possible that killing enough people and destroying enough infrastructure could lead to a collapse of civilisation without causing immediate extinction. if this happens, it is then plausible that it might never recover, or recover in a less robust form and be wiped out by some subsequent risk. it is an open and important question how likely this permanent loss of capability is (beckstead, 2015b). if it is likely, the resilience layer may be particularly important to reinforce, perhaps along the lines proposed by maher and baum (2013). on the other hand, if even large amounts of destruction have only small effects on the chances of eventual extinction, it becomes more important to focus on risks which can otherwise get past the resilience layer. we finally illustrate our completed classification scheme by applying it to examples, which we summarise in table 1.
throughout the text, we've repeatedly referred to an asteroid strike that might cause extinction due to an ensuing impact winter. we've called this a natural risk regarding its origin; a large risk regarding scale, with no opportunity to intervene between the asteroid impact and its damage affecting the whole globe; and, if we assume that humanity dies out because climatic changes remove the ability to grow crops, a habitat risk in the endgame phase. our next pair of examples illustrates that risks with the same salient central mechanism, in this case nuclear war, may well differ during other phases. consider first a nuclear war precipitated by a malfunctioning early warning system; that is, a nuclear power launching what turns out to be a first strike because it falsely believed that its own nuclear destruction was imminent. suppose further that this causes a nuclear winter, leading to human extinction. this would be an accident that scales via leverage, and finally manifests as a habitat risk. contrast this with the intentional use of nuclear weapons in an escalating conventional war, and assume further that this either doesn't cause a nuclear winter or that some humans are able to survive despite adverse climatic conditions. instead, humanity never recovers from the widespread destruction, and is eventually wiped out by some other catastrophe that could have easily been avoided by a technologically advanced civilisation. this second scenario would be a conflict that again scaled via the leverage associated with nuclear weapons, but then finished off humanity by removing a crucial capability rather than via damage to its habitat. we close by applying our classification to a more speculative risk we might face this century. some scholars (e.g.
bostrom, 2014) have warned that progress in artificial intelligence (ai) could at some point allow unforeseen rapid self-improvement in some ai system, perhaps one that uses machine learning and can autonomously acquire additional training data via sensors or simulation. the concern is that this could result in a powerful ai agent that deliberately wipes out humanity to pre-empt interference with its objectives (see omohundro, 2008, for an argument why such pre-emption might be plausible). to the extent that we currently don't know of any machine learning algorithms that could exhibit such behaviour, this would be an unseen risk; the scaling would be via leverage if we assume a discrete algorithmic improvement as trigger, or alternatively the risk could be rapidly cascading; in the endgame, this scenario would present an agency risk.
• to guard against what today would be ubiquity risks, we may in the future be able to establish human settlements on other planets (armstrong and sandberg, 2013; see note 10).
• vector risks may not reach people in isolated and self-sufficient communities. establishing disaster shelters may hence be an attractive option. self-sufficient shelters can also reduce habitat risk. jebari (2015) discusses how to maximise the resilience benefits from shelters, while beckstead (2015a) has argued that their marginal effect would be limited due to the presence of isolated peoples, submarine crews, and existing shelters.
• resilience against vector and agency risks may be increased by late-stage response measures that work even in the event of widespread damage to infrastructure and the breakdown of social structure. an example might be the 'isolated, self-sufficient, and continuously manned underground refuges' suggested by jebari (2015, p. 541).

in this section we will use our guiding idea of three defence layers to present a way of calculating the extinction probability posed by a given risk.
we'll draw three high-level conclusions: first, the most severe risks are those which have a high probability of breaking through all three defence layers. second, when allocating resources between the defence layers, rather than comparing absolute changes in these probabilities we should assess how often we can halve the probability of a risk getting through each layer. third, it's best to distribute a sufficiently large budget across all three defence layers. we are interested in the probability p that a given risk r will cause human extinction in a specific timeframe, say by 2100. whichever three classes r belongs to, in order to cause extinction it needs to get past all three defence layers; its associated extinction probability p is therefore equal to the product of three factors:
1. the probability c of r getting past the first barrier and causing a catastrophe;
2. the conditional probability g that r gets past the second barrier to cause a global catastrophe, given that it has passed the first barrier; and
3. the conditional probability e that r gets past the third barrier to cause human extinction, given that it has passed the second barrier.
in short: p = c · g · e. each of c, g, and e can get extremely small for some risks. but the extinction probability p will be highest when all three terms are non-negligible. hence we get our (somewhat obvious) first conclusion that the most concerning risks are those which can plausibly get past all three defence layers. however, most concerning doesn't necessarily translate into the most valuable to act on. suppose we'd like to invest additional resources into reducing risk r. we could use them to strengthen any of the three defences, which would make it less likely that r passes that defence. we should then compare relative rather than absolute changes to these probabilities, which is our second conclusion. that is, to minimise the extinction probability p we should ask which of c, g, and e we can halve most often.
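to make the arithmetic concrete, here is a toy calculation (our illustration, not the article's; the probability values are made up) showing that halving any one factor of p = c · g · e halves p, while the value of a fixed absolute reduction depends on the other two layers:

```python
# toy illustration of p = c * g * e: the probability that a risk causes
# extinction is the product of the probabilities of breaching each of the
# three defence layers (prevention, response, resilience).

def extinction_probability(c, g, e):
    """p = c * g * e, with c, g, e the per-layer breach probabilities."""
    return c * g * e

# made-up numbers for a hypothetical risk
c, g, e = 0.1, 0.5, 0.2
p = extinction_probability(c, g, e)  # 0.01

# halving any single factor halves p...
assert abs(extinction_probability(c / 2, g, e) - p / 2) < 1e-12

# ...whereas the effect of the same *absolute* reduction depends on the
# other two layers: reducing c by 0.05 lowers p by 0.05 * g * e.
delta = p - extinction_probability(c - 0.05, g, e)
assert abs(delta - 0.05 * g * e) < 1e-12
```

this is why comparing relative rather than absolute changes is the natural yardstick when deciding which layer to strengthen.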
this is because the same relative change of each probability will have the same effect on the extinction probability p: halving any one of c, g, or e will halve p. by contrast, the effect of the same absolute change will vary depending on the other two probabilities; for instance, reducing c by 0.1 reduces p by 0.1 · g · e. in particular, a given absolute change will be more valuable if the other two probabilities are large. when one of c, g, or e is close to 100%, it may be much harder to reduce it to 50% than it would be to halve a smaller probability. the principle of comparing how often we can halve c, g, and e then implies that we're better off reducing probabilities not close to 100%. for example, consider a large asteroid striking the earth. we could take steps to avoid it (for example by scanning and deflecting), and we could take steps to increase our resilience (for example by securing food production). but if a large asteroid does cause a catastrophe, it seems very likely to cause a global catastrophe, and it is unclear that there is much to be done to reduce the risk at the scaling stage. in other words, the probability g is close to 1 and prohibitively hard to substantially reduce. we therefore shouldn't invest resources into futile responses, but instead use them to strengthen both prevention and resilience. what if each defence layer has a decent chance of stopping a risk? we'll then be best off by allocating a non-zero chunk of funding to all three of them: a strategy of defence in depth, our third conclusion. the reason is just the familiar phenomenon of diminishing marginal returns on resources. it may initially be best to strengthen a particular layer, but once we've taken the low-hanging fruit there, investing in another layer (or in reducing another risk) will become equally cost-effective. of course, our budget might be exhausted earlier. defending in depth therefore tends to be optimal if and only if we can spend relatively much in total.
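the third conclusion can be illustrated with a toy allocation model. this is our sketch, not the article's: we assume (arbitrarily) that each successive halving of a layer's breach probability costs proportionally more, which produces the diminishing returns discussed above:

```python
# toy model of allocating a budget across the three defence layers.
# assumption (not from the article): successive halvings of a layer's breach
# probability get more expensive, so returns diminish. the cost of the k-th
# halving of layer i is base_cost[i] * k.

import heapq

def allocate(base_cost, budget):
    """greedily buy the cheapest remaining halving until the budget runs out.

    returns the number of halvings bought per layer."""
    halvings = {layer: 0 for layer in base_cost}
    # priority queue of (cost of next halving, layer)
    heap = [(cost, layer) for layer, cost in base_cost.items()]
    heapq.heapify(heap)
    while heap:
        cost, layer = heapq.heappop(heap)
        if cost > budget:
            break  # the cheapest remaining halving is unaffordable
        budget -= cost
        halvings[layer] += 1
        # the next halving of this layer costs more: diminishing returns
        heapq.heappush(heap, (base_cost[layer] * (halvings[layer] + 1), layer))
    return halvings

# made-up per-layer costs of the first halving
base_cost = {"prevention": 1.0, "response": 2.0, "resilience": 3.0}

# a small budget is concentrated entirely on the cheapest layer...
print(allocate(base_cost, 3))   # → {'prevention': 2, 'response': 0, 'resilience': 0}
# ...while a large budget is spread across all three layers
print(allocate(base_cost, 60))  # → {'prevention': 8, 'response': 3, 'resilience': 2}
```

under this assumed cost structure the greedy optimum reproduces the conclusion in the text: with little to spend, concentrate on one layer; with enough to spend, defend in depth.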
we close by discussing some limitations of our analysis. first, we remain silent on the optimal allocation of resources between different risks (rather than between different layers for a fixed risk or basket of risks); indeed, as we'll argue in section 6, comprehensively answering the question of how to optimally allocate resources intended for extinction risk reduction requires us to look beyond even the full set of extinction risks. we do hope that our work could prove foundational for further research that investigates allocation between risks and between defence layers simultaneously. indeed, it would be straightforward to consider several risks p_i = c_i · g_i · e_i, i = 1, ..., n; assuming specific functional forms for how the probabilities c_i, g_i, and e_i change in response to invested resources could then yield valuable insights. second, we have not considered interactions between different defence layers or different risks (graham et al., 1995; baum, 2019; baum and barrett, 2017; martin and pindyck, 2015). these can present as both trade-offs and synergies. for example, traffic restrictions in response to a pandemic might slow down research on a treatment that would render the disease non-fatal, thus harming the resilience layer; on the other hand, they may inadvertently help with preventing malicious risk or being resilient against agency risk.
• the most important extinction risks to act on are those that have a non-negligible chance of breaking through all three defence layers: risks where we have a realistic chance of failing to prevent them, a realistic chance of failing to successfully respond to them, and a realistic chance of failing to be resilient against them.
• due to diminishing marginal returns, when budgets are high enough it will often be best to maintain a portfolio of significant investment into each of prevention, response, and resilience.
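the multi-risk extension p_i = c_i · g_i · e_i mentioned above can likewise be sketched. in this toy model (the risk names, probabilities, and cost parameters are all made up), a marginal unit of effort goes to whichever (risk, layer) pair buys the largest reduction in total extinction probability per unit cost:

```python
# toy sketch of the multi-risk extension p_i = c_i * g_i * e_i. each
# (risk, layer) pair has an assumed cost to halve its breach probability;
# a marginal unit of effort should go wherever it buys the largest
# reduction in total extinction probability.

risks = {
    # made-up per-layer breach probabilities [c, g, e] and halving costs
    "pandemic": {"probs": [0.05, 0.4, 0.1], "halve_cost": [3.0, 1.0, 2.0]},
    "asteroid": {"probs": [0.01, 0.99, 0.3], "halve_cost": [2.0, 50.0, 4.0]},
}

def total_extinction_probability(risks):
    """1 - prod(1 - p_i), treating the risks as independent."""
    survive = 1.0
    for r in risks.values():
        c, g, e = r["probs"]
        survive *= 1.0 - c * g * e
    return 1.0 - survive

def best_marginal_move(risks):
    """find the (risk, layer) pair whose halving most reduces the total
    extinction probability per unit cost."""
    best, best_value = None, 0.0
    for name, r in risks.items():
        c, g, e = r["probs"]
        p = c * g * e
        for layer in range(3):
            reduction = p / 2  # halving one factor halves p_i
            value = reduction / r["halve_cost"][layer]
            if value > best_value:
                best, best_value = (name, layer), value
    return best

print(best_marginal_move(risks))  # → ('pandemic', 1)
```

note that the asteroid's response layer, with g close to 1 and a high assumed halving cost, scores worst here, matching the earlier point about not funding futile responses.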
in sections 2-4 we have considered ways of classifying threats that may cause human extinction and the pathways through which they may do so. our classification was based on the three defence layers of prevention, response, and resilience. giving centre stage to the defence layers provides the following useful lens for extinction risk management. if our main goal is to reduce the likelihood of extinction, we can equivalently express this by saying that we should aim to strengthen the defence layers. indeed, extinction can only become less likely if at least one particular extinction risk is made less likely; in turn this requires that it has a smaller chance of making it past at least one of the defence layers. this is significant because there is a spectrum of ways to improve our defences depending on how narrowly our measures are tailored to specific risks. at one extreme, we can increase our capacity to prevent, respond to, or be resilient against one risk; for example, we can research methods to deflect asteroids. in between are measures to defend against a particular class of risk, as we've highlighted in our policy recommendations. at the other extreme is the reduction of underlying risk factors that weaken our capacity to defend against many classes of risks. risk factors need not be associated with any potential proximate cause of extinction. for example, consider regional wars; even when they don't escalate to a global catastrophe, they could hinder global cooperation and thus impede many defences. global catastrophes constitute one important type of risk factor. we already discussed the possibility of them making earth uninhabitable or removing a capability that would be crucial for long-term survival. but even if they do neither of these, they can severely damage our defence layers. in particular, getting hit by a global catastrophe followed in short succession by another might be enough to cause extinction when neither alone would have done so. 
there are significant historical examples of such compound risks below the extinction level. for instance, the deadliest accident in aviation history occurred when two planes collided on an airport runway; this was only possible because a previous terrorist attack on another airport had caused congestion due to rerouted planes, which disabled the prevention measure of using separate routes for taxiing and takeoff (weick, 1990). when considering catastrophes we should therefore pay particular attention to negative impacts they may have on the defence layers. our capacity to defend also depends on various structural properties that can change in gradual ways even in the absence of particularly conspicuous events. for example, the resilience layer may be weakened by continuous increases in specialisation and global interdependence. this can be compared with the model of synchronous failure suggested by homer-dixon et al. (2015). they describe how the slow accumulation of multiple simultaneous stresses makes a system vulnerable to a cascading failure. it is beyond the scope of this article to attempt a complete survey of risk factors; we merely emphasise that they should be considered. we do hope that our classifications in sections 2-4 may be helpful in identifying risk factors. for example, thinking about preventing conflict and commons risks may point us to global governance, while having identified vector and agency risks may highlight the importance of interdependence (even though, upon further scrutiny, these risk factors turn out to be relevant for many other classes of risk as well). we conclude that the allocation of resources between layers defending against specific risks, which we investigated in section 5, is not necessarily the most central task of extinction risk management. it is an open and important question whether reducing specific risks, clusters of risks, or underlying risk factors is most effective on the margin.
the study and management of extinction risks are challenging for several reasons. cognitive biases make it hard to appreciate the scale and probability of human extinction (wiener, 2016; yudkowsky, 2008). most of the potential people affected are in future generations, whose interests aren't well represented in our political systems. hazards can arise and scale in many different ways, requiring a variety of disciplines and stakeholders to understand and stop them. and since there is no precedent for human extinction, we struggle with a lack of data. faced with such difficult terrain, we have considered the problem from a reasonably high level of abstraction; we hope thereby to focus attention on the most crucial aspects. if this work is useful, it will be as a foundation for future work or decisions. in some cases our classification might provoke thoughts that are helpful directly for decision-makers who engage with specific risks. however, we anticipate that our work will be most useful in informing the design of systems for analysing and prioritising between several extinction risks, or in informing the direction of future research. data sharing is not applicable to this article as no new data were created or analysed.

notes

we are particularly indebted to toby ord for several very helpful comments and conversations. we also thank scott janzwood, sebastian farquhar, martina kunz, huw price, seán ó héigeartaigh, shahar avin, the audience at a seminar at cambridge's centre for the study of existential risk (cser), and two anonymous reviewers for helpful comments on earlier drafts of this article. we're also grateful to eva-maria nag for comments on our policy suggestions. the contributions of owen cotton-barratt and anders sandberg to this article are part of a project that has received funding from the european research council (erc) under the european union's horizon 2020 research and innovation programme (grant agreement no 669751).

1.
in the terminology of the united nations office for disaster risk reduction (undrr, 2016), response denotes the provision of emergency services and public assistance during and immediately after a disaster. in our usage, we include any steps which may prevent a catastrophe scaling to a global catastrophe. this could include work traditionally referred to as mitigation.
2. the concept of resilience, originally coined in ecology (holling, 1973), is today widely used in the analysis of risks of many types (e.g. folke et al., 2010). in undrr (2016) terminology, resilience refers to '[t]he ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management.' in this article, we usually use resilience to specifically denote the ability of humanity as a whole to recover from a global catastrophe in a way that enables its long-term survival. this ability may in turn depend on the resilience of many smaller natural, technical, and socio-ecological systems.
3. strictly, knowledge and intentionality are two separate dimensions; however, it is essentially impossible to intend the harm without being aware of the possibility, so we treat it as a spectrum with ignorance at one end, intent at the other end, and knowledge without intent in the middle. again, there is some blur between these: there are degrees of awareness about a risk, and an intention of harm may be more or less central to an action.
4. there are degrees of lack of foresight of the risk. cases where the people performing the activity are substantially unaware of the risks have many of the relevant features of this category, even if they have suspicions about the risks, or other people are aware of the risks.
5.
they may not intend for that damage to cause human extinctionfor the purposes of acting on this classification it's more useful to know whether they were trying to cause harm. 6. we thank an anonymous reviewer for suggesting the policy responses of avoiding dangerous technologies and mandating insurance. 7. global coordination more broadly may however be a double edged tool, since increased interdependency if not well managed can also increase the chance of systemic risks (goldin & mariathasan, 2014) . 8. we thank an anonymous reviewer for suggesting both the third general recommendation and the example. 9. what about a risk that directly kills, say, 99.9999% of people? technically this poses only an indirect risk, since to cause extinction it needs to remove the capability of the survivors to recover. however, if the proportion threatened is high enough then we can reason that it must also have a way of reaching essentially everyone, so the analysis of direct risks will also be relevant. 10. some scholars have argued that humanity expanding into space would increase other risks; see for example an interview (deudney, n.d.) and an upcoming book (deudney, forthcoming) by political scientist daniel deudney and torres (2018a) . assessing the overall desirability of space colonisation is beyond the scope of this article. eternity in six hours: intergalactic spreading of intelligent life and sharpening the fermi paradox the challenges of nanotechnology policy making part 1. discussing mandatory frameworks the challenges of nanotechnology policy making part 2. 
discussing voluntary frameworks and options classifying global catastrophic risks collective action to avoid catastrophe: when countries succeed, when they fail, and why risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats risk-risk tradeoff analysis of nuclear explosives for asteroid deflection', risk analysis towards an integrated assessment of global catastrophic risk global catastrophes: the most extreme risks integrating the planetary boundaries and global catastrophic risk paradigms how much could refuges help us recover from a global catastrophe? the long-term significance of reducing global catastrophic risks', the givewell blog existential risk prevention as global priority superintelligence: paths, dangers, strategies global catastrophic risks impact and recovery process of mini flash crashes: an empirical study expected utility theory and the tyranny of catastrophic risks the emergence of global systemic risk man-made catastrophes and risk information concealment: case studies of major disasters and human fallibility warnings: finding cassandras to stop catastrophes the selfish gene an interview with daniel deudney forthcoming) dark skies: space expansionism, planetary geopolitics, and the ends of humanity resilience thinking: integrating resilience, adaptability and transformability science and the precautionary principle the butterfly defect: how globalization creates systemic risks, and what to do about it policy brief: unprecedented technological risks resilience and stability of ecological systems synchronous failure: the emerging causal architecture of global crisis existential risks: exploring a robust risk reduction strategy financial black swans driven by ultrafast machine ecology representation of future generations in united kingdom policy-making horsepox synthesis: a case of the unilateralist's curse? 
- governing boring apocalypses: a new typology of existential vulnerabilities and exposures for existential risk research
- adaptation to and recovery from global catastrophe, sustainability
- averting catastrophes: the strange economics of scylla and charybdis
- reducing the risk of human extinction
- existential risk and cost-effective biosecurity
- the economics of tail events with an application to climate change
- the basic ai drives
- probing the improbable: methodological challenges for risks with low probabilities and high stakes
- interpreting the precautionary principle
- global challenges: 12 risks that threaten human civilization
- reasons and persons
- catastrophe: risk and response
- our final hour: a scientist's warning: how terror, error, and environmental disaster threaten humankind's future in this century - on earth and beyond
- on the future: prospects for humanity
- must accidents happen? lessons from high-reliability organizations
- probabilities, methodologies and the evidence base in existential risk assessments. working paper, centre for the study of existential risk
- dimensions of the precautionary principle
- strategic aspects of difficult global challenges
- laws of fear: beyond the precautionary principle
- the catastrophic harm precautionary principle, issues in legal scholarship
- worst-case scenarios
- the court of generations: a proposed amendment to the us constitution
- obligations to future generations and acceptable risks of human extinction
- philosophical, institutional, and decision making frameworks for meeting obligations to future generations
- evaluating methods for estimating existential risks
- human extinction risk and uncertainty: assessing conditions for action
- agential risks: a comprehensive introduction
- space colonization and suffering risks: reassessing the "maxipok rule"
- agential risks and information hazards: an unavoidable but dangerous topic?, futures, 95
- safe crime prediction: homomorphic encryption and deep learning for more effective
- nuclear winter: global consequences of multiple nuclear explosions
- evaluating future nanotechnology: the net societal impacts of atomically precise manufacturing
- report of the open-ended intergovernmental expert working group on indicators and terminology relating to disaster risk reduction
- climate shock: the economic consequences of a hotter planet
- the vulnerable system: an analysis of the tenerife air disaster
- on modeling and interpreting the economics of catastrophic climate change
- the rhetoric of precaution
- the tragedy of the uncommons: on the politics of apocalypse
- cognitive biases potentially affecting judgment of global risks

global policy (2020) © 2020 the authors

owen cotton-barratt is a mathematician at the future of humanity institute, university of oxford.
his research concerns high-stakes decision-making in cases of deep uncertainty, including normative uncertainty, future technological developments, unprecedented accidents, and untested social responses. max daniel is a senior research scholar at the future of humanity institute, university of oxford. his research interests include existential risks, the governance of risks from transformative artificial intelligence, and foundational questions regarding our obligations and abilities to help future generations. anders sandberg is a senior research fellow at the future of humanity institute, university of oxford. his research deals with the management of low-probability high-impact risks, societal and ethical issues surrounding human enhancement, estimating the capabilities of future technologies, and very long-range futures.

key: cord-023473-ofwdzu5t authors: tan, wei-jiat; enderwick, peter title: managing threats in the global era: the impact and response to sars date: 2006-06-26 journal: nan doi: 10.1002/tie.20107 sha: doc_id: 23473 cord_uid: ofwdzu5t

in early 2003, the sars virus brought disruption of public and business activities in many areas of the world, particularly asia. as a result of its impact, sars quickly established itself as a new kind of global uncertainty and posed challenges for traditional methods of risk management. this article examines the impact that sars has had through means of a case study and builds on this to provide recommendations for how uncertainty may be managed in an increasingly globalized world. reconsideration of strategic and risk-management approaches has become necessary. supply-chain management and corporate strategy require a fundamental rethink to balance the pursuit of efficiency with increased responsiveness and flexibility. unpredictability and turbulence in the international business environment suggest that traditional planning approaches that assume linear growth may give way to more scenario-based planning.
this will encourage firms to contemplate a variety of possible futures and better prepare them for unanticipated events. similarly, contingent-based continuity plans help businesses continue running even during a crisis. © 2006 wiley periodicals, inc.

managing threats in the global era: the impact and response to sars
wei-jiat tan ■ peter enderwick

sars first emerged in guangdong province in november 2002 (horstman, 2003). largely due to the failure of the chinese authorities to recognize the seriousness of the problem or provide international notification, sars quickly spread throughout china (enderwick, 2003) and, in february 2003, reached hong kong. hong kong was to provide the global accelerant from which sars quickly spread, particularly to neighboring asian countries, including vietnam, singapore, and taiwan.
the high incidence of travel between toronto and asia also saw the outbreak of sars in canada (enderwick, 2003). over the course of the outbreak, sars infected more than 8,000 people and left more than 900 dead in 32 countries, with 349 of those fatalities recorded in mainland china. while infectious epidemics are by no means a new phenomenon, there is little doubt that sars had a greater impact on the international business environment than its predecessors. this is largely because countries and economies are now more interconnected than before, allowing for easy transmission of a virus like sars. while a literature does exist on the management of risk, sars is indicative of a new kind of uncertainty, the impact and management of which must be analyzed in the context of a world that has become increasingly globalized. this article examines the impact of sars on the international business environment and considers how managers can incorporate events such as sars into an ongoing risk-management framework. the discussion comprises four substantive sections. the first section provides a contextual background of the international business environment at the time of the sars outbreak. the second section provides a case-study discussion of the impact of sars on international business operations. drawing on this case study, the third section examines some strategic implications for firms seeking to cope with new types of uncertainty such as that created by sars. concluding thoughts are provided in the final section. there is little doubt that businesses and firms operate in an increasingly globalized and integrated environment. globalization manifests itself as an increase in cross-border movements of goods and services, capital and technology flows, tourist and business travel, and the migration of people (craig & douglas, 1999).
this integration has been made possible by declining trade and investment barriers, the growth of free trade agreements, and regional integration. another driver of globalization has been technological advancements in communications and transportation. the use of satellite links, company intranets, and the internet has improved communication networks and linkages across borders, thereby lowering the costs of coordinating and controlling a global organization. modern communication systems have also enabled the rapid dissemination of information, leading to some convergence of consumer tastes and preferences. in addition, developments in transportation have allowed the rapid supply of people, goods, and services from distant locations. globalization has provided significant opportunities for firms to reconfigure their supply chains and globalize their production processes, thereby reaping economies of scale and taking advantage of national differences in the cost and quality of factors of production. however, globalization also presents some very real challenges. the interconnectedness that is characteristic of globalization also means that local conditions are no longer the result of purely domestic influence (thomas, 2002) . indeed, crises in one country now have the ability to affect other countries around the world. this was evident from the 1997 asian financial crisis, september 11, and sars, where such crises had a "ripple effect" so that their direct and residual impacts were felt far from their epicenters (enderwick, 2001) . the international business environment has never been predictable or certain. however, the scale of investments in today's globalized world, coupled with rapid technological change, shortening product life cycles, and the increasing aggressiveness of competitors (volberda, 1996) , has increased the uncertainty and complexity of operating in such an environment. 
indeed, it has been stated that: globalisation and technology are sweeping away the market and industry structures that have historically defined the nature of competition. the variables that can profoundly influence success and failure are too numerous to count. that makes it impossible to predict, with any confidence, which markets a company will be serving or how its industry will be structured, even a few years hence. (bryan, 2002, p. 18) accordingly, unlike past decades that exhibited long, stable periods in which firms could achieve sustainable competitive advantage, competition is increasingly characterized by short periods of advantage, marked by frequent disruptions (volberda, 1996). in such hypercompetitive environments, risk is not so much predicted as it is responded to. accordingly, strategies that focus solely on efficiency and pay close attention to cost structures must now be reassessed in light of the inflexibilities they exude in a changing and uncertain environment. further, the exploitation of core competencies, once seen as a precondition to success, is now viewed as presenting the risk of core rigidities (volberda, 1996). accordingly, a higher premium is now being placed on considerations such as flexibility, responsiveness, and adaptiveness. one area in which this need for flexibility has recently been espoused is that of supply-chain management. until recently, companies focused on developing tightly controlled supply chains, with the emphasis on efficiencies in operations and distribution. while tightly controlled supply chains work well in stable environments with minimal disruptions, we are now experiencing an environment of increasing unpredictability, where disruptions are more common.
accordingly, the ability to respond to the resulting fluctuations in demand is paramount, and considerations such as flexibility and responsiveness are now considered as important as efficiency (morton, 2003) . globalization has also seen the emergence of a new type of risk, with a nature quite different to what was traditionally regarded as risk in international business. in the 1970s, 1980s, and even 1990s, risk was generally equated to financial, exchange rate, and inflationary risks and, in particular, "political risk," which was reflective of host-government hostility toward foreign investment during much of this time. political risk was country-specific and could be summed up as the likelihood that a multinational enterprise's foreign operations could be constrained by host-government policies through measures such as forced divestment, unwelcome regulation, and interference with operations. accordingly, risk management was also country-specific and involved assessing the riskiness of a particular country through a variety of predictive approaches. where a country was deemed too risky, the firm would avoid investing or withdraw its current investment. other risk-management devices also involved responding to risk that largely emanated from host governments. indeed, defensive political risk-management strategies involved locating crucial aspects of the company's operations beyond the reach of the host, while integrative strategies aimed to make the firm an integral part of the host society, thereby minimizing the risk of government intervention. however, as the world economy has become increasingly global, political risk, while still present, is arguably not as pressing as before. this is largely because of a change in attitude toward trade and investment, with most countries now encouraging foreign direct investment (fdi). 
indeed, between 1991 and 2003, more than 165 countries made 1,885 changes in legislation governing fdi, with 95 percent of these changes involving the liberalization of fdi regulations. this has also been supported by a dramatic increase in the number of bilateral investment treaties, as well as regional and global free-trade agreements (united nations conference on trade and development [unctad], 2003) . at the same time, we have witnessed the emergence of a new type of environmental business threat that has manifested itself in incidents such as global terrorism, sars, financial crises, and computer viruses, all of which have the ability to disrupt a firm's operations. enderwick (2003) describes such threats as being sudden, unexpected, and unpredictable, with the ability to spread quickly through global processes and forces, thus having a widespread impact but with a disproportionate impact on regions, sectors, and industries. clearly, risk is no longer country-specific, nor is it limited to threats from host-government actions. instead, it is global and systemic, and capable of being perpetrated by individuals or small groups. further, such threats do not simply affect a firm's operating conditions, but also its overall viability, as they can cause severe disruptions, threatening the very survival of the firm. accordingly, new strategies for managing this type of threat are required, and they cannot be avoided by simply deciding not to invest in a particular country, or by using strategies centered on host governments. however, while in the past risk was largely seen as negative, it should be noted that these environmental uncertainties provide both challenges as well as opportunities for those businesses that have the ability to respond quickly and effectively (enderwick, 2003) . it is useful to clarify the exact nature of a disruption such as sars in terms of risk and uncertainty. 
while these terms are often used interchangeably, they have distinct meanings (knight, 1921). according to knight's analysis, risk is the variation in potential outcomes to which an associated probability can be assigned. in statistical terms, while the distribution of the variable is known, the particular value that will be realized is not. uncertainty exists when there is no understanding of even the distribution of the variable. for decision making, uncertainty is a greater problem than risk. because probabilities can be attached to risk, options to mitigate risks through insurance or hedging are possible. because probability cannot be assigned to uncertainty, instruments to reduce uncertainty are not available. we suggest that sars (and similar recent environmental disruptions such as global terrorism, computer viruses, and avian bird flu) are uncertainties, not risks. these types of disruptions share a number of characteristics. first, they can be considered as "jolts" (meyer, 1982) that occur randomly. no one anticipated the emergence of sars, or any similar virus. because such events are not continuous or even regular, it is not possible to assign probabilities to them. second, the nature of these jolts is such that they evolve, changing their forms, and do not simply recur. for example, viruses such as sars and avian bird flu are capable of mutating and assuming different forms with differing impacts. in the case of avian bird flu, there have been recent reports of the first full case of human-to-human transmission, and a recurring fear is that it could mutate into a human pandemic with devastating effects. similarly, global terrorism assumes a variety of forms including car bombs, suicide bombers, aircraft as weapons of destruction, and chemical attacks.
this makes it difficult to use historical experience as a predictor of future occurrences and impacts. third, the impact of these uncertainties tends to be concentrated, either by sector or by geographical location. as the next section makes clear, the primary effects of sars were experienced in asia and disproportionately affected the transport, tourism, and medical industries. the impacts of natural disasters such as extreme weather events or financial and political problems appear to be more widely and randomly distributed. this is not to suggest that sars did not become a global issue; however, its global spread was clearly traceable to well-established patterns of personal and business contact. to understand the impact that sars had, it is useful to employ a "concentric band" framework (enderwick, 2001), which sees a crisis like sars as having a "ripple effect," as illustrated in figure 1. the band closest to the center represents the primary or immediate impacts of sars. moving outward, the next band represents secondary impacts that are likely to develop over the short to medium term, followed by those impacts that result from the various responses to sars. finally, the outermost band represents the longer-term issues that are likely to arise out of the sars crisis. the sectors that were immediately and significantly affected by sars were those that involved regular human contact (enderwick, 2003). accordingly, asian tourism and transport were hit especially hard. flights to asia were cancelled, with sars hot zones like singapore and hong kong suffering the most (lemonick & park, 2003).
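the knightian distinction drawn above, between risk (a known loss distribution) and uncertainty (no distribution at all), can be made concrete with a minimal numerical sketch. all figures below are illustrative assumptions, not data from the article: under risk an expected loss, and hence an insurance premium, is computable; under uncertainty only rival guesses exist, so no single premium can be justified.

```python
# Illustrative sketch of Knight's (1921) risk/uncertainty distinction.
# All numbers are hypothetical assumptions, not figures from the article.

loss_amount = 1_000_000  # loss if the disruptive event occurs
known_prob = 0.02        # RISK: the probability of the event is known

# Because a probability can be attached, an expected loss -- and hence an
# actuarially fair insurance premium -- can be computed.
fair_premium = known_prob * loss_amount

# UNCERTAINTY: the distribution itself is unknown. We can only enumerate
# rival guesses, none of which is privileged, so no premium is defensible.
rival_probs = [0.0001, 0.001, 0.01, 0.1]
rival_premiums = [p * loss_amount for p in rival_probs]

print(fair_premium)    # a single defensible number: 20000.0
print(rival_premiums)  # answers spanning four orders of magnitude
```

the point of the sketch is that hedging instruments price the first quantity; no instrument can price the second, which is why the article argues sars-like disruptions must be managed through flexibility rather than insured against.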
the hotel business in asia also suffered, plummeting 25% between february and march (lemonick & park, 2003), with hong kong five-star hotels at occupancy rates of 10% and singapore occupancies falling from a norm of 70% to 20% to 30%. as fewer tourists arrived and locals chose to stay home to avoid public places, stores and restaurants in singapore and hong kong were almost empty at peak hours (engardio, shari, weintraub, arnst, & roberts, 2003). sars also had a significant impact on medical facilities and staff. rapid increases in the number of cases quickly exposed inadequate surge capacities in hospitals and public health systems and a lack of protective gear, with the problem exacerbated by health workers falling victim to the disease (world health organization [who], 2003). in beijing, a shortage of bed space in hospitals meant suspected sars cases could not be hospitalized and quarantined quickly, contributing to the spread of the illness (hutzler, 2003). to reduce this heavy burden on existing hospitals, governments invested substantially in additional capacity (who, 2003). indeed, hong kong spent hk$400 million to create 1,280 hospital beds and a further hk$100 million to train medical staff (fowler & dolven, 2003). food industry. sars also led to secondary impacts in the food industry. food prices in asia plummeted as restaurants cut down on purchase orders, thereby affecting the region's farmers and fishing fleets (carmichael, mcginn, & theil, 2003). supermarket sales in key markets such as singapore, taiwan, japan, and china also fell due to a loss in consumer confidence (tso, 2003), although increased food preparation at home (and, in some cases, panic buying) had a positive impact on supermarkets. manufacturing.
there was widespread belief that a major disruption like sars could paralyze just-in-time supply chains by holding up production and the flow of goods and services between countries due to port closures, travel restrictions, and forced closures of manufacturing plants if employees got infected. despite such media hysteria, the impact on the manufacturing sector was not that pronounced. this was largely because asian companies took preemptive steps as soon as the epidemic became known and increased production in the anticipation that there could be a problem, building up their buffer levels in inventory and safety stock. the result was very few plant shutdowns in the far east (morton, 2003) . investment. investment in asia was also affected, as international firms postponed plans to begin or expand operations in asia. real estate sales fell drastically as buyers refused to travel to hong kong or china to look at building sites (bodamer, 2003) . similarly, the cancellation of trade fairs affected manufacturers, particularly in china, who rely on such fairs to sell their goods (ben-ami, 2003) . the capital markets did not emerge unscathed, and it is estimated that overall fundraising in asia fell 10-20% in 2003 due to sars (hamlin, smith, meyer, kirk, & horn, 2003) . stock prices of those companies with extensive operations in asia also fell (bolger, 2003) . unemployment. given that the tourism and hospitality industries that were hit hardest by sars were labor-intensive, there was also a corresponding rise in unemployment in sars-affected countries, mainly concentrated in these industries. in the worst-hit countries like china, hong kong, singapore, taiwan, and vietnam, the tourist industry faced losses of 30% of travel and tourism employment. the global impact of sars was also expected to bring a 15% loss in the tourism workforce in indonesia and oceania, and 5% in the rest of the world (bita, 2003) . economy and growth. 
the impact of sars on regional economies and projected economic growth was also substantial. indeed, economists estimate that china and south korea have each suffered $2 billion in losses in tourism, retail sales, and productivity due to sars, with japan, hong kong, taiwan, and singapore estimated to lose approximately $1 billion each. toronto, severely affected by sars, was losing $30 million a day at the height of the crisis. in terms of the global cost of sars, this is estimated to have reached $30 billion. positive effects. while sars negatively affected a number of industries, others were able to capitalize on the opportunities it provided. the outbreak of sars saw a worldwide surge in demand for facemasks, given that sars is largely transmitted by coughing and sneezing. this saw demand outstripping supply, forcing large manufacturers like 3m to switch to 24-hour production (hopkins, 2003). video conferencing was another industry to benefit, as asian employers sent their workers home and cancelled overseas conferences, meetings, and visits. while no industrywide traffic figures are available, many video-conferencing services reported spikes in usage in asia since the sars epidemic began. indeed, intercall, a hong kong teleconferencing company, doubled its business in march and april 2003 and saw a 200% increase in users signing up for the service in hong kong in april and a 30% rise in new customers worldwide (flynn, 2003). individual firms. in response to the sars crisis, businesses undertook a number of measures to minimize the impacts of sars. business travel bans to sars-affected areas were a common risk-management device, as were temporary quarantine measures for those who had recently traveled to such areas (e.g., working from home, segregating them from other employees).
less common was the repatriation of employees, and according to one survey, less than 7% of firms had brought employees home from sars-affected regions or placed them in another country (minihan, 2003). many firms implemented business continuity plans, with some firms setting up operations at parallel sites or shifting operations altogether to other office complexes (hamlin et al., 2003). information technology played a major role in all continuity plans, with firms issuing notebooks capable of accessing the firm's intranet so employees could work from home and employing technology such as video conferencing to ensure business continued as usual (lim, 2003). government spending. in response to sars, governments in asia also took action and increased government spending on sars-affected industries. in china, the central government launched the largest-ever tax relief package to help the aviation, tourism, and retail sectors recover from the sars epidemic, estimated to cost several billion yuan (pun, 2003). the hong kong government similarly offered a $1.5 billion relief package for local businesses (carmichael et al., 2003) and has invested $80 million to revitalize the tourism industry, with part of this money to be spent on a worldwide campaign to reassure visitors (coulter, 2003). domestic measures. the outbreak of sars prompted governments to take decisive and often drastic action to curb the spread of sars. the singapore government authorized measures such as closure of schools and universities, temperature checks twice daily (at home and in the workplace), home quarantine for those exposed to sars, and triage centers at the entrances of hospitals to identify and separate sars patients (yew, 2003). in taiwan, remote video monitors were installed in quarantine households to ensure against any quarantine violations (chinese government information office, 2003).
in china, far more draconian measures were taken, arguably to compensate for the government's previous lack of responsiveness and reluctance to report the seriousness of the sars crisis in china. public entertainment spots in beijing were closed down, as were public schools and universities (kaufman & chen, 2003). stricter border control. governments also responded to the rapid spread of sars by implementing stricter border control measures and the collection of detailed health and contact information. at one extreme, several countries, such as taiwan, banned individuals who had traveled to sars-affected regions (kaufman & chen, 2003). other strict measures included requirements that travelers from sars-affected areas wear facemasks for two weeks after arrival and special powers of quarantine. greater international cooperation. in recognition of the fact that sars is a global problem, governments have also been more willing to cooperate to prevent its further spread. one such example was the "special asean-china leaders meeting on sars" held on april 29, 2003, in bangkok, where ten association of southeast asian nations leaders and the chinese premier held crisis talks on how to fight the virus. this was consistent with the way the international community rallied together to understand and treat the sars virus. indeed, the global response was unprecedented and saw 11 laboratories around the world that were previously strong competitors sharing information freely. importance of the state. as illustrated by the response-generated impacts, it is clear that the role of the state in international business is still important, as the sars crisis saw the state take on a major crisis-management role. governments were responsible for mobilizing resources such as hospitals and other medical facilities, as well as coordinating public health care.
quarantine measures also had to be introduced, monitored, and enforced, coupled with surveillance capacity to monitor and report quickly on disease outbreaks and their progress. finally, governments were responsible for handling the economic slowdown caused by sars and providing assistance to severely affected industry sectors through increased public spending. these tasks are not amenable to market forces and highlight the unlikelihood of globalization leading to the elimination of the nation-state (enderwick, 2003) . open and transparent government. connected to the idea of the growing importance of the state is the need for open and transparent government. the reasons for this are twofold. first, the need for transparency is paramount if a crisis is to be contained. china's initial understating of the number of confirmed cases, refusal to give daily reports, and blocking of who specialists from visiting guangdong (the origin of sars) allowed sars to spread rapidly (lavella, 2003) . it was only after china began reporting the true seriousness of the situation and allowed who officials to investigate that the sars crisis slowly came under control. in contrast, vietnam was able to contain the virus relatively quickly through prompt and open reporting, the early request for who assistance, and rapid case detection, isolation, infection control, and vigorous contact tracing (who, 2003) . accordingly, any attempt to conceal a crisis such as sars for fear of the social and economic consequences can only be regarded as a short-term measure that ultimately risks the situation spiraling out of control (who, 2003) . second, it is in the interests of government to be open and transparent, as in today's turbulent and unpredictable environment, investors are now placing a premium on governments that can be trusted. 
indeed, china's behavior during the sars crisis has resulted in a loss of credibility in the international community (who, 2003) and created fears among foreign investors about doing business in china. fear. another long-term issue is how to handle the fear and panic that accompanied sars, given it was fear that spread faster and had a greater impact than the disease itself. indeed, as far as infectious diseases go, sars is relatively mild; it is harder to catch than the flu, with a fatality rate of only 6% to 8%. despite this, sars had a devastating effect on the tourism industry, as people became unwilling to fly to sars-affected regions. business was also affected, as foreigners cancelled conferences and meetings in countries in asia, such as south korea, that had not reported a single sars case. this was largely because asia is viewed as "one place" (hamlin et al., 2003) , and therefore the crisis in one part of asia was extrapolated to the whole. personal behavior. sars is also likely to have a longer-term impact with regard to personal behavior and cultures. in singapore, through massive public-education efforts to promote public-health practices, there has been a noticeable change in personal habits. people wash their hands more, and public and restaurant toilets are much cleaner. similarly, people are using serving spoons for shared dishes when eating, and sick people are more likely to see a doctor when they become ill rather than do nothing (borsuk & prystay, 2003) . sars could also see a change in the way that business is done in asia. as mentioned earlier, many businesses turned to video conferencing at the height of the outbreak. video conferencing was found to be an effective tool to maintain communication during the crisis-and without the associated travel/hotel costs and jet lag (hamlin et al., 2003) . 
however, tensions remain in that asian businesspeople place a high value on personal contact and prefer to meet clients and customers face-to-face (minihan, 2003). the reality of globalization-the need for openness and trust. the sars crisis has illustrated the consequences of living in an interconnected world and has further clarified the nature of globalization. technology, such as e-mail, instant telephone communication, and the internet, has united people and increased enormously the number of contacts that people have. these contacts are eventually pursued through personal visits or through business meetings, conferences, and plant tours. further, advancements in air travel mean any place in the world is accessible within 24 hours, and coupled with the movement of commerce, this has brought china and other developing nations out of relative isolation (engardio et al., 2003). the result has been a global network within which an infectious disease like sars can spread, and while diseases in the past have taken weeks or months to spread, sars was literally transmitted within days, setting a record for the speed of continent-to-continent transmission (borenstein, 2003). accordingly, while globalization has provided the world with many benefits, it also brings risks, and increased connectedness also means that threats have a greater global impact. this implies that countries must understand that they can no longer insulate themselves from threats such as sars given the open borders of a globalized world, and there must be an increasing recognition that crises like sars are not simply a regional problem, but a global one. the case study illustrates that sars represents a new kind of threat and has implications for the way uncertainty is managed in the future.
risk-management strategies that were largely country-focused are no longer adequate in themselves, given that this new type of threat is global and systemic. the high levels of uncertainty associated with events such as sars should nonetheless be incorporated into decision making. a lack of precise knowledge does not preclude decision makers from further information gathering or from making decisions about likely probabilities of events occurring. as has been recognized, traditional strategic management approaches encourage perceptions of uncertainty in a binary fashion (courtney, kirkland, & viguerie, 1997). the world is seen either as sufficiently certain that precise, and usually single, predictions of the future can be made, or that uncertainty renders such an approach totally ineffective. in the latter case, there may be a temptation to abandon analytical approaches and to rely wholly on gut instinct. courtney et al. (1997) argue that in many cases uncertainty can be significantly reduced through careful search for additional information; in effect, much that is unknown can be made knowable. the uncertainty that remains after the most thorough analysis they term residual uncertainty. there are a number of approaches that offer insights into how to manage uncertainty. the simplest approach is to ignore it. this can be done by developing a "most likely prediction" often based on "expert input" or by assigning a margin of error to key variables. each of these approaches yields a single unequivocal strategic option by either ignoring uncertainty or assigning it a probability. neither approach is satisfactory. ignoring an uncertain environmental event is clearly dangerous. assigning probabilities to unique events is invalid. even subjective probability derived from expert analysis is untestable and arbitrary. miller (1992) highlights a useful distinction between financial risk-management and firm-strategy approaches to managing environmental uncertainties.
financial risk-management techniques such as insurance and futures contracts reduce the firm's exposure to specific risks without changing the underlying strategy. but, as noted earlier, such techniques only apply to risks, not uncertainties. in the case of an event such as sars, strategic responses, which attempt to mitigate the firm's exposure to uncertainties, are likely to be more useful. miller (1992) identifies five generic strategic responses to environmental uncertainties: avoidance, control, cooperation, imitation, and flexibility. avoiding an event such as sars through divestment, delayed entry, or a focus on low-uncertainty markets is difficult. the irregular occurrence and variable impact of such events are unlikely to justify divestment. similarly, their unpredictable and evolving nature makes postponement or niching very difficult. uncertainty control strategies based on political lobbying, vertical integration, or enhanced market power are not an effective counter to sars. in the same way, a cooperative strategy, which deals primarily with behavioral risk, is not likely to be effective; neither is an imitative strategy, which addresses competitive rivalry. of more value is the management of uncertainty through organizational flexibility. flexibility focuses on the ability of the organization to respond and adapt to significant environmental changes. high levels of flexibility imply lower costs of organizational adaptation to uncertainty. in contrast to approaches that try to increase the predictability of uncertain events, flexibility strategies emphasize internal responsiveness, irrespective of the predictability of contingencies. a widely used strategy for increasing flexibility is diversification, whether of products, markets, or sources of supply. with regard to sars, the key strategic responses are likely to occur in the areas of supply-chain management, diversification, scenario planning, and ensuring business continuity.
we consider these in more detail. the need for flexibility and responsiveness is no more evident than in the area of supply-chain management. while the manufacturing sector did not suffer severe disruptions given the relatively quick manner in which sars was contained, had the crisis persisted and impeded the flow of goods and services and/or caused plant shutdowns, major disruptions to manufacturing and distribution would have occurred. indeed, potential disruptions quickly became apparent as firms contemplated the possible effects of travel bans. firms recognized that problems could arise if a factory needed repair help to continue manufacturing but engineers could not be sent due to travel bans (wonacott, chang, & dolven, 2003). in combination, these issues highlight the need for flexible supply chains that can respond quickly to changes in demand and cope with major disruptions. to develop this responsiveness, firms can do a number of things. first, in handling a crisis like sars, every moment of delay is critical, and the earlier you can get the supply-chain network to respond, the easier it is to manage (mcclenahen, 2003). accordingly, to ensure prompt action, firms must ensure quicker access to and action on information, preferably at the source, that may provide timely warnings. this certainly reiterates the importance of management basics such as environmental scanning and monitoring, and the need for it to be an ongoing activity. however, such environmental scanning will no longer simply involve monitoring the local political environment, as often happened in the past. instead, it will need to encompass the larger regional and global environment.
further, while host-country managers previously played a vital role in conveying information about the political environment back to higher management, what will become increasingly important is the ability to channel this information to the firm's affiliates in other parts of the world and to share any lessons learned from the crisis so that these affiliates may benefit from them. this further reinforces the value of establishing an integrated global network and facilitating intracompany learning. the need to be responsive also has implications when choosing manufacturing locations. china's initial unresponsive and surreptitious approach to the sars crisis illustrates that while cost of production and a low-cost labor force have been, and will still remain, dominant considerations in the investment decision, stability, reliability, and predictability are likely to be given a higher premium. given the unexpected and sudden nature of threats such as sars, management is also likely to add to its investment criteria how well various parts of the world are equipped to deal with crises (mcclenahen, 2003). firms may also opt to switch from large production sites in a single location like china to smaller, but multiple facilities around the world, thereby creating a global network of manufacturing facilities. this allows increased flexibility so that if disruptions to manufacturing or the supply chain occur in one country, the firm has the ability to vary plant loadings and increase production elsewhere (maccormack, newman, & rosenfield, 1994). such a manufacturing network will considerably increase the complexity of coordinating the global supply chain. however, this may simply be a necessary trade-off for firms wishing to balance cost-efficiency and responsiveness in managing their global supply network.
alternatively, firms may find that establishing their own manufacturing operations is too risky an investment and may instead choose to outsource, thereby continuing a trend that has been taking place over the last ten to fifteen years. however, sars has highlighted the value of diversifying the supply base and sourcing from multiple locations (mcclenahen, 2003) , thereby reducing a firm's dependence on a single supply location (maccormack et al., 1994) . indeed, outsourcing offers the flexibility to switch sourcing to another country if a crisis like sars should disrupt supply-chain operations in a particular country. accordingly, many global firms are considering back-up suppliers outside of asia, with latin america and eastern europe likely locations. the way in which outsourcing is conducted may also change as events such as sars have increased the reluctance and inability to travel. indeed, rather than establishing their own network of suppliers, firms may increasingly turn to third-party logistics providers such as bchinab, who have offices on u.s. soil but manage a sprawling network of 1,500 factories, tool and die shops, materials suppliers, and plastic molders in china. by using such a provider to manage their logistics operations abroad, firms can reduce the requirement of traveling overseas to negotiate price quotations and samples and having to deal with chinese manufacturers directly (marshall, 2003) . the move toward responsiveness may also necessitate less of a focus on cost-efficiency and a loosening of the tight control that is currently held over supply chains. after sars, firms may have to reexamine their supply chains to identify potential problems and bottlenecks and allow for enough slack to accommodate delays and potential problems that can arise. such readjustments may include keeping buffer inventory and safety stock to hedge against uncertainties. 
while such measures incur costs, the costs of disruptions to an unresponsive supply chain may prove more severe-these being extended lead times, lost service contracts, and higher emergency logistics costs. another lesson from the sars crisis may be to illustrate the risks of having too focused a corporate strategy and the potential benefits of diversification. in the same way that financiers diversify their investment portfolios to decrease variability in their rate of return, a portfolio approach to corporate strategy ensures that even if some of the firm's corporate initiatives fail, the success of other initiatives achieves an overall favorable outcome for the firm (bryan, 2002) . this is especially so where the impacts of events like sars and terrorism are disproportionately borne by certain sectors or locations (enderwick, 2003) . accordingly, corporate strategies may now require this "portfolio approach" so that a firm is not overly focused on one sector or location. for example, the sars crisis, coupled with a more global world market, is likely to see exporters increase diversification in both products and geographical markets. on a larger scale, economies may also look to become more diversified, as sars revealed that many asian countries were heavily dependent on the services sector (shanmugaratnam, 2003) . for businesses, related diversification appears to be superior to unrelated diversification (rumelt, 1974) . as noted earlier, the nature of environmental threats is changing and is increasingly difficult to anticipate. indeed, sars illustrated the difficulty of trying to predict where the next threat will come from and has called into question traditional linear planning and forecasting. such planning techniques work on the assumption that the environment in the future will be very much like today, and that extrapolation is meaningful. 
however, in today's turbulent and disruptive environment, this assumption is no longer valid and what are needed are plans that are flexible enough to adapt to the circumstances (pritchard, 2003). accordingly, the sars crisis is likely to accelerate the current trend toward the adoption of scenario planning. rather than forecasting a specific future or "most likely outcome," scenario planning builds on existing knowledge to develop several plausible future scenarios and then necessitates constructing robust strategies that will provide competitive advantage no matter what specific events unfold. as such, it encourages firms to think about "worst-case" scenarios, which may include technological, economic, political, or environmental calamities. schnaars (1986) discusses a number of different approaches that can be adopted when designing strategy for multiple scenarios. accordingly, scenario planning forces firms to pay closer attention to internal, external, and the broader global environmental factors that may influence the firm's future. this process challenges firms to avoid complacency in their strategy formulation and encourages managers to think more broadly and unconventionally and view events with a new perspective (lohr, 2003), an essential requirement in trying to prepare for unknowable shocks and crises (kennedy, perrottet, & thomas, 2003). arguably, sars will "shake things up" and encourage strategists to further consider more diverse and unexpected scenarios, as prior to sars, many strategic-planning scenarios had not been done with a disease in mind (hamlin et al., 2003). the federal emergency management agency estimated that the costs of disasters are 15 times greater than the costs of preparing for them (read, 2003).
indeed, events such as september 11 and sars illustrated the value of business continuity planning, which essentially involves strategies, services, and technologies that enable firms to prevent, cope with, and recover from disasters, while at the same time ensuring the continued running of the business (read, 2003). while the sars crisis certainly reinforced the need for such planning, it also provided implications for the content of such continuity plans in the future. while some continuity plans actioned during the sars crisis saw the establishment of parallel operations or the shifting of work to safer regions/locations so as to create back-up locations (minihan, 2003), it is apparent that cost and time factors can militate against this being a feasible option for many firms (read, 2003). however, what sars did demonstrate, and what has been suggested by business continuity writers, is that technology may be the key and that "telecommuting" or "teleworking" should be a part of any company's business continuity plan (jimenez, 2003). accordingly, firms need to ensure they have the technological infrastructure to support the ability to work from home or from remote locations, in case an event like sars forces offices to close. at a basic level, this requires employees to have access to the firm's information/data or intranet from home, and such access must be secure. to reduce reliance on a single data source, firms are also beginning to employ "network storage" or "data mirroring" technologies so that key transactional data is copied in almost real time to other locations, thus creating a back-up (newing, 2003). the sars crisis also highlighted the usefulness of video-conferencing and teleconferencing technology, particularly given that higher bandwidth speed now makes such conferencing a more viable option.
while travel bans and the reluctance to travel persisted, conferencing technology allowed continued contact with clients and overseas partners, and allowed important meetings to take place. while such technologies may not immediately become the industry norm, given that personal contact in asian countries is highly valued, their introduction as a riskmanagement device may secure their gradual acceptance as their longterm benefits become more obvious. indeed, anecdotal evidence suggests that firms who invested in such technology during the sars crisis will continue to use this technology in the future (neuman, 2003) . while technology is important, it alone may not be sufficient, and the human element in continuity plans is also important. key workers must be identified and must have access to the right it equipment and training to enable them to carry on working if the office has to be shut down. key staff should also be spread among different sites, as one organization learned the hard way, when it lost its entire it recovery team located in the world trade center during september 11 (newing, 2003) . finally, firms must also realize that telecommuting has a human element, as workers stuck at home often experience feelings of isolation, anxiety, and depression (wayne, 2003) . accordingly, planning must incorporate how such psychological problems can be addressed. as mentioned in the case study, the impact from the fear of sars was greater than sars itself, and has implications for how a crisis such as sars is managed in the future. what is glaringly obvious is the need for full disclosure of information, given that the panic about sars was fueled when information was concealed or only partially disclosed, leading to rumors and exaggeration (who, 2003) . employees need facts from, and questions answered by, reliable and credible sources. 
responses that have proved effective include establishing 24-hour hotlines to communicate with staff and directing staff to other information sites, such as the who web site (aldred, 2003). the sars crisis occurred against the backdrop of a highly interconnected and integrated world economy and has established itself as a new kind of global threat, along with other unpredictable events such as the asian financial crisis and global terrorism. rather than having a localized impact, the impact of sars has been far-reaching, even if this was largely from the fear of the virus rather than the virus itself. in table 1, we summarize some of the key differences between the traditional and new forms of risk. for governments, the message is clear-even in a world without borders, the state will still have a role, given that unsupported market processes are insufficient by themselves to solve the problems created by sars. however, with this responsibility comes the requirement that governments act in an open and transparent manner, something that is arguably a precondition for the effective handling of a crisis such as sars. global phenomena such as sars also emphasize the need for a collective response and more openness and cooperation among nations. for businesses, the ability of sars to significantly disrupt international business and the speed with which the disease was transmitted suggests that the nature of this new kind of event is global and systemic, and accordingly warrants a broad and encompassing risk-management approach. the implication is that firms must put a higher premium on strategies that emphasize flexibility and responsiveness. indeed, firms will find value in increasing diversification, whether this is in sourcing or in corporate strategy.
planning must also become less linear and more contingent-based, and in considering a range of possible future scenarios, firms will be in a better position to handle disruptions that increasingly cannot be predicted. technology also appears to offer a possible solution as a risk-management device, and we are likely to see technologies such as video conferencing become a commonplace feature in offices of the future. further research on the role that strategies, structures, and resources play in anticipating, responding, and adjusting to environmental disruptions is necessary (meyer, 1982). in sum, while an event like sars produces considerable challenges, it also offers insights into how firms can better equip themselves to manage within an increasingly turbulent and unpredictable environment.
references:
virus challenges efficacy of risk management plans
the cost of sars
virus threat to 227,000 tourism jobs. the australian
sars fears continue to weigh on investors. the times
sars is the latest in explosion of new infectious diseases
how singapore beat the virus-and still awaits it
the mckinsey quarterly: 2002 special edition-risk and resilience
chinese taipei to brief on its measure in response to the sars epidemic
hong kong spends pounds 80m on recovery. travel trade gazette
strategy under uncertainty
configural advantage in global markets
terrorism and the international business environment. aib newsletter, fourth quarter
responding to new environmental uncertainties: terrorism and sars in the global business environment
deadly virus-the economic toll: delayed deliveries, closed factories, and the spectre of recession
teleconference business up as sars fuels demand
our future with sars
sars test. institutional investor
sales of masks, air filters soar as sars spreads; manufacturers get busier as fliers try to protect themselves
the outbreak: the virus factories of southern china
the sars outbreak: beijing confronts crucial test in its struggle against sars-mayor cites big shortages of beds
hong kong cases relapse
technology key factor in business continuity
the sars outbreak: beijing imposes new sars curbs; city's theatres and cafes are shut down as china lists another 161 cases
scenario planning after 9/11: managing the impact of a catastrophic event
risk, uncertainty and profit
is the panic worse than the disease? the truth about sars. time
technology keeps asia running despite sars
"scenario planning" explores the many routes chaos could take for business in these very uncertain days
the new dynamics of global manufacturing site location
many unhappy gains. crain's
get ready for the next sars
adapting to environmental jolts
a framework for integrated risk management in international business
work goes on despite dangers
asia, sars and the supply chain
sars leaves a silver lining: companies save by videoconferencing
flexible models to face up to the unexpected: disaster recovery: business continuity has to fight it out with other areas for funding. financial times
strategy, structure and economic performance
how to develop business strategies from multiple scenarios
dealing with sars-rebuilding confidence and taking opportunities
essentials of international management: a cross-cultural perspective
seafood prices affected by sars
world investment report 2003: fdi policies for development. national and international perspectives. united nations
toward the flexible form: how to remain vital in hypercompetitive environments
executives in singapore chafe at sars-related travel bans
china's handling of sars virus concerns investors-new leadership's image suffers amid signs beijing failed in crisis management

key: cord-273175-bao8xxe2 authors: tran, viet-thi; ravaud, philippe title: covid-19–related perceptions, context and attitudes of adults with chronic conditions: results from a cross-sectional survey nested in the compare e-cohort date: 2020-08-06 journal: plos one doi: 10.1371/journal.pone.0237296 sha: doc_id: 273175 cord_uid: bao8xxe2 background: to avoid a surge of demand on the healthcare system due to the covid-19 pandemic, we must reduce transmission to individuals with chronic conditions who are at risk of severe illness with covid-19. we aimed at understanding the perceptions, context and attitudes of individuals with chronic conditions during the covid-19 pandemic to clarify their potential risk of infection. methods: a cross-sectional survey was nested in compare, an e-cohort of adults with chronic conditions, in france. it assessed participants' perception of their risk of severe illness with covid-19; their context (i.e., work, household, contacts with external people); and their attitudes in situations involving frequent or occasional contacts with symptomatic or asymptomatic people. data were collected from march 23 to april 2, 2020, during the lockdown in france. analyses were weighted to represent the demographic characteristics of french patients with chronic conditions. the subgroup of participants at high risk according to the recommendations of the french high council for public health was examined. results: among the 7169 recruited participants, 63% of patients felt at risk of severe illness.
about one quarter (23.7%) were at risk of infection because they worked outside home, had a household member working outside home or had regular visits from external contacts. less than 20% of participants refused contact with symptomatic people and <20% used masks when in contact with asymptomatic people. among patients considered at high risk according to the recommendations of the french high council for public health, 20% did not feel at risk, which led to incautious attitudes. conclusion: individuals with chronic conditions have distorted perceptions of their risk of severe illness with covid-19. in addition, they are exposed to covid-19 due to their context or attitudes. the novel coronavirus disease 2019 (covid-19) pandemic threatens to saturate healthcare systems all around the world [1]. as of early june 2020, 6,416,828 cases were confirmed in 213 countries, with 382,867 deaths [2]. in france, 152,444 cases were confirmed, with 29,065 deaths [3]. severe acute respiratory distress develops in about 16% to 26% of patients hospitalized with covid-19, thus requiring oxygen supplementation and/or intensive care [4]. as the number of cases grows worldwide, in order to avoid a surge of demand on the healthcare system and shortages of equipment such as ventilators needed to care for critically ill patients [5] [6] [7], many countries have imposed quarantine and recommended physical distancing to reduce transmission to people likely to have a severe illness (i.e., older patients and those with chronic comorbidities). those individuals with chronic comorbidities should also, in return, avoid contacts and/or use appropriate measures to prevent potential infection. yet, in france and around the world, specific advice for individuals with chronic conditions and their households is scarce, with most information intended for the general public.
for example, information from the european centre for disease prevention and control refers only to "people with chronic diseases" without specifying particular groups of individuals. this was confirmed by a recent study showing that adults with comorbid conditions lacked critical knowledge about covid-19 [8]. in this study, we aimed to understand the perceptions, context and attitudes toward covid-19 of individuals with chronic conditions in order to clarify their potential risk of infection. this study was a cross-sectional survey nested in compare, a nationwide e-cohort of patients with chronic conditions in france [9]. participants were adults with chronic conditions recruited from the community of patients for research (compare, http://compare.aphp.fr), a nationwide e-cohort of patients with chronic conditions in france. participants of compare are adults (>18 years old) who reported having at least one chronic condition (defined as a condition requiring healthcare for at least 6 months) and who joined the project to donate time to accelerate research on their conditions by answering regular patient-reported outcome and experience measures online [9]. all participants provide electronic informed consent before participating in the e-cohort. compare was approved by the comité de protection des personnes ile de france 1 (irb: 0008367). all methods were performed in accordance with the relevant guidelines and regulations. data from this study were collected between march 23 and april 2, 2020 at the peak of the french epidemic. during that time, 27 475 new cases of covid-19 were confirmed, with a total of 56 261 cases on april 2, 2020. this time period includes the maximum number of daily cases in france (april 1, 2020) [10]. since march 17, france had been under lockdown (movement restrictions and closure of non-essential businesses), and people with chronic conditions were encouraged to stay at home [11].
during this time, knowledge of covid-19 was still limited and information for the public was imprecise. for example, information available on the website of the french ministry of health referred to "people at risk", mixing older people and patients with chronic conditions [12]. of note, at the time of the study, benefits of using face masks were debated in france and in europe. participants' demographic and clinical data were collected as part of their participation in the compare e-cohort. all variables are updated yearly. conditions and medications are self-reported by patients by using the international classification of primary care-version 2 [13] and the thesorimed database of medications (french database of medications developed by the national health insurance) [14]. in addition, participants answered a dedicated survey designed by vtt and pr by use of the literature and their own expertise [8]. it was then face-validated by two other researchers (ip and cr) with expertise in questionnaire development before dissemination. the questionnaire was not tested with patients; however, the first respondents provided comments in a dedicated open-ended question at the end of the questionnaire, which led to minor reformulations. final survey questions are available in s1 and s2 data. this survey covered 3 topics. • for perception of risk of severe illness with covid-19, we asked participants whether they felt at high risk of severe illness with covid-19 with the question: "do you feel at increased risk of severe illness with covid-19 as compared to people of the same age as you but without chronic disease?" (yes/no). • for their context, participants described their activity (e.g., whether they continued working outside of the home); their household (i.e., whether any member of their household worked outside of the home and were in contact with the public); and their recent physical visits to healthcare professionals.
• for their attitudes to prevent infection, participants were presented with four theoretical situations involving different types of contacts: frequent (e.g., a family member visiting frequently, child care, etc.) or occasional (e.g., during shopping), and differing in whether these contacts showed symptoms or not. in each situation, participants reported whether they would refuse contact, enact physical distancing or wear personal protective equipment (mask, gloves, etc.). results of the survey were described globally and for the subgroup of patients considered at high risk of a severe illness according to the french high council for public health (box 1). these patients were those with a severe cardiac or vascular disease (high blood pressure with complications, history of stroke or ischemic heart disease, cardiac surgery, heart failure), insulin-dependent diabetes, chronic lung disease or lung disease likely to be exacerbated by a viral infection, chronic kidney disease under dialysis, cancer under treatment, immunodeficiency (due to a drug [cancer chemotherapy, immunosuppressive medications, biotherapy and/or corticosteroids], an uncontrolled hiv infection, transplantation, or cancer), liver cirrhosis, severe obesity (body mass index [bmi] >40 kg/m²), or pregnancy in the third trimester [15]. to operationalize these criteria with the data available in compare, one physician (vtt) matched the conditions and treatments reported by patients in compare with the list of high-risk conditions and treatments presented above. cancer chemotherapy, immunosuppressive medications, biotherapy and corticosteroids were those classified as such in manufacturers' prescribing information, using the vidal dictionary (https://www.vidal.fr/classifications/vidal/). descriptive statistics (mean with sd and frequency with percentage) were calculated for all patient characteristics and survey responses.
associations between participant characteristics and responses to the survey items were then examined in bivariate analyses by chi-square or t test, as appropriate. in addition, we fitted two logistic regressions aimed at exploring the association between participants' characteristics and 1) their perception of their risk for severe infection and 2) their attitudes to prevent infection during occasional contacts with asymptomatic people. variables included in the models were sex, age (as a continuous variable), household with >1 person (including the patient), low educational level, smoking status (current smoker vs. others), treatment considered at risk according to the french high council for public health, bmi ≥40 kg/m², high blood pressure, diabetes (under insulin treatment or not), history of stroke or cardiac ischemic disease, heart failure (any new york heart association stage), asthma, chronic obstructive pulmonary disease, thyroid disease, chronic kidney failure (under dialysis or not), cancer (under treatment or not) and osteoarthritis. analyses were performed on complete cases only. p < 0.05 was considered statistically significant. no corrections for multiple testing were performed. analyses involved using a weighted dataset obtained by calibration on margins, with weights for age categories (<24, 25-34, 35-44, 45-54, 55-64, 65-74, >75 years), sex and educational level (low, middle school or equivalent, high school or equivalent, associate's degree, higher education). weights were derived from national census data describing the french population reporting chronic conditions [16, 17]. analyses involved use of r v3.6.1 (http://www.r-project.org, the r foundation for statistical computing, vienna, austria). between march 23 and april 2, 2020, we invited 18,651 patients from compare to complete our survey and 7169 (38.4%) answered (s1 fig). participants were mostly female (5616 [78.3%]) with mean (sd) age 46.1 (14.7) years.
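as a toy illustration of the kind of weighted logistic regression described above (the actual analyses were run in r on the compare data), the single-binary-predictor case can be sketched in a few lines; the predictor ("high-risk condition"), outcome ("feels at risk") and coefficients below are made-up assumptions, not compare data. with one binary covariate and a saturated model, the weighted logistic-regression slope reduces to the log of the weighted odds ratio from the 2x2 table:

```python
import math
import random

random.seed(0)

# synthetic data: a hypothetical binary predictor ("high-risk condition")
# and a binary outcome ("feels at risk"); true coefficients are assumptions
n = 4000
x = [random.randint(0, 1) for _ in range(n)]
y = [1 if random.random() < 1 / (1 + math.exp(-(-0.5 + 1.2 * xi))) else 0
     for xi in x]
w = [1.0] * n  # calibration weights (uniform here, for the demo)

def cell(xv, yv):
    """weighted count of observations with x == xv and y == yv."""
    return sum(wi for xi, yi, wi in zip(x, y, w) if xi == xv and yi == yv)

# weighted odds ratio from the 2x2 table; its log equals the
# logistic-regression coefficient of the binary predictor
odds_ratio = (cell(1, 1) * cell(0, 0)) / (cell(1, 0) * cell(0, 1))
log_or = math.log(odds_ratio)  # estimates the assumed true coefficient 1.2
```

with non-uniform calibration weights, the same weighted counts yield the weighted estimate, which is the reason the paper's odds ratios can be reported "accounting for weights obtained after calibration on margins".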
in the non-weighted data, the diseases most frequently reported were high blood pressure (11.6%), diabetes (7.1%), asthma (6.2%) and cancer (5.2%); 3684 (51.4%) participants reported ≥2 chronic conditions. differences between respondents and non-respondents are shown in s1 table. in the weighted sample, 39.4% were at high risk for a severe illness according to the french recommendations: 33.0% because of their conditions, 8.8% because of their treatments, 1.9% with bmi >40 kg/m², and 0.5% in their third trimester of pregnancy. patients' characteristics before and after weighting are presented in table 1. in the weighted sample, 63% of participants felt at risk of severe illness with covid-19, of whom 51% (32% of the whole sample) reported a high-risk situation according to the french high council for public health. conversely, 37% of participants did not feel at risk of severe covid-19, of whom 20% (7.4% of the whole sample) reported a high-risk situation according to the french high council for public health (fig 1). patients' characteristics associated with a perceived risk of severe covid-19 identified in the logistic regressions are presented in table 2. in total, 7041 (98%) participants answered the survey section regarding their risk of infection due to their context. risk of infection involved working outside of the home (8.8% of participants, of whom 29% were care professionals), visits to health facilities for a consultation or test (54.7% of participants) or to a pharmacy (82% of participants); their household (74.9% of participants lived with at least one other person, of whom 18% worked outside of the home and 13% were children <15 years old); or regular contacts with people outside of their home (e.g., family, friends, housekeeping, child care, etc.) (5% of participants). in all, 23.7% were exposed to some risk because of their work, their household members working outside of the home, or regular visits from external contacts.
among patients at high risk of a severe illness according to the french high council for public health, 5% continued working, 15% had a household member working outside of the home and 7% reported regular contacts with people outside of their home. in all, 21.1% were exposed to some risk because of their work, their household members working outside of the home, or regular visits from external contacts. in total, 6940 (97%) participants answered the survey section regarding their attitudes to prevent infections. independent of the type of contact, participants reported that they would enact physical distancing under all situations presented to them. about one quarter of patients would refuse any contact with symptomatic people (17.8% and 23.4% for occasional and frequent contacts, respectively). concerning the use of personal protective equipment, use of masks ranged from 19% (for occasional contacts with asymptomatic people) to 65% (for frequent contacts with symptomatic people). similarly, use of gloves ranged from 19% (for occasional contacts with asymptomatic people) to 50% (for frequent contacts with symptomatic people) (fig 2). we found similar results in the subgroup of patients at high risk of a severe illness according to the french high council for public health. only 18.2% and 23.2% of patients would refuse contact with symptomatic people for occasional and frequent contacts, respectively. concerning the use of personal protective equipment, use of masks ranged from 30% (for occasional contacts with asymptomatic people) to 63% (for frequent contacts with symptomatic people). similarly, use of gloves ranged from 21% (for occasional contacts with asymptomatic people) to 44% (for frequent contacts with symptomatic people). patients' characteristics associated with the use of face masks identified in the logistic regression are presented in table 3.
the only variable found associated with use of face masks with asymptomatic people (or refusal to see these people) was patients' perception of a high risk of severe infection by covid-19 (odds ratio 1.93, 95% confidence interval 1.53-2.43). table 2. association between participant characteristics and their perceived risk of severe covid-19. results of logistic regression analysis of complete cases, accounting for weights obtained after calibration on margins for sex, age categories and educational level, using data from a national census describing the french population self-reporting at least one chronic condition. odds ratio (95% confidence interval). we involved 7169 individuals with chronic conditions in a nationwide survey nested in an existing cohort and described their perception of risk of severe covid-19 and their potential risk of infection due to context and attitudes. first, our study highlighted that patients with chronic conditions have distorted perceptions of their risk of severe covid-19. among patients meeting the criteria for high risk of severe covid-19 of the french high council for public health (40% of our sample), about 20% did not feel at risk and could therefore adopt incautious attitudes. this figure may even be conservative in light of recent works suggesting that all patients with hypertension, diabetes, cardiovascular disease, or chronic lung disease are at risk, not just those with complicated diseases [18] [19] [20]. data from the chinese center for disease control and prevention showed an increased case fatality rate among patients with preexisting comorbid conditions: 10.5% for cardiovascular disease, 7.3% for diabetes, 6.3% for chronic respiratory disease, 6.0% for hypertension, and 5.6% for cancer [20].
notably, our findings highlight that patients with a bmi ≥40 kg/m² or who smoked neither felt at risk nor took extra precautions when in contact with other people, despite these two factors being associated with risk of severe complications and mortality from covid-19 [21, 22]. these results are of importance because of the confluence of two elements. first, preventing infection for people at risk of severe disease is difficult. in our study, 21.2% of patients at high risk of a severe illness according to french recommendations were in frequent contact with "the outside world" during the quarantine because of their work, their household members working outside of the home, or regular visits from external contacts. second, feeling at risk seems to be the major factor for using face masks with asymptomatic people. table 3. association between participant characteristics and the use of face masks for occasional contacts with asymptomatic people (or the refusal to see these people). results of logistic regression analysis of complete cases, accounting for weights obtained after calibration on margins for sex, age categories and educational level, using data from a national census describing the french population self-reporting at least one chronic condition. odds ratio (95% confidence interval). at the time of the study, it was still unknown that 40% to 80% of transmission events could occur from people who are presymptomatic or asymptomatic [23]. therefore, specific communication clearly identifying patients at risk for severe illness by covid-19 is mandatory. communication should also target the household of these patients because the rate of secondary transmission among household contacts of patients with sars-cov-2 infection was estimated at 30% [24].
our results are important given the accumulating evidence showing that patients with chronic conditions, about 20 million individuals in france, are at increased risk of severe covid-19 and death. in a small case series conducted at the beginning of the epidemic in china, among 102 patients hospitalized for covid-19, those with comorbidities (especially hypertension, diabetes, cardiovascular and respiratory diseases) were more likely to be hospitalized in intensive care units [25, 26]. similar findings were observed in europe. in a large case series of 4000 patients hospitalized in icus in italy, the highest risk of death was for patients with chronic obstructive pulmonary disease (adjusted hr [ahr] 1.68, 95% ci 1.28-2.19) and type 2 diabetes (ahr 1.18, 95% ci 1.01-1.39) [27]. the reasons underlying these findings are still unclear, with hypotheses related to meta-inflammation or to the use of angiotensin-converting enzyme inhibitors (aceis)/angiotensin receptor blockers (arbs) in these populations, despite recent controversial findings about this latter point [28, 29]. our study complements the literature on the awareness and attitudes of patients with chronic conditions related to covid-19. to date, most works have focused on the general public [30, 31]. the knowledge and attitudes of patients with chronic conditions are largely unknown, apart from a study of 600 patients with chronic conditions in the united states that showed gaps in awareness and knowledge of covid-19 among these patients [8]. our findings confirm these results and provide details on individuals' risks associated with their context and their attitudes to prevent infection. this study has several limitations. first, all data were self-reported, with a risk of desirability bias regarding attitudes.
second, the individuals at high risk of severe illness with covid-19 are not yet well characterized; recommendations from the french high council for public health are mostly based on case reports from china and precautionary measures [15]. third, the response rate was relatively low (38%) owing to the short duration of data collection (10 days) and the sole use of e-mails for the invitation and reminders. yet, such a response rate is consistent with the literature on online surveys for the general public [32, 33]. non-respondents were younger, less multimorbid and had fewer conditions considered at high risk according to recommendations than respondents. despite statistical weighting, results should be generalized with caution. in conclusion, we found that individuals with chronic conditions may have distorted perceptions of their risk of severe illness with covid-19. targeted communication may increase the use of personal protective equipment and prevent infection, which is fundamental because 20% of these individuals are exposed to infection because of their work, their household or regular visits from external contacts, despite quarantine. s1 table. demographic characteristics of respondents and non-respondents to the survey (raw data). (docx)
references:
- preventing a covid-19 pandemic
- world health organization. novel coronavirus (covid-19) situation 2020
- clinical infectious diseases: an official publication of the infectious diseases society of america
- critical supply shortages: the need for ventilators and personal protective equipment during the covid-19 pandemic. the new england journal of medicine
- the effect of control strategies to reduce social mixing on outcomes of the covid-19 epidemic in wuhan, china: a modelling study. the lancet public health
- factors associated with hospitalization and critical illness among 4,103 patients with covid-19 disease
- awareness, attitudes, and actions related to covid-19 among adults with chronic conditions at the onset of the u.s. outbreak: a cross-sectional survey. annals of internal medicine
- collaborative open platform e-cohorts for research acceleration in trials and epidemiology (cooperate)
- european centre for disease prevention and control. data on the geographic distribution of covid-19 cases worldwide
- emmanuel macron annonce une série de mesures 2020
- page du ministère de la santé
- international classification of primary care
- avis provisoire: patients à risque de formes sévères de covid-19 et priorisation du recours aux tests de diagnostic virologique
- institut national de la statistique et des etudes économiques. la macro sas calmar
- direction de la recherche, des études, de l'évaluation et des statistiques. l'état de santé de la population en france: rapport 2017. paris: ministère des solidarités et de la santé
- covid-19: risk factors for severe disease and death. (clinical research ed). 2020; 368:m1091. epub 2020/03/29. pmid: 32217556
- characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: summary of a report of 72 314 cases from the chinese center for disease control and prevention
- severity and mortality associated with copd and smoking in patients with covid-19: a rapid systematic review and meta-analysis
- substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (sars-cov2)
- household transmission of sars-cov-2
- clinical features and short-term outcomes of 18 patients with corona virus disease 2019 in intensive care unit. clinical infectious diseases: an official publication of the infectious diseases society of america
- risk factors associated with mortality among patients with covid-19 in intensive care units in lombardy, italy
- obesity, and metabolic inflammation create the perfect storm for covid-19
- us public concerns about the covid-19 pandemic from results of a survey given via social media
- adoption of personal protective measures by ordinary citizens during the covid-19 outbreak in japan
- response rate and completeness of questionnaires: a randomized study of internet versus paper-and-pencil versions
- challenges using online surveys in a danish population of people with type 2 diabetes
the authors thank isabelle pane and carolina riveros for their help in the survey development and isabelle pane for data management.
key: cord-125330-jyppul4o authors: crokidakis, nuno; sigaud, lucas title: modeling the evolution of drinking behavior: a statistical physics perspective date: 2020-08-24 journal: nan doi: nan sha: doc_id: 125330 cord_uid: jyppul4o
in this work we study a simple compartmental model for drinking behavior evolution. the population is divided in 3 compartments regarding their alcohol consumption, namely susceptible individuals s (nonconsumers), moderated drinkers m and risk drinkers r. the transitions among those states are ruled by probabilities. despite the simplicity of the model, we observed the occurrence of two distinct nonequilibrium phase transitions to absorbing states. one of these states is composed only of susceptible individuals s, with no drinkers ($m=r=0$). on the other hand, the other absorbing state is composed only of risk drinkers r ($s=m=0$). between these two steady states, we have the coexistence of the three subpopulations s, m and r. comparison with abusive alcohol consumption data for brazil shows a good agreement between the model's results and the database. epidemic models have been widely used to study contagion processes such as the spread of infectious diseases [1] and rumors [2].
this kind of model has also been used for the spread of social habits, such as the smoking habit [3], cocaine [4] and alcohol consumption [5], obesity [6], corruption [7], cooperation [8], ideological conflicts [9], and also for other problems like the rise/fall of ancient empires [10], the dynamics of tax evasion [11] and radicalization phenomena [12]. the main reason such social behaviors can be modelled by contagion processes is that members of the ensemble respond to the social context of the studied subject. both social or peer pressure and positive reinforcement from other agents, regardless of whether the behavior brings positive or negative consequences to the individual, can influence each one's way of life. therefore, models for epidemics of infectious diseases are also able to describe the spread of such tendencies, like alcoholism [13, 14]. the standard medical way of categorizing alcohol consumption [15] is in three groups -nonconsumers, moderate (or social) consumers and risk (or excessive) consumers; thus, modeling of the interactions and the consequent changes of an individual from one group to another is governed by interaction parameters. one interesting aspect that should be taken into consideration when modeling alcohol consumption is the tendency of some individuals to gradually increase their consumption rate, not due to social susceptibility, but when under stressful or depressing circumstances, since alcohol plays a major role both as cause and consequence of depression, for instance [16]. this means that one can attach a probability of a moderate drinker becoming an excessive drinker that depends only on the actual moderate-drinker population size, instead of on the two population groups involved in the change.
if one considers the current world situation with the recent coronavirus disease 2019 (covid-19) outbreak, this self-induced increase in alcohol consumption is not only realistic, but also becomes more prominent -this has been observed in a myriad of studies this year detailing the consequences and dangers of both alcohol withdrawal (in places where it has become harder to legally acquire alcohol during the pandemic) and alcohol consumption increase [17, 18, 19, 20]. this work is organized as follows. in section 2, we present the model and define the microscopic rules that lead to its dynamics. the analytical and numerical results are presented in section 3, including comparisons with brazil's alcohol consumption data for a range of eleven years, used as a case study in order to evaluate the present model. finally, our conclusions are presented in section 4. our model is based on the proposal of references [5, 13, 14, 21, 22, 23, 24, 25] that treat alcohol consumption as a disease that spreads by social interactions. in this case, we consider an epidemic-like model where the transitions among the compartments are governed by probabilities. in this work we consider homogeneous mixing, i.e., a fully-connected population of n individuals. this population is divided in 3 compartments, namely: • s: nonconsumer individuals, i.e., individuals that have never consumed alcohol or have consumed in the past and quit. we will call them susceptible individuals, i.e., susceptible to become drinkers, either again or for the first time; • m: nonrisk consumers, individuals with regular low consumption. we will call them moderated drinkers; • r: risk consumers, individuals with regular high consumption. we will call them risk drinkers. to be precise, a moderated drinker is a man who consumes less than 50 cc of alcohol every day or a woman who consumes less than 30 cc of alcohol every day.
on the other hand, a risk drinker is a man who consumes more than 50 cc of alcohol every day or a woman who consumes more than 30 cc of alcohol every day [5]. since we are considering a contagion model, the probabilities related to changes in agents' compartments represent the possible contagions. the transitions among compartments are as follows: s + m → m + m, with probability β; s + r → m + r, with probability β; m + r → r + r, with probability δ; m → r, with probability α; r + s → s + s, with probability γ. in the above rules, β represents an "infection" probability, i.e., the probability that a consumer (m or r) individual turns a nonconsumer into a drinker. the risk drinkers r can also "infect" the moderated agents m and turn them into risk drinkers r, which occurs with probability δ. these two infections occur by contagion in our model, where individuals belonging to a group with a higher degree of consumption can influence others to drink more via social contact. the transition m → r can also occur spontaneously, with probability α, if a given agent increases his/her alcohol consumption -this is the only migration pathway from one group to another, in this model, that does not depend on the population of the receiving compartment, since it corresponds to a self-induced progression from moderate (m) to risk (r) drinking. as stated in the introduction above, the increase of alcohol consumption has been documented to occur under stressful circumstances (like the covid-19 pandemic) or clinical depression, regardless of social interaction with risk drinkers. finally, the probability γ represents the infection probability that turns risk drinkers r into susceptible agents s. in this case, it can represent the pressure of social contacts (family, friends, etc) over individuals that drink excessively.
we did not take into account transitions from risk (r) to moderate (m), assuming that, as a rule, once an individual reaches a behavior of excessive consumption of alcohol, contact with moderate drinkers does not imply a tendency to lower one's consumption -meanwhile, it is assumed that contacts that do not drink at all are able to exert a higher pressure on them to quit drinking. it is not that the risk-to-moderate transition cannot occur -it is just that for our model this probability, in the overall picture, is negligible. for simplicity, we consider a fixed population, i.e., at each time step t we have the normalization condition s(t) + m(t) + r(t) = 1, where we defined the population densities s(t) = s(t)/n, m(t) = m(t)/n and r(t) = r(t)/n. since we will only deal with the relative proportions among the three different groups in relation to the total population n, i.e. the population densities, we will not take into account birth-mortality relations and population increase/decrease effects. so, even if n is not a constant number, for all modelling purposes it will not matter, due to the fact that we will deal only with the s(t), m(t) and r(t) subpopulations in relation to the total population. one other way of looking at this approximation is to consider only the adult population as relevant to our modelling, and assume that new individuals coming of age correspond to the number of deaths [24, 25]. based on the microscopic rules defined in the previous subsection, one can write the master equations that describe the time evolution of the densities s(t), m(t) and r(t) as follows: ds/dt = -β (m + r) s + γ s r, (1) dm/dt = β (m + r) s - α m - δ m r, (2) dr/dt = α m + δ m r - γ s r, (3) and we also have the normalization condition s(t) + m(t) + r(t) = 1, (4) valid at each time step t. first of all, one can analyze the early evolution of the population, for small times. considering the initial conditions s(0) ≈ 1, m(0) ≈ 1/n and r(0) = 0, one can linearize eq.
(2) to obtain dm/dt = (β - α) m = α (r_0 - 1) m, which can be directly integrated to obtain m(t) = m_0 e^(α (r_0 - 1) t), where m_0 = m(t = 0), and one can read off the expression for the basic reproduction number, r_0 = β/α. as is usual in epidemic models [1, 27], the disease (alcoholism) will persist in the population if r_0 > 1, i.e., for β > α. one can start by analyzing the time evolution of the three classes of individuals. we numerically integrated eqs. (1), (2) and (3) to analyze the effects of the variation of the model's parameters. as initial conditions, we considered s(0) = 0.99, m(0) = 0.01 and r(0) = 0, and for simplicity we fixed α = 0.03 and δ = 0.07, varying the parameters β and γ. in fig. 1 (a), (b) and (c) we exhibit results for fixed β = 0.07 and typical values of γ. one can see that the increase of γ causes the increase of s and the decrease of m and r. remember that γ models the persuasion of nonconsumers s in the social interactions with risk drinkers r, i.e., the social pressure of individuals that do not consume alcohol over their contacts (friends, relatives, etc) that consume too much alcohol. on the other hand, in fig. 1 (d) we considered γ = 0.07 and β = 0.15. for this case, where we have β > γ, we see that the densities evolve in time, and in the steady state we observe the survival of only the risk drinkers, i.e., for t → ∞ we have s = m = 0 and r = 1. this last result will be discussed in more detail analytically in the following. as we observed in fig. 1, the densities s(t), m(t) and r(t) evolve in time, and after some time they stabilize. in such steady states, the time derivatives in eqs. (1)-(3) are zero. in the t → ∞ limit, eq. (1) gives us (-β m - β r + γ r) s = 0, where we denoted the stationary values as s = s(t → ∞), m = m(t → ∞) and r = r(t → ∞). this last equation has two solutions: one of them is s = 0, and from the other we can obtain a relation between r and m, m = [(γ - β)/β] r. (7) considering now the limit t → ∞ in eq. (2), one obtains β s r = (α + δ r - β s) m. (8)
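the time evolution just described can be reproduced with a simple explicit euler integration of the three master equations; the integration scheme and step size are choices of this illustration (not taken from the paper), using the parameter values α = 0.03, δ = 0.07, β = 0.07 and γ = 0.15 quoted for fig. 1:

```python
def integrate(beta, gamma, alpha, delta, s0=0.99, m0=0.01, r0=0.0,
              t_max=5000.0, dt=0.1):
    """explicit euler integration of the densities s(t), m(t), r(t)."""
    s, m, r = s0, m0, r0
    for _ in range(int(t_max / dt)):
        ds = -beta * (m + r) * s + gamma * s * r              # eq. (1)
        dm = beta * (m + r) * s - alpha * m - delta * m * r   # eq. (2)
        dr = alpha * m + delta * m * r - gamma * s * r        # eq. (3)
        s, m, r = s + dt * ds, m + dt * dm, r + dt * dr
    return s, m, r

alpha, delta, beta, gamma = 0.03, 0.07, 0.07, 0.15
basic_r0 = beta / alpha   # r_0 = beta/alpha > 1, so drinking persists
s, m, r = integrate(beta, gamma, alpha, delta)
```

for these parameter values the run lands in the coexistence regime: all three densities stabilize at nonzero stationary values, and the normalization s + m + r = 1 is preserved along the whole trajectory, since the three derivatives sum to zero.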
if the obtained solution s = 0 is valid, this relation gives us m = 0 and consequently, from (4), we have r = 1. this solution represents an absorbing state [28, 29], since the dynamics becomes frozen due to the absence of s and m agents. we will discuss this solution in more detail in the following. considering now the relation (7) and the normalization condition (4), one can obtain s = 1 - m - r = 1 - (γ/β) r. (9) substituting (9) and (7) in (8), one obtains r = [β γ - α (γ - β)] / [γ² + δ (γ - β)]. (10) considering this result (10) in eqs. (9) and (7) we obtain, respectively, s = (γ - β)(α γ + β δ) / {β [γ² + δ (γ - β)]} (11) and m = (γ - β)[β γ - α (γ - β)] / {β [γ² + δ (γ - β)]}. (12) the obtained eqs. (10)-(12) represent a second possible steady state solution of the model, which is a realistic solution since the three fractions s, m and r coexist in the population. we can look at eq. (10) in more detail. it can be rewritten in the critical phenomena perspective as [30, 31] r = [(α + γ) / (γ² + δ (γ - β))] (β - β_c^(1)), (13) where the critical points are given by β_c^(1) = α γ / (α + γ) (17) and β_c^(2) = γ. (18) the competition among the contagions causes the occurrence of such three regions in the model. on one side we have drinkers (moderated and risk) influencing nonconsumers to consume alcohol, with probability β. on the other hand, we have the social pressure of nonconsumers over risk drinkers, with probability γ, in order to make such alcoholics begin treatment and stop drinking. finally, it is important to mention the parameter α, that drives the only transition of the model that does not depend on a direct social interaction. that parameter models the spontaneous increase of alcohol consumption, and it is also responsible for the first phase transition (together with γ), since we have β_c^(1) = 0 for α = 0. it means that the alcohol consumption (the "disease") cannot be eliminated from the population after a long time if there is no spontaneous increase of alcohol consumption from individuals that drink moderately, which is a realistic feature of the model. for clarity, we exhibit in fig. 3 the phase diagram of the model in the plane β versus γ, separating the three above discussed regions. in fig.
3, the absorbing phase with s = 1 and m = r = 0 is located in region i, for β < β_c^(1); the coexistence phase (where the three densities coexist) is denoted by ii, for β_c^(1) < β < β_c^(2); and the other absorbing phase, where s = m = 0 and r = 1, is located in region iii, for β > β_c^(2). from this figure we see the mentioned competition among the contagions. indeed, if β is sufficiently high, many nonconsumers become moderated drinkers. such moderated drinkers will become risk drinkers (via probabilities α and δ), and in the case of small γ we will observe after a long time the disappearance of nonconsumers and moderated drinkers (region iii). in the opposite case, i.e., for high γ and small β, the flux into the compartment s is intense, and in the long-time limit the other two subpopulations m and r disappear (region i). finally, for intermediate values of β and γ the competition among the social interactions leads to the coexistence of the three subpopulations in the stationary states (region ii). it is worthwhile to mention that the sizes of regions i and ii are directly dependent on the probability α, while region iii is always fixed due to eq. (18). this means that, if the parameter α is increased, region i will become gradually larger, which is an indication that the spontaneous evolution from moderate to risk drinking behavior enlarges the absorbing phase of nonconsumers. in consequence, since the probability α represents the percentage of moderate drinkers that become risk drinkers without the need for social interaction, it is a crucial factor not only to implement the theoretical model but also to identify a possible percentage of the population that has a natural tendency to present excessive alcohol consumption behavior, regardless of their social interaction network. for fig. 3, for instance, this value is 3%. larger values of α narrow the set of parameters that can be chosen in order to realistically describe a real system.
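the phase boundaries just discussed can be wrapped in two small helpers; this is a sketch, with β_c^(1) = αγ/(α + γ) (which vanishes at α = 0, as noted above) and β_c^(2) = γ (independent of α, which is why region iii stays fixed):

```python
def critical_points(alpha, gamma):
    """phase boundaries of the model: below beta_c1 only nonconsumers
    survive (region i); above beta_c2 only risk drinkers survive
    (region iii)."""
    beta_c1 = alpha * gamma / (alpha + gamma)
    beta_c2 = gamma
    return beta_c1, beta_c2

def region(beta, gamma, alpha):
    """classify a (beta, gamma) point of the phase diagram."""
    b1, b2 = critical_points(alpha, gamma)
    if beta < b1:
        return "I"    # absorbing state s = 1, m = r = 0
    if beta < b2:
        return "II"   # coexistence of s, m and r
    return "III"      # absorbing state r = 1, s = m = 0
```

for the fig. 1 values (α = 0.03, γ = 0.15) the first boundary sits at β_c^(1) = 0.025, so β = 0.07 falls in the coexistence region ii, while β > 0.15 falls in region iii, consistent with the β > γ case discussed for fig. 1 (d).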
finally, we compare the model's results with data on abusive alcohol consumption in brazil [26]. the data were collected from 2009 to 2019; thus, in fig. 4 the initial time t = 0 represents the fraction of abusive drinkers for 2009, t = 1 represents the fraction for 2010, and so on. since the data refer to the fraction of people who consume alcohol abusively, we plot the density of risk drinkers r(t) together with the data. in order to compare them with the model, we considered for the initial density of risk drinkers r(0) = 0.185 and numerically integrated eqs. (1)-(3). the value 0.185 was chosen since it is the fraction of abusive drinkers for 2009 obtained from the database [26]. in addition, we rescaled the time of the simulation results to match the time of the real data: the simulation time was multiplied by 0.12 for a better comparison. we find that the simulated drinking trajectories qualitatively correspond to the data. for the numerical results, we considered the parameters β = 0.06, γ = 0.11, α = 0.047 and δ = 0.2, which indicate that the probability of finding an individual who spontaneously became a risk drinker in brazil during the last decade is around 4.7%. furthermore, looking at eqs. (17) and (18), it is easy to see that in order to model brazil's data we must have β (1) c < β < β (2) c, showing that the model describes the available data in its most realistic regime (region ii of fig. 3). naturally, when compared with actual data, the model should present the coexistence phase, i.e. coexistence between the three different population groups, since descriptions with only nonconsumers or only risk drinkers are unrealistic. this qualitative agreement with brazil's database in the realistic regime of the model points to a good, albeit simplistic, modelling. in this work, we have studied a compartmental model that aims to describe the evolution of drinking behavior in an adult population.
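the comparison with the brazilian series can be sketched as follows. the equations are the same assumed mean-field form as in the previous snippet (not necessarily the paper's eqs. (1)-(3)); r(0) = 0.185, the parameter values and the time-rescaling factor 0.12 are taken from the text, while the split of the remaining 81.5% between s(0) and m(0) is not reported in this excerpt and is an assumption here:

```python
def risk_trajectory(years=11, beta=0.06, gamma=0.11, alpha=0.047, delta=0.2, dt=0.05):
    """Yearly samples of r(t) under assumed mean-field equations,
    using the text's rescaling t_data = 0.12 * t_sim (2009 -> index 0)."""
    s, m, r = 0.515, 0.30, 0.185  # r(0) = 0.185 from the text; the s/m split is an assumption
    samples = [r]
    sim_per_year = 1.0 / 0.12     # one data year corresponds to ~8.33 simulation time units
    for _ in range(years - 1):
        for _ in range(int(sim_per_year / dt)):
            ds = -beta * s * (m + r) + gamma * s * r
            dm = beta * s * (m + r) - alpha * m - delta * m * r
            dr = alpha * m + delta * m * r - gamma * s * r
            s, m, r = s + dt * ds, m + dt * dm, r + dt * dr
        samples.append(r)
    return samples

r_by_year = risk_trajectory()  # index 0 -> 2009, ..., index 10 -> 2019
```

with these parameters the sampled trajectory stays close to its initial value, drifting toward the coexistence fixed point of the assumed equations (r of roughly 0.19).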
we considered a fully-connected population divided into three compartments, namely susceptible individuals s (nonconsumers), moderate drinkers m and risk drinkers r. the transitions among the compartments are ruled by probabilities representing the social interactions among individuals, as well as spontaneous decisions, in particular moderate drinkers evolving into risk drinkers, and we studied the model through analytical and numerical calculations. from the theoretical point of view, the model is of interest to statistical physics since we observed the occurrence of two distinct nonequilibrium phase transitions. these transitions separate the model into three regions: (i) existence of nonconsumers only; (ii) coexistence of the three compartments; and (iii) existence of risk drinkers only. regions i and iii represent two distinct absorbing phases, since the system becomes frozen due to the existence of only one subpopulation in each case. this means that, in order to describe real populational systems, the parameters must be chosen so that the model falls in region ii, since populations consisting solely of nonconsumers or risk drinkers do not represent a realistic entity. the critical points of these transitions were obtained analytically. a comparison with available data on brazil's extreme alcohol consumption over the past decade shows a good qualitative agreement with the model, with the chosen parameters framed within its realistic boundaries. it will be important, in a couple of years' time, to re-evaluate these results in the light of new data comprising the years 2020 and 2021, in order to verify the direct effects of the covid-19 pandemic on the brazilian population's alcohol consumption. a hypothesis to be tested is a possible increase in the parameter α combined with a corresponding decrease in the other parameters, which correspond to social interactions.
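the analytically obtained critical points mentioned above can be probed numerically. under the same assumed mean-field sketch used in the snippets above (an assumption, not the paper's exact equations), the coexistence fixed point has a closed form, and the order parameter m vanishes linearly as β approaches the upper critical point (β (2) c = γ in the assumed sketch), consistent with the mean-field value of 1 for the order-parameter exponent:

```python
import math

def coexistence_m(beta, gamma=0.11, alpha=0.047, delta=0.2):
    """Closed-form m at the coexistence fixed point of the assumed equations:
    setting the rates to zero gives m = k*r with k = (gamma - beta)/beta and
    r = (gamma - k*alpha) / (k*delta + gamma*(1 + k))."""
    k = (gamma - beta) / beta
    r = (gamma - k * alpha) / (k * delta + gamma * (1.0 + k))
    return k * r

# m ~ (beta2c - beta)^1 with beta2c = gamma in this sketch:
# estimate the exponent from two distances to the critical point.
eps1, eps2 = 1e-4, 1e-5
m1 = coexistence_m(0.11 - eps1)
m2 = coexistence_m(0.11 - eps2)
exponent = math.log(m1 / m2) / math.log(eps1 / eps2)  # close to 1
```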
the phase transitions observed in the model are active-absorbing phase transitions, and the predicted critical exponent for the order parameter is 1, i.e., m ∼ (β − β_c)^1, as in mean-field directed percolation, which is the prototype of a phase transition to an absorbing state [30, 31]. it would be interesting to estimate numerically other critical exponents of the model, as well as to simulate it on regular d-dimensional lattices (e.g. square and cubic) in order to obtain all the critical exponents. this is important to define precisely the universality class of the model, as well as its upper critical dimension. this extension is left for future work. furthermore, one can also consider the inclusion of heterogeneities in the population, such as agents' conviction [32], time-dependent transition rates [33], inflexibility [34], etc.

references:
[1] the mathematical theory of infectious diseases and its applications, charles griffin & company ltd
[2] epidemics and rumours
[3] analysing the spanish smoke-free legislation of 2006: a new method to quantify its impact using a dynamic model
[4] predicting cocaine consumption in spain: a mathematical modelling approach
[5] alcohol consumption in spain and its economic cost: a mathematical modeling approach
[6] modeling the obesity epidemic: social contagion and its implications for control
[7] can honesty survive in a corrupt parliament?
[8] evolution of tag-based cooperation on erdős-rényi random graphs
[9] encouraging moderation: clues from a simple model of ideological conflict
[10] the dynamics of the rise and fall of empires
[11] dynamics of tax evasion through an epidemic-like model
[12] modeling radicalization phenomena in heterogeneous populations
[13] social epidemiology and complex system dynamic modelling as applied to health behaviour and drug use research
[14] agent-based modeling of drinking behavior: a preliminary model and potential applications to theory and practice
[15] world health statistics 2018: monitoring health for the sdgs, sustainable development goals
[16] the prevalence and impact of alcohol problems in major depression: a systematic review
[17] bilinski, alcohol consumption reported during the covid-19 pandemic: the initial stage
[18] alcohol use and misuse during the covid-19 pandemic: a potential public health crisis?
[19] complicated alcohol withdrawal: an unintended consequence of covid-19 lockdown
[20] alcohol use in times of the covid-19: implications for monitoring and policy
[21] modelling alcohol problems: total recovery
[22] mohyud-din, a conformable mathematical model for alcohol consumption in spain
[23] dynamics of an alcoholism model on complex networks with community structure and voluntary drinking
[24] modeling binge drinking
[25] drinking as an epidemic: a simple mathematical model with recovery and relapse, in: therapist's guide to evidence-based relapse prevention
[26] vigitel brasil 2019: vigilância de fatores de risco e proteção para doenças crônicas por inquérito telefônico: estimativas sobre frequência e distribuição sociodemográfica de fatores de risco e proteção para doenças crônicas nas capitais dos 26 estados brasileiros e no distrito federal em 2019
[27] covid-19 spreading in rio de janeiro, brazil: do the policies of social isolation really work?
[28] survival of the scarcer in space
[29] symbiotic two-species contact process
[30] nonequilibrium phase transitions in lattice models
[31] non-equilibrium critical phenomena and phase transitions into absorbing states
[32] competition among reputations in the 2d sznajd model: spontaneous emergence of democratic states
[33] critical behavior of the sis epidemic model with time-dependent infection rate
[34] inflexibility and independence: phase transitions in the majority-rule model

acknowledgments: the authors thank ronald dickman for some suggestions. financial support from the brazilian scientific funding agencies cnpq (grants 303025/2017

key: cord-017351-73hlwwdh
authors: quarantelli, e. l.; boin, arjen; lagadec, patrick
title: studying future disasters and crises: a heuristic approach
date: 2017-09-12
journal: handbook of disaster research
doi: 10.1007/978-3-319-63254-4_4
sha:
doc_id: 17351
cord_uid: 73hlwwdh

over time, new types of crises and disasters have emerged. we argue that new types of adversity will continue to emerge. in this chapter, we offer a framework to study and interpret new forms of crises and disasters. this framework is informed by historical insights on societal interpretations of crises and disasters. we are particularly focused here on the rise of transboundary crises – those crises that traverse boundaries between countries and policy systems. we identify the characteristics of these transboundary disruptions, sketch a few scenarios and explore the societal vulnerabilities to this type of threat. we end by discussing some possible implications for planning and preparation practices. disasters and crises are as old as when human beings started to live in groups. through the centuries, new types have emerged. for instance, the development of synthetic chemicals in the 19th century and nuclear power in the 20th century created the possibility of toxic chemical disasters and crises from radioactive fallout.
older crisis types did not disappear: ancient types such as floods and earthquakes remain with us. the newer disasters and crises are additions to older forms; they recombine elements of old threats and new vulnerabilities. the literature on crisis and disaster research suggests that we are at another important historical juncture with the emergence of a new distinctive class of disasters and crises not often seen before (ansell, boin, & keller, 2010; helsloot, boin, jacobs, & comfort, 2012; tierney, 2014). in this chapter, we discuss the rise of transboundary crises and disasters, and we offer a heuristic approach to studying and understanding these disasters and crises of the future. it is presented primarily as an aid or guide to looking further into the matter, hopefully stimulating more investigation of conceptions of disasters and crises in the past, the present, and the future. unlike in some areas of scientific inquiry, where seemingly final conclusions can be reached (e.g., about the speed of light), the basic nature of the phenomenon we are discussing is dynamic and subject to change through time. the answer to the question of what is a disaster or crisis has evolved and will continue to do so (see perry's chapter in this handbook). human societies have always been faced with risks and hazards. earthquakes, hostile inter- and intra-group relationships, massive floods, sudden epidemics, threats to take multiple hostages or massacre large numbers of persons, avalanches, fires and tsunamis have marked human history for centuries if not eons. disasters and crises requiring a group reaction are as old as when human beings started to live in stable communities. 1 the earliest happenings are attested to in legends and myths, oral traditions and folk songs, religious accounts and archeological evidence from many different cultures and subcultures around the world.
for example, a "great flood" story has long existed in many places (lang, 1985). as human societies evolved, new threats and hazards emerged. to the old have been added new dangers and perils that have become increasingly dangerous to human groups. risky technological agents have been added to natural hazards. these involve chemical, nuclear and biological threats that can accidentally materialize as disasters. intentional conflict situations have become more damaging, at least in the sense of involving more and more victims. the last 90 years have seen two world wars, massive air and missile attacks by the military on civilians distant from battle areas, many terrorist attacks, and widespread ethnic strife. genocide killed one million persons in rwanda; millions have become refugees and tens of thousands have died in darfur in the sudan in africa. while terrorism is not a new phenomenon, its targets have considerably expanded. some scholars and academics have argued that the very attempt to cope with increasing risks, especially of a technological nature, is indirectly generating new hazards. as the human race has increasingly been able to cope with such basic needs as food and shelter, some of the very coping mechanisms involved (such as the double-edged consequences of agricultural pesticides) have generated new risks for human societies (beck, 1999; perrow, 1999). for example, in 2004, toxic chemicals were successfully used to eradicate massive locust infestations affecting ten western and northern african countries. those very chemicals had other widespread negative effects on humans, animals and crops (irin, 2004). implicit in this line of thinking is the argument that double-edged consequences of new innovations (such as the use of chemicals, nuclear power and genetic engineering) will continue to appear (tenner, 1996).
we cannot say that the future will bring more disasters, as we have no reliable statistics on prior happenings to use as a baseline for counting (quarantelli, 2001). at present, it would seem safer to argue that some future events are qualitatively different, and not necessarily that there will be more of them in total (although we would argue the last is a viable hypothesis that requires a good statistical analysis). societies for the most part have not been passive in the face of these dangers to human life and well-being. this is somewhat contrary to what is implicit in much of the social science literature, especially about disasters. in fact, some of these writings directly or indirectly state that a fatalistic attitude prevailed in the early stages of societal development (e.g., quarantelli, 2000). this was thought to be the case because religious beliefs attributed negative societal happenings to punishments or tests by supernatural entities (the "acts of god" notion, although this particular phrase became a common usage mostly because it served the interests of insurance companies). but prayers, offerings and rituals are widely seen as means to influence the supernatural. so passivity is not an automatic response to disasters and crises even by religious believers, an observation sometimes unnoticed by secular researchers. in fact, historical studies strongly indicate that societal interpretations have been more differentiated than once believed and have shifted through the centuries, at least in the western world.

1 this seems to have occurred about five to six thousand years ago (see lenski, lenski, & nolan, 1991). however, recent archeological studies suggest that humans started to abandon nomadic wanderings and settled into permanent sites around 9,500 years ago (balter, 2005), so community-recognized disasters and crises might have an even longer history.
in ancient greece, aristotle categorized disasters as the result of natural phenomena and not manifestations of supernatural interventions (aristotle, 1952). the spread of christianity about 2,000 years ago helped foster the belief that disasters were "special providences sent directly" from "god to punish sinners" (mulcahy, 2002, p. 110). in the middle ages, even scholars and educated elites "no longer questioned the holy origins of natural disasters" (massard-guilbaud, platt, & schott, 2002, p. 19). starting in the 17th century, however, such religious explanations started to be replaced by "ones that viewed disasters as accidental or natural events" (mulcahy, 2002, p. 110). this, of course, also reflected a strong secularization trend in western societies. perhaps this trend reached a climax with the 1755 lisbon earthquake, which dynes (2000, p. 10) notes can be seen as the "first modern disaster". so far our discussion has been mostly from the perspective of the educated elites in western societies. little scholarly attention seems to have been given to what developed in non-western social systems. one passing observation about the ottoman empire and fire disasters suggests that the pattern just discussed might not be universal. thus, while fire prevention measures were encouraged in cities, they were not mandated, "since calamities were considered" as expressions of the will of god (yerolympos, 2002, p. 224). even as late as 1826, an ottoman urban building code stated that, according to religious writing, "the will of the almighty will be done" and nothing can or should be done about that. at the same time, this code advanced the idea that there were nevertheless protective measures that could be taken against fires that are "the will of allah" (quoted in yerolympos, 2002, p. 226). of course, incompatibility between natural and supernatural views about the world is not unique to disaster and crisis phenomena, but that still leaves the distinction important.
even recently, an australian disaster researcher asserted that in the 2004 southwestern asian tsunami most of the population seemed to believe that the disaster was "sent either as a test of faith or punishment" (mcaneney, 2005, p. 3). or, as another writer noted, following the tsunami, religiously oriented views surfaced. some were by "fundamentalist christians", who tend to view all disasters "as a harbinger of the apocalypse". others were by "radical islamists", who are inclined to see any disaster that "washes the beaches clear of half-nude tourists to be divine" (neiman, 2005, p. 16). after hurricane katrina, some leaders of evangelical groups spoke of the disaster as punishment imposed by god for "national sins" (cooperman, 2005). in the absence of systematic studies, probably the best hypothesis to be researched is that at present religious interpretations of disasters and crises still appear to be widely held, but relative to the past have probably eroded among people in general. this orientation is almost certainly affected by sharp cross-societal differences in the importance attributed to religion, as can be noted in the religious belief systems and practices that currently exist in the united states and many islamic countries, compared to japan or a highly secular western europe. apart from the varying interpretations of the phenomena, how have societies behaviorally reacted to existing and ever-changing threats and risks? as a whole, human groups have evolved a variety of formal and informal mechanisms to prevent and to deal with crises and disasters. but societies have followed different directions depending on the perceived sources of disasters and crises.

2 for an interesting attempt to deal with these two perspectives, see the paper entitled disaster: a reality or a construct? perspective from the east, written by jigyasu (2005), an indian scholar.
responses tend to differ with the perception of the primary origin (the supernatural, the natural or the human sphere). for example, floods were long ago seen as a continuing problem that required a collective response involving engineering measures. stories that a chinese emperor, 23 centuries before christ, deepened the ever-flooding yellow river by massive dredging and the building of diversion canals may be more legend than fact (waterbury, 1979, p. 35). however, there is clear evidence that in egypt in the 20th century bc the 12th dynasty pharaoh amenemher ii completed, southwest of cairo, what was probably history's first substantial river control project (an irrigation canal and dam with sluice gates). other documentary evidence indicates that dams for flood control purposes were built as far back as 1260 bc in greece (schnitter, 1994, pp. 1, 8-9). such mitigatory efforts indicate both the belief that there was a long-term natural risk and the belief that it could be coped with by physically altering structural dimensions. later, particularly in europe, there were many recurrent efforts to institute mitigation measures. for example, earthquake-resistant building techniques were developed in ancient rome, although "they had been forgotten by the middle ages" (massard-guilbaud et al., 2002, p. 31). the threats from floods and fires spurred mitigation efforts in greece. starting in the 15th century, developing urban areas devised many safeguards against fires, varying from regulations regarding inflammable items to the storage of water for firefighting purposes. in many towns in medieval poland, dams, dikes and piles along riverbanks were built (sowina, 2002). of course, the actions taken were not always successful. but, if nothing else, these examples show that organized mitigation efforts have been undertaken for a long time in human history.
there have been two other major behavioral trends of long duration that are really preventive in intent, if not always in reality. one has been the routinization of responses by emergency-oriented groups so as to prevent emergencies from escalating into disasters or crises. for example, in ancient rome, the first groups informally set up to fight fires were composed of untrained slaves. but when a fire in 6 a.d. burned almost a quarter of rome, a corps of vigiles was created that had full-time personnel and specialized equipment. in more recent times, there are good examples of this routinization in the planning of public utilities that have standardized operating procedures to deal with everyday emergencies so as to prevent them from materializing into disasters. in the conflict area, there are various un and other international organizations, such as the international atomic energy agency and the european union (eu), that also try to head off the development of crises. in short, societies have continually evolved groups and procedures to try to prevent old and new risks and threats from escalating into disasters and crises. a second, more recent major trend has been the development of specific organizations to deal first with wartime crises and then with peacetime disasters. for about a century, societies have been creating specific organizations to deal first with the new risks for civilians created by changes in warfare, and then improving on these new groups as they have been extended to peacetime situations. rooted in the civil defense groups created for air raid situations, there has since been the evolution of civilian emergency management agencies (blanchard, 2004). accompanying this has been the start of the professionalization of disaster planners and crisis managers. there has been a notable shift from the involvement of amateurs to educated professionals.
human societies adjusted not only to the early risks and hazards, but also to the newer ones that appeared up to the last century. the very existence of the human race is testimony to the social coping mechanisms of humans as they face such threats. here and there a few communities and groups have not been able to cope with the manifestations of contemporary risks and hazards (diamond, 2005). but these have been very rare cases. neither disasters nor crises involving conflict have had that much effect on the continuing existence of cities anywhere in the world. throughout history, many cities have been destroyed. they have been "sacked, shaken, burned, bombed, flooded, starved, irradiated and poisoned", but in almost every case they have, phoenix-like, been reestablished (vale & campanella, 2004, p. 1). around the world, from the 12th to the 19th century, only 42 cities were "permanently abandoned following destruction" (vale & campanella, 2004, p. 1). the same analysis notes that large cities such as baghdad, moscow, aleppo, mexico city, budapest, dresden, tokyo, hiroshima and nagasaki all suffered massive physical destruction and lost huge numbers of their populations due to disasters and wartime attacks. all were rebuilt and rebounded. at the start of the 19th century, "such resilience became a nearly universal fact" about urban settlements around the world (vale & campanella, 2004, p. 1). looking at these cities today, as well as warsaw, berlin, hamburg and new orleans, it seems this recuperative tendency is very strong (see also schneider & susser, 2003). in the hiroshima museum that now exists at the exact point where the bomb fell, there is a 360-degree photograph of the zone around that point, taken a few days after the attack. except for a few piles of ruins, there is nothing but rubble as far as the eye can see in every direction. there were statements made that this would be the scene at that location for decades.
but a visitor to the museum today can see in the windows behind the circular photograph many signs of a bustling city and its population (for a description of the museum see webb, 2006). hiroshima did receive much help and aid to rebuild. but the city came back in ways that observers at the time of impact did not foresee. early efforts to understand and to cope with disasters and crises were generally of an ad hoc nature. with the strong development of science in the 19th century, there was the start of understanding of the physical aspects of natural disasters, and this had some influence on the structural mitigation measures that were undertaken. however, the systematic social science study of crises and disasters is only about a half century old (fritz, 1961; kreps, 1984; quarantelli, 1988, 2000; schorr, 1987; wright & rossi, 1981). in short, there is currently a solid body of research-generated knowledge developed over the last half century of continuing and ever-increasing studies around the world in different social science disciplines. to be sure, such accounts and reports are somewhat selective and not complete. there are now case studies and analytical reports on natural and technological disasters (and to some extent on other crises) numbering in the four figures. in addition, there are numerous impressions of specific behavioral dimensions that have been derived from field research (for summaries and inventories see alexander, 2000; cutter, 1994; dynes, demarchi, & pelanda, 1987; dynes & tierney, 1994; farazmand, 2001; helsloot, boin, jacobs, & comfort, 2012; mileti, 1999; oliver-smith, 1999; perry, lindell, & prater, 2005; rosenthal, boin, & comfort, 2001; rosenthal, charles, & 't hart, 1989; tierney, lindell, & perry, 2001; turner, 1978). what are the distinctive aspects of the newer disasters and crises that are not seen in traditional ones?
to answer this question, we considered what social science studies and reports had found about behavior in disasters and crises up to the present time. we then implicitly compared those observations and findings with the distinctive behavioral aspects of the newer disasters and crises. one issue that has always interested researchers and scholars is how to conceptualize disasters and crises. there is far from full agreement that all disasters and crises can be categorized together as relatively homogeneous phenomena (quarantelli, 1998; perry & quarantelli, 2005). this is despite the fact that there have been a number of attempts to distinguish between, among and within different kinds of disasters and crises. however, no one overall view has won anywhere near general acceptance among self-designated disaster and crisis researchers. to illustrate, we will briefly note some of the major formulations advanced. for example, one attempt has been to distinguish between natural and technological disasters (erikson, 1994; picou & gill, 1996). the basic assumption was that the inherent nature of the agent involved made a difference. implicit was the idea that technological dangers or threats present a different and more varying kind of challenge to human societies than do natural hazards or risks. most researchers have since dropped the distinction, as hazards have come to be seen as less important than the social setting in which they appear. in recent major volumes on what is a disaster (quarantelli, 1998; perry & quarantelli, 2005), the distinction was not even mentioned by most of the two dozen scholars who addressed the basic question. other scholars have struggled with the notion that there may be some important differences between what can be called "disasters" and "crises". the assumption here is that different community-level social phenomena are involved, depending on the referent.
thus, some scholars distinguish between consensus and conflict types of crises (stallings, 1988, tries to reconcile the two perspectives). in some research circles, almost all natural and most technological disasters are viewed as consensus types of crises (quarantelli, 1998). these are contrasted with crises involving conflict, such as are exemplified by riots, terrorist attacks, ethnic cleansings and intergroup clashes. in the latter type, at least one major party is either trying to make it worse or to extend the duration of the crisis. in natural and technological disasters, no one deliberately wants to make the situation worse or create more damage or fatalities. now, there can be disputes or serious disagreements in natural or technological disasters. it is almost inevitable that there will be some personal, organizational and community conflicts, as, for example, in the recovery phase of disasters, where scapegoating is common (bucher, 1957; drabek & quarantelli, 1967; cf. boin, mcconnell, & 't hart, 2008). in some crises, the overall intent of major social actors is to deliberately attempt to generate conflict. in contrast to the unfolding sequential process of natural disasters, terrorist groups or protesting rioters not only intentionally seek to disrupt social life; they also modify or delay their attacks depending on perceived countermeasures. apart from the simple observable logical distinction between consensus and conflict types of crises, empirical studies have also established behavioral differences. for example, looting behavior is distinctively different in the two types. in the typical disaster in western societies, looting is almost always rare, covert and socially condemned, done by individuals, and involves targets of opportunity.
in contrast, in many conflict crises looting is very common, overt and socially supported, undertaken by established groups of relatives or friends, and involves deliberately targeted locations (quarantelli & dynes, 1969). likewise, there are major differences in hospital activities in the two kinds of crises, with more variation in conflict situations. there are also differences in the extent to which both organizational and community-level changes occur as a result of consensus and conflict crises, with more changes resulting from conflict occasions (quarantelli, 1993). finally, it has been suggested that the mass media system operates differently in terrorism situations than in natural and technological disasters (project for excellence in journalism, 1999, 2001). 3 both the oklahoma city bombing and the 9-11 world trade center attack led to sharp clashes between different groups of initial organizational responders. there were those who saw these happenings primarily as criminal attacks necessitating closure of the location as a crime scene, and those who saw them primarily as situations where priority ought to be on rescuing survivors. in the 9-11 situation, the clash continued later into the issues of the handling of dead bodies and debris clearance. all this goes to show that crises and disasters are socially constructed. whether it is by theorists, researchers, operational personnel, politicians or citizens, any designation comes from the construction process and is not inherent in the phenomena themselves. this is well illustrated in an article by cunningham (2005), where he shows that a major cyanide spill into the danube river was variously defined as an incident, an accident, or a catastrophe, depending on how culpability was perceived and who was doing the defining.

3 for a contrary view that sees terrorist occasions as more or less the same as what behaviorally appears in natural and technological disasters, see fischer (2003).
still other distinctions have been made. some advocate "crisis" as the central concept in description and analysis (see the chapter by boin, kuipers and 't hart in this handbook). in this line of thinking, a crisis involves an urgent threat to the core functions of a social system. a disaster is seen as "a crisis with a bad ending" (boin, 2005). this is consistent with the earlier expressed idea that while there are many hazards and risks, only a few actually manifest themselves. but the crisis idea does not differentiate among the manifestations themselves, as the consensus and conflict distinction does. this is not the place to try to settle conceptual disagreements, and we will not attempt to do so. anyone in these areas of study should acknowledge that there are different views, and the different proponents should try to make their positions as explicit as possible so that people do not continue to talk past one another. it is perhaps not amiss here to note that the very words or terms used to designate the core nature of the phenomena are etymologically very complex, with major shifts in meaning through time. 4 we are far from having standardized terms with similar connotations and denotations. a conceptual question that has come increasingly to the fore in the last decade or so is: have new kinds of crises and disasters begun to appear? we think it is fair to say that there are new types of risks and hazards. there are also structural changes in social settings. together, they raise the prospect of new types of disasters and crises. for example, we have seen the breakdown of modern transportation systems (think of the volcanic ash crisis that paralyzed air traffic in 2010; kuipers & boin, 2015). there have been massive information system failures, either through sabotage or as a result of technical breakdowns in linked systems. there have been terrorist attacks of a magnitude and scale not seen before.
we are living with the prospect of widespread illnesses and health-related difficulties that appear to be qualitatively different from traditional medical problems. we have just lived through financial and economic collapses that cut across different social systems around the world. many of these "new" disruptions have both traditional and non-traditional features: think of the heat waves in paris (lagadec, 2004) and chicago (klinenberg, 2002), the ice storms in canada (scanlon, 1998), but also the genocide-like violence in africa and the former yugoslavia. the chernobyl radiation fallout (1986) led some scholars and researchers to start asking if there was not something distinctively new about that disaster. the fallout was first openly measured in sweden. officials were mystified in that they could not locate any possible radiation source in their own country. later, radiation effects on vegetation eaten by reindeer past the arctic circle in northern sweden were linked to the nuclear plant accident in the soviet union. the mysterious origins, the crossing of national boundaries, and the emergent involvement of many european and transnational groups was not something researchers had typically seen together in other prior disasters. [footnote 4: see safire (2005), who struggles with past and present etymological meanings of "disaster", "catastrophe", "calamity" and "cataclysm"; also see murria (2004), who, looking outside the english language, found a bewildering set of words used, many of which had no equivalent meanings in other languages.] looking back, it is clear that certain other disasters also should have alerted all of us to the probability that new forms of adversity were emerging. in november 1986, water used to put out a fire in a plant involving agricultural chemicals spilled into the river rhine. the highly polluted river went through switzerland, germany, france, luxembourg and the netherlands. a series of massive fire smog episodes plagued indonesia in 1997 and 1998.
land speculations led to fire-clearing efforts that, partly because of drought conditions, resulted in forest fires that produced thick smog hazes that spread over much of southeast asia (barber & schweithelm, 2000). these disrupted travel, which in turn affected tourism as well as creating respiratory health problems, and led to political criticism of indonesia by other countries as multi-nation efforts to cope with the problem were not very successful. both of these occasions had characteristics that were not typically seen in traditional disasters. in the original version of this chapter, we spoke about "trans-system social ruptures". this term was an extension of the earlier label of "social ruptures" advanced by lagadec (2000, 2004). the term "transboundary" has since become the more conventional way to describe crises and disasters that jump across different societal boundaries, disrupting the social fabric of different social systems (ansell et al., 2010). the two prime and initial examples we used in the original chapter were severe acute respiratory syndrome (sars) and the spread of the sobig.f computer virus, both of which appeared in 2003. the first involved a "natural" phenomenon, whereas the second was intentionally created. since there is much descriptive literature available on both, we here provide only very brief statements about these phenomena. the new infectious disease sars appeared in the winter of 2003. apparently jumping from animals to humans, it originated in southern rural china, near the city of guangzhou. from there it moved through hong kong and southeast asia. it spread quickly around the world because international plane flights were shorter than its incubation period. at least 774 infected persons died. it hit canada with outbreaks in vancouver in the west and toronto far away in the east. in time, 44 persons died of the several hundred that got ill, and thousands of others were quarantined.
the city's healthcare system virtually closed down except for the most urgent of cases, with countless procedures being delayed or cancelled. the result was that there was widespread anxiety in the area, resulting in the closing of schools, the cancellation of many meetings and, because visitors and tourists stayed away, a considerable negative effect on the economy (commission report, 2004, p. 28). the commission report notes a lack of coordination among the multitude of private and public sector organizations involved, a lack of consistent information on what was really happening, and jurisdictional squabbling over who should be doing what. although sars vanished worldwide after june 2003, to this day it is still not clear why it became so virulent in the initial outbreak and why it has disappeared (yardley, 2005). the sobig.f computer virus spread in august 2003 (schwartz, 2003). it affected many computer systems and threatened almost all computers connected to the internet. the damage was very costly. a variety of organizations around the world, public and private, attempted to deal with the problem. initially uncoordinated, there eventually emerged in an informal way a degree of informational networking on how to cope with what was happening (koerner, 2003). 5 [footnote 5: in may 2017, the so-called wannacry virus affected millions of computers across the world with ransomware. many hospitals were affected.] what can we generalize from not only these two cases, but also others that we looked at later (ansell et al., 2010)? the characteristics we depict are stated in ideal-typical terms; that is, from a social science perspective, what the phenomena would be if they existed in pure or perfect form. first, the threat jumps across many international and national/political governmental boundaries. it crosses functional boundaries, jumping from one sector to another, and crossing from the private into public sectors (and sometimes back).
there was, for example, the huge spatial leap of sars from a rural area in china to metropolitan toronto, canada. second, a transboundary threat can spread very fast. cases of sars went around the world in less than 24 hours, with a person who had been in china flying to canada and quickly infecting persons in toronto. the spread of the sobig.f virus was called the fastest ever (thompson, 2004). this quick spread is accompanied by a very quick, if not almost simultaneous, global awareness of the risk because of mass media attention. third, there is no known central or clear point of origin, at least initially, along with the fact that the possible negative effects are at first far from clear. this stood out when sars first appeared in canada. there is much ambiguity as to what might happen. ambiguity is of course a major hallmark of disasters and crises (turner, 1978). it is more pervasive in transboundary crises, as information about causes, characteristics and consequences is distributed across the system. fourth, there is a potentially if not actually large number of victims, direct or indirect. the sobig.f computer virus infected 30% of email users in china, that is, about 20 million people, and about three fourths of email messages around the world were infected by this virus (koerner, 2003). in contrast to the geographic limits of most past disasters, the potential number of victims is often open ended in disruptions that span across boundaries. fifth, traditional "solutions" or approaches, embedded in local and/or professional institutions, will not always work. this is rather contrary to the current emphasis in emergency management philosophy. the prime and first locus of planning and managing cannot be the local community as it is presently understood. international and transnational organizations must typically be involved very early in the initial response (boin, ekengren, & rhinard, 2013). the nation state may not even be a prime actor in the situation.
sixth, although responding organizations and groups are major players, there is an exceptional amount of emergent behavior and the development of many informal ephemeral linkages. in some respects, the informal social networks generated, involving much information networking, are not always easily identifiable from the outside, even though they are often the crucial actors at the height of the crisis. in this section, we sketch several future scenarios that most likely would create transboundary disasters. even though some of the scenarios discussed might seem to be science fiction in nature, the possibilities we discuss are well within the realm of realistic scientific possibilities. the most obvious scenario revolves around asteroids or comets hitting planet earth (di justo, 2005). this has, of course, happened in the past, but even more recent impacts found no or relatively few human beings around. there are two major possibilities with respect to impact (mcguire, 2000; wisner, 2004). a landing in the ocean would trigger a tsunami-like impact in coastal areas. just thinking about how, when and where coastal population evacuations might have to be undertaken ahead of time is daunting. statistically less likely is a landing in a heavily populated area. but a terrestrial impact anywhere on land would generate very high quantities of dust in the atmosphere, which would affect food production as well as create economic disruption. this would be akin to the tambora volcanic eruption in 1815, which led to very cold summers and crop failures (post, 1977). the planning and management problems for handling something like this would be enormous. the explosion of the space shuttle columbia scattered debris over a large part of the united states.
this relatively small disaster, compared to a comet or asteroid impact, involved massive crossing of boundaries and a large number of potential victims, and could not be managed by local community institutions. the response required that an unplanned effort coordinating organizations that had not previously worked with one another and other unfamiliar groups, public and private (ranging from the us forest service to local red cross volunteers to regional medical groups), be informally instituted over a great part of the united states (beck & plowman, 2013; donahue, 2003). a second scenario is the inadvertent or deliberate creation of biotechnological disasters. genetic engineering of humans or food products is currently in its infancy. the possible good outcomes and products from such activity are tremendous (morton, 2005) and are spreading around the world (pollack, 2005). but the double-edged possibilities mentioned earlier are also present. there is dispute over genetically modified crops, with many european countries resisting and preventing their use and spread in their countries. while no major disaster or crisis from this biotechnology has yet occurred, there have been many accidents and incidents that suggest it will be only a matter of time. for example, in 2000, starlink corn, approved only for animal feed, was found in the food supply, such as in taco shells and other groceries. the same year, farmers in europe learned that they had unknowingly been growing modified canola using mixed seed from canada. in 2001, modified corn was found in mexico even though it was illegal to plant in that country. that same year, experimental corn that had been engineered to produce a pharmaceutical was found in soybeans in the state of nebraska. in several places, organic farmers found that it was impossible for them to keep their fields uncontaminated (for further details about all these incidents and other examples, see pollack, 2004).
noticeable is the leaping of boundaries and the uncertainty about the route of spreading. it does not take much imagination to see that a modified gene intended for restricted use could escape and create a contamination that could wreak ecological and other havoc. perhaps even more disturbing to some is genetic engineering involving human beings. the worldwide dispute over cloning, while currently perhaps more a philosophical and moral issue, does also partly involve the concern over creating flawed human-like creatures. it is possible to visualize worst-case scenarios, not at all far-fetched, that could be rather disastrous. it should be noted that even when there is some prior knowledge of a very serious potential threat, what might happen is still likely to be as ambiguous and complex as when sars first surfaced. this can be seen in the continuing major concern expressed in 2004 to mid-2005 about the possible pandemic spread of avian influenza, the so-called "bird flu" (nuzzo, 2004; thorson & ekdahl, 2005). knowledge of the evolution and spread of new pandemics, their effects, and whether presently available protective measures would work may well be very limited. knowledge that a pandemic might occur provides very little guidance on what might actually happen. it is possible to imagine the destruction of all food supplies for human beings, either through the inadvertent or deliberate proliferation of very toxic biotechnological innovations for which no known barriers to spreading exist. these potential kinds of global disasters are of relatively recent origin, and we may expect more such possibilities in the future. the human race is opening up potentially very catastrophic possibilities through innovations in nanotechnology, genetic engineering and robotics (barrat, 2013; joy, 2000; makridakis, 2017). a potential is not an actuality. but it would be foolish, from both a research and a planning and managing viewpoint, to simply ignore these and other doomsday possibilities.
the question might be asked if there is a built-in professional bias among disaster and crisis researchers and emergency planners to look for and to expect the worst (see mueller, 2004 for numerous examples). in the disaster and crisis area, this orientation is reinforced by the strong tendency of social critics and intellectuals to stress the negative. 6 it would pay to look at the past, see what was projected at a particular time, and then to look at what actually happened. the worldwide expectations about what would happen to computers at the turn of the century are now simply remembered as the y2k fiasco. it would be a worthy study to take projections by researchers about the future of ongoing crises and disasters, and then to look at what actually happened. in the 1960s, in the united states, scholars made rough analyses about the immediate future course of racial and university riots in the country. their initial appearances had not been forecast. moreover, there was a dismal record in predicting how such events would unfold (no one seemed to have foreseen that the riots would go from ghetto areas to university campuses), as well as that they rather abruptly stopped. we should be able to do a better job than we have so far in making projections about the future. but perhaps that is asking more of disaster and crisis researchers than is reasonable. after all, social scientists with expertise in certain areas, to take recent examples, failed completely to predict or forecast the non-violent demise of the soviet union, the peaceful transition in south africa, or the development of a market economy in communist china (cf. tetlock, 2005). a disaster or crisis always occurs in some kind of social setting. by social setting we mean social systems. these systems can and do differ in social structures and cultural frameworks. there has been a bias in disaster and crisis research towards focusing on specific agents and specific events.
thus, there is the inclination of social science researchers to say they studied this or that earthquake, flood, explosion and/or radioactive fallout. at one level that is nonsense. these terms refer to geophysical, climatological or physical happenings, which are hardly the province of social scientists. instead, those focused on the social in the broad sense of the term should be studying social phenomena. our view is that what should be looked at more is not the possible agent that might be involved, but the social setting of the happening. this becomes obvious when researchers have to look at such happenings as the 2004 southeast asia tsunami or locust infestations in africa. both of these occasions impacted a variety of social systems as well as involving social actors from outside those systems. in the tsunami disaster, this led to sharp cultural clashes regarding how to handle the dead between western european organizations, who came in to look mostly for the bodies of their tourist citizens, and local groups, who had different beliefs and values with respect to dead bodies (scanlon, personal communication with first author). the residents of the andaman islands lived at a level many would consider "primitive". at the time of the 2004 tsunami in southeast asia, they had no access to modern warning systems. but prior to the tsunami, members of the tribal communities saw signs of disturbed marine life and heard unusually agitated cries of sea birds. this was interpreted as a sign of impending danger, so that part of the population got off the beaches and retreated inland to the woods and survived intact (icpac report, 2006). there is a need to look at both the current social settings as well as certain social trends that influence disasters and crises. in no way are we going to address all aspects of social systems and cultural frameworks or their social evolution, either past or prospective.
instead, we will selectively discuss and illustrate a few dimensions that would seem to be particularly important with respect to crises and disasters. what might these be? let us first look at existing social structures around the world. what differences are there in authority relationships, social institutions and social diversity? as examples, we might note that australia and the united states are far more governmentally decentralized than france or japan (bosner, schoff, 2004). [footnote 6: for example, rees (2004), a cosmologist at cambridge university, gives civilization as we know it only a 50-50 chance of surviving the 21st century.] this affects what might or might not happen at times of disasters (it is often accepted that top-down systems have more problems in responding to crises and disasters). but what does it mean for the management of transboundary disruptions, which require increased cooperation between and across systems? will decentralized systems be able to produce "emergent" transboundary cooperation? as another example, mass media systems operate in rather different ways in china compared with western europe. this is important because, to a considerable extent, the mass communication system (including social media) is by far the major source of "information" about a disaster or a crisis. the media play a major role in the social construction of disasters and crises. for a long time in the former soviet union, even major disasters and overt internal conflicts by way of riots were simply not openly reported (berg, 1988). and only late in 2005 did chinese authorities announce that henceforth death tolls in natural disasters would be made public, but not for other kinds of crises (kahn, 2005). another social structural dimension has to do with the range of social diversity in different systems (bolin & stanford, 2006). social groupings and categories can be markedly different in their homogeneity or heterogeneity.
the variation, for instance, can be in terms of life styles, class differences or demographic composition. the aging population in western europe and japan is in sharp contrast to the very young populations in most developing countries. this is important because the very young and the very old incur disproportionately the greatest number of fatalities in disasters. human societies also differ in terms of their cultural frameworks. as anthropologists have pointed out, they can have very different patterns of beliefs, norms, and values. as one example, there can be widely held different conceptions of what occasions disasters and crises. the source can be attributed to supernatural, natural, or human factors as indicated earlier. this can markedly affect everything from what mitigation measures might be considered to how recovery and reconstruction will be undertaken. norms indicating what course of action should be followed in different situations can vary tremendously. for example, the norm of helping others outside of one's own immediate group at times of disasters and crises ranges from full help to none. thus, although the kobe earthquake was an exception, any extensive volunteering in disasters was very rare in japan (for a comparison of the us and japan, see hayashi, 2004) . in societies with extreme cross-cultural ethnic or racial differences, volunteering to help others outside of one's own group at times of disasters or crisis is almost unknown. social structures and cultural frameworks of course are always changing. to understand future disasters and crises, it is necessary to identify and understand trends that may be operative with respect to both social structures and cultural frameworks. in particular, for our purposes, it is important to note trends that might be cutting across structural and cultural boundaries. globalization has been an ongoing force. 
leaving aside the substantive disputes about the meaning of the term, what is involved is at least the increasing appearance of new social actors at the global level. with respect to disaster relief and recovery, there is the continuing rise of transnational or international organizations such as un entities, the european union, religiously oriented groupings, and the world bank (boin et al., 2013). with the decline of the importance of the nation state (guéhenno, 1995; mann, 1997), more, and new, social actors, especially of an ngo nature, are to be anticipated. the rise of the information society has enabled the development of informal social networks that globally cut across political boundaries. this trend will likely increase in the future. such networks are creating social capital (in the social science sense) that will be increasingly important in dealing with disasters and crises. at the cultural level, we can note the greater insistence of citizens that they ought to be actively protected against disasters and crises (beck, 1999). this is part of a democratic ideology that has spread around the world. that same ideology carries an inherent paradox: the global citizen may not appreciate government interference in everyday life, but expects government to show up immediately when acute adversity hits. finally, there has been the impact of the 9/11 attacks, especially on official thinking, not just in the united states but elsewhere also. this happening has clearly been a "focusing event" (as birkland, 1997 uses the term) and has changed, along some lines, certain values, beliefs and norms (smelser, 2004; tierney, 2005). there is a tendency, at least in the us after 9/11, to think that all future crises and disasters will be new forms of terrorism.
one can see this in the creation of the us department of homeland security, which repeated errors in approach and thinking that over 50 years of research have shown to be incorrect (e.g., an imposition of a command and control model, the assumption that citizens will react inappropriately to warnings, seeing organizational improvisation as bad managing; see dynes, 2003). these changes were accompanied by the downgrading of fema and its emphasis on mitigation (cohn, 2005). valid or not, such ideas influence thinking about transboundary disasters and crises (and not just in the united states). the ideas expressed above and the examples used were intended to make several simple points. they suggest, for instance, that an earthquake of the same magnitude will probably be reacted to differently in france than in iran. a riot in sweden will be a different phenomenon than one in myanmar. to understand and analyze such happenings requires taking into account the aspects just discussed. it is hard to believe that countries that currently have no functioning national government, such as somalia and the democratic republic of the congo, or marginally operative ones, such as afghanistan, will have the same reaction to disasters and crises as societies with fully functional national governments. different kinds of disasters and crises will occur in rather different social settings. in fact, events that today are considered disasters or crises were not necessarily so viewed in the past. in noting these cross-societal and cross-cultural differences, we are not saying that there are no universal principles of disaster and crisis behavior. there is considerable research evidence supportive of such principles. we would argue, for example, that many aspects of effective warning systems, problems of bureaucracies in responding, and the crucial importance of the family/household unit are roughly the same in all societies.
to suggest the importance of cross-societal and cross-cultural differences is simply to suggest that good social science research needs to take differences into account while at the same time searching for universal principles about disasters and crises. this is consistent with those disaster researchers and scholars (e.g., oliver-smith, 1994) who have argued that studies in these areas have badly neglected the historical context of such happenings. of course, this neglect of the larger and particularly historical context has characterized much social science research of any kind (wallerstein, 1995); it is not peculiar to disaster and crisis studies. one trend that affects the character of modern crises and disasters is what we call the social amplification of crises and disasters. pidgeon, kasperson, and slovic (2003) described a social amplification process with respect to risk. to them, risk depends not only on the character of the dangerous agent itself but also on how it is seen in the larger context in which it appears. the idea that there can be social amplification of risk rests on the assumption that aspects relevant to hazards interact with processes of a psychological, social, institutional, and cultural nature in such a manner that they can increase or decrease perceptions of risk (kasperson & kasperson, 2005). it is important to note that the perceived risk could be raised or diminished depending on the factors in the larger context, which makes this approach different from the vulnerability paradigm, which tends to assume the factors involved will be primarily negative ones. we have taken this idea and extended it to the behaviors that appear in disasters and crises. extreme heat waves and massive blizzards are hardly new weather phenomena (burt, 2004). there have recently been two heat waves, however, that have new elements in them. in 2003, a long-lasting and very intensive heat wave battered france.
nearly 15,000 persons died (and perhaps 22,000-35,000 in all of europe). particularly noticeable was that the victims were primarily socially isolated older persons. another characteristic was that officials were very slow in accepting the fact that there was a problem, and so there was very little initial response (lagadec, 2004). there was a similar earlier happening in 1995 in chicago, not much noticed until reported in a study seven years later (see klinenberg, 2002). it exhibited the same features, that is, older isolated victims, bureaucratic indifference, and mass media uncertainty. at the other temperature extreme, in 1998, canada experienced an accumulation of snow and ice that went considerably beyond the typical. the ice storm heavily impacted electric and transport systems, especially around montreal. the critical infrastructures being affected created chain reactions that reached into banks and refineries. at least 66 municipalities declared a state of emergency. such a very large geographic area was involved that many police were baffled that "there was no scene", no "ground zero", that could be the focus of attention (scanlon, 1998). there were also many emergent groups and informal network linkages (scanlon, 1999). in some ways, this was similar to what happened in august 2003, when the highly interconnected eastern north american power grid started to fail after three transmission lines in the state of ohio came into contact with trees and short-circuited (townsend & moss, 2005). this created a cascade of power failures that resulted in blackouts in cities from new york to toronto and eventually left around 50 million persons without power, which, in turn, disrupted everyday community and social routines (ballman, 2003). it took months of investigation to establish the exact path of failure propagation through a huge, complex network.
telecommunication and electrical infrastructures are entwined in complex, interconnected network systems spread over large geographic areas with multiple end users. therefore, localized disruptions can cascade into large-scale failures (for more details, see townsend & moss, 2005). such power blackouts have occurred, among others, in auckland, new zealand in 1998 (newlove, stern, & svedin, 2002); in buenos aires in 1999 (ullberg, 2004); in stockholm in 2001 and in siberian cities in 2001 (humphrey, 2003); in moscow in 2005 (arvedlund, 2005); in brazil in 2009 (brooks, 2009); in bangladesh in 2014 (al-mahmood, 2014); and in sri lanka in 2016 (lbo, 2016). all of these cases initially involved accidents or software and hardware failures in complex technical systems that generated severe consequences, creating crises with major economic and often political effects. these kinds of crises should have been expected. a national research council report (1989) forecast the almost certain probability of these kinds of risks in future network linkages. blackouts can also be deliberately created, for either good or malevolent reasons having nothing to do with problems in network linkages. employees of the now notorious enron energy company, in order to exploit western energy markets, indirectly but deliberately took offline a perfectly functioning las vegas power plant so that rolling blackouts hit plant-dependent northern and central california, with about a million residences and businesses losing power (egan, 2005). in the earliest days of electricity in new york city, the mayor ordered the power cut off when poor maintenance of exposed and open wires resulted in a number of electrocutions of citizens and electrical workers (jonnes, 2004). one should not think of blackouts as solely the result of mechanical or physical failures creating chain-like cascades. most disasters are still traditional ones. for example, four major hurricanes hit the state of florida in 2004.
we saw very little in what we found that required thinking about these hurricanes in major new ways, or even new ways of planning for or managing them. the problems, individual or organizational, that surfaced were the usual ones, and how to successfully handle them is fairly well known. more important, emergent difficulties were actually somewhat better handled than in the past, perhaps reflecting that officials may have had exposure to earlier studies and reports. thus, the warnings issued and the evacuations that took place were better than in the past. looting concerns were almost non-existent, and less than ten percent indicated possible mental health effects. the pre-impact organizational mobilization and placement of resources beyond the community level was also better. the efficiency and effectiveness of local emergency management offices were markedly higher than in the past. not everything was done well. long-known problematic aspects and failures to implement measures that research had suggested a long time ago were found. there were major difficulties in interorganizational coordination. the recovery period was plagued by the usual problems. even the failures that showed up in pre-impact mitigation efforts were known ones. the majority of contemporary disasters in the united states are still rather similar to most of the earlier ones. what could be seen in the 2004 hurricanes in florida was rather similar to what the disaster research center (drc) had studied there in the 1960s and the 1970s. as the electronic age moves beyond its birth and as other social trends continue, new elements may appear, creating new problems that will necessitate new planning. if and when that happens, we may have rather new kinds of hurricane disasters, but movement in that direction will be slow. as the famous sociologist herbert blumer used to say in his class lectures a long time ago, it is sometimes useful to check whatever is theoretically proposed against personal experience.
in 2005, an extensive snowstorm led to the closing of almost all schools and government offices in the state of delaware. this was accompanied by the widespread cancellation of religious and sporting events. there was across-the-board disruption of air, road and train services. all of this resulted in major economic losses in the millions of dollars. there were scattered interruptions of critical life systems. the governor issued a state of emergency declaration, and the state as well as local emergency management offices fully mobilized. to be sure, what happened did not fully rival what surfaced in the canadian blizzard discussed earlier. but it would be difficult to argue that it did not meet criteria often used by many to categorize disasters. what happened was not that different from what others and we had experienced in the past. in short, it was a traditional disaster. finally, at the same time we were thinking about the florida hurricanes and the delaware snowstorm, we also observed other events that many would consider disasters or crises. certainly, a bp texas plant explosion in 2005 would qualify. it involved the third largest refinery in the country. more than a hundred were injured and 15 persons died. in addition, there was major physical destruction of refinery equipment, and nearby buildings were leveled. there was full mobilization of local emergency management personnel (franks, 2005). at about the same time, there were landslides in the states of utah and california; a stampede with hundreds of deaths in a temple in bombay, india; train and plane crashes in different places around the world, as well as large bus accidents; a dam rupture which swept away five villages, bridges and roads in pakistan; recurrent coal mine accidents and collapses in china; recurrent false reports in asia about tsunamis that greatly disrupted local routines; sinkings of ferries with many deaths; and localized riots and hostage takings.
at least based on press reports, it does not seem that there was anything distinctively new about these occasions. they seem to greatly resemble many such prior happenings. unless current social trends change very quickly in hypothetical directions (e.g., marked changes as a result of biotechnological advances), for the foreseeable future there will continue to be many traditional local community disasters and crises (such as localized floods and tornadoes, hostage takings or mass shootings, exploding tanker trucks or overturned trains, circumscribed landslides, disturbances if not riots at local sport venues, large plant fires, sudden discoveries of previously unknown very toxic local waste sites, most airplane crashes, stampedes and panic flights in buildings, etc.). mega-disasters and global crises will be rare in a numerical and relative sense, although they may generate much mass media attention. for example, the terrorist attacks in european cities (madrid in 2004; london in 2005; paris in 2015; brussels, nice, munich and berlin in 2016; stockholm and manchester in 2017) were certainly major crises and symbolically very important, but numerically there are far more local train wrecks and car collisions every day in many countries in the world. the more localized crises and disasters will continue to be the most numerous, despite the rise of transboundary crises and disasters. what are some of the implications for planning and managing that result from taking the perspective we have suggested about crises and disasters? if our descriptions and analyses of such happenings are valid, there would seem to be the need for new kinds of planning and preparation for the management of future crises and disasters (ansell et al., 2010). non-traditional disasters and crises require some non-conventional processes and social arrangements. they demand innovative thinking "outside of the box" (boin & lagadec, 2000; lagadec, 2005).
this does not mean that everything has to be new. as said earlier, all disasters and crises share certain common dimensions or elements. for example, if early warning is possible at all, research has consistently shown that acceptable warnings have to come from a legitimately recognized source, have to be consistent, and have to indicate that the threat or risk is fairly immediate. these principles certainly pertain to the management of transboundary disruptions. actually, if traditional risks and hazards and their occasional manifestations were all we needed to be worried about, we would be in rather good shape. as already said several times, few threats actually manifest themselves in disasters. for example, of the 14,600-plus tornadoes appearing in the united states between 1952 and 1973, only 497 produced casualties, and 26 of these occasions accounted for almost half of the fatalities (noji, 2000). similarly, it was noted in 1993 that while about 1.3 million people had been killed in earthquakes since 1900, over 70% of them had died in only 12 occurrences (jones, noji, smith, & wagner, 1993, p. 19). we can say that risks and hazards and their relatively rare manifestations in crises and disasters are being coped with much better than they ever were even just a half-century ago. for example, there has been a remarkable reduction in certain societies of fatalities and even property destruction in some natural disaster occasions associated with hurricanes, floods and earthquakes (see scanlon, 2004 for data on north america). in the conflict area, the outcomes have been much more uneven, but even here, for example, the recurrence of world wars seems very unlikely. but transboundary crises and disasters require some type of transboundary cooperation. for example, let us assume that a health risk is involved. if international cooperation is needed, who talks with whom about what? at what time is action initiated?
who takes the lead in organizing a response? what legal issues are involved (e.g., if health is the issue, can health authorities close airports?)? there might be many experts and much technical information around; if so, and they are not consistent, whose voice and ideas should be followed? what should be given priority? how could a forced quarantine be enforced? what of ethical issues? who should get limited vaccines? what should the mass media be told, and by whom, and when? at a more general level of planning and managing, we can briefly indicate, almost in outline form, half a dozen principles that ought to be taken into account by disaster planners and crisis managers. first, a clear connection should be made between local planning and transboundary managing processes. there usually is a low correlation between planning and managing, even for traditional crises and disasters. but in newer kinds of disasters and crises, there are likely to be far more contingencies. planning processes need to be rethought and enhanced to help policymakers work across boundaries. second, the appearance of new emergent social phenomena (including groups and behaviors) needs to be taken into account. there are always new or emergent groups at times of major disasters and crises, but in transboundary events they appear at a much higher rate. networks and network links have to be particularly taken into account. third, there is the need to be imaginative and creative. the response to hurricane katrina suggests how hard it can be to meet transboundary challenges. but improvisation can go a long way. a good example is found in the immediate aftermath of 9/11 in new york. in spite of the total loss of the new york city office of emergency management and its eoc facility, a completely new eoc was established elsewhere and started to operate very effectively within 72 hours after the attack.
there had been no planning for such an event, yet around 750,000 persons were evacuated by water transportation from lower manhattan (kendra & wachtendorf, 2016; kendra, wachtendorf, & quarantelli, 2003). fourth, exercises and simulations of disasters and crises must take into account transboundary contingencies. most such training and educational efforts along such lines are designed to be like scripts for plays. that is a very poor model to use. realistic contingencies, unknown to most of the players in the scenarios, force the thinking through of unconventional options. even more important, policymakers need to be explicitly trained in the management of transboundary crises and disasters. fifth, planning should be with citizens and their social groups, and not for them. there is no such thing as the "public" in the sense of some homogeneous entity (blumer, 1948). there are only individual citizens and the groups of which they are members. the perspective from the bottom up is crucial to getting things done. this has nothing to do with democratic ideologies; it has instead to do with getting effective and efficient planning and managing of disasters and crises. related to this is that openness with information rather than secrecy is mandatory. this runs against the norms of most bureaucracies and other organizations. the more information the mass media and citizens have, the better they will be able to react and respond. however, all this is easier said than done. finally, there is a need to start thinking of local communities in ways different than they have been traditionally viewed. up to now, communities have been seen as occupying some geographical space and existing in some chronological time. instead, we should visualize the kinds of communities that exist today as also existing in cyberspace. these newer communities must be thought of as existing in social space and social time.
viewed this way, the newer kinds of communities can be seen as very important in planning for and managing disasters and crises that cut across national boundaries. to think this way requires moving away from the traditional view of communities of the past. this will not be easy given that the traditional community focus is strongly entrenched in most places around the world (see united nations, 2005). but "virtual reality communities" will be the social realities in the future. assuming that what we have written has some validity, what new research should be undertaken in the future on the topic of future disasters and crises? in previous pages, we suggested some future studies on specific topics that would be worth doing. however, in this section we want to outline research of a more general nature. for one, practically everything we discussed ought to be looked at in different cultures and societies. as mentioned earlier, there is a bias in our perspective that reflects our greater familiarity with and awareness of examples from the west (and even more narrowly western europe, the united states and canada). in particular, there is a need to undertake research in developing rather than only developed countries. and that includes at least some of these studies being undertaken by researchers and scholars from the very social systems that are being studied. the different cultural perspectives that would be brought to bear might be very enlightening, and enable us to see things that presently we do not see, being somewhat prisoners of our own culture. second, here and there in this chapter, we have suggested that it is important to study the conditions that generate disasters and crises. but there has to be at least some understanding of the nature of x before there can be a serious turn to ascertaining the conditions that generate x. we have taken this first step in this chapter. future work should focus more on the generating conditions.
a general model would involve the following ideas. the first is to look at social systems (societal, community and/or organizational ones), and to analyze how they have become more complex and tightly coupled. the last statement would be treated as a working hypothesis. if that turns out to be true, it could then be hypothesized that systems can break down in more ways than ever before. a secondary research thrust would be to see if systems also have developed ways to deal with or cope with threatening breakdowns. as such, it might be argued that what ensues is an uneven balance between resiliency and vulnerability. in studying contemporary trends, particular attention might be given to demographic ones. it would be difficult to find any country today where the population composition is not changing in some way. the increasing population density in high risk areas seems particularly important. another value in doing research on this topic is that much demographic data are of a quantitative nature. we mentioned financial and economic collapses cutting across different systems. how can financial collapse conceivably be thought of as comparable in any way to natural disasters and crises involving conflict? one simple answer is that for nearly a hundred years, one subfield of sociology has categorized, for example, panic flight in theater fires and financial panics as generic subtypes within the field of collective behavior (blumer, 1939; smelser, 1963) . both happenings involve new, emergent behaviors of a non-traditional nature. in this respect, scholars long ago put both types of behavior into the same category. although disaster and crisis researchers have not looked at financial collapses, maybe it is time that they did so. these kinds of happenings seem to occur very quickly, are ambiguous as to their consequences, cut across political and sector boundaries, involve a great deal of emergent behavior and cannot be handled at the community level. 
in short, what has to be looked for are genotypic characteristics, not phenotypic ones (perry, 2004). if whales, human beings, and bats can all be usefully categorized as mammals for scientific research purposes, maybe students of disasters should also pay less attention to phenotypic features. if so, should other disruptive phenomena like aids also be approached as disasters? our overall point is that new research along the lines indicated might lead researchers to see phenomena in ways different from how they had previously seen them. finally, we have said little at all about the research methodologies that might be necessary to study transboundary ruptures. up to now, disaster and crisis researchers have argued that the methods they use in their research are indistinguishable from those used throughout the social sciences. the methods are simply applied under circumstances that are relatively unique (stallings, 2002). in general, we agree with that position. but two questions can be raised. first, if social scientists venture into such areas as genetic engineering, cyberspace, robotics and complex infectious diseases, do they need to have knowledge of these phenomena to a degree that they presently do not have? this suggests the need for actual interdisciplinary research. social scientists ought to expand their knowledge base before venturing to study certain disasters and crises, especially the newer ones. there is something here that needs attention. in the sociology of science there have already been studies of how researchers from rather different disciplines studying one research question interact with one another and what problems they have. researchers in the disaster and crisis area should look at these studies. our view is that the area of disasters and crises is changing. this might seem to be a very pessimistic outlook. that is not the case.
there is reason to think, as we tried to document earlier, that human societies in the future will be able to cope with whatever new risks and hazards come into being. to be sure, given hazards and risks, there are bound to be disasters and crises. a risk-free society has never existed and will never exist. but while this general principle is undoubtedly true, it is not so with reference to any particular or specific case. in fact, the great majority of potential dangers never eventually manifest themselves in disasters and crises. finally, we should note again that the approach in this chapter has been a heuristic one. we have not pretended that we have absolute and conclusive research-based knowledge or understanding about all of the issues we have discussed. this is in line with alexander (2005, p. 97), who wrote that scientific research is never ending in its quest for knowledge, rather than trying to reach once-for-all final conclusions, and therefore "none of us should presume to have all the answers".

references:
confronting catastrophe: new perspective on natural disasters
the meaning of disaster: a reply to wolf dombrowsky
bangladesh power restored after nationwide blackout: bangladesh, india blame each other for power failure
managing transboundary crises: identifying the building blocks of an effective response system
blackout disrupts moscow after fire in old power station
the great blackout of 2003. disaster recovery
the seeds of civilization
trial by fire: forest fires
our final invention: artificial intelligence and the end of the human era
temporary, emergent interorganizational collaboration in unexpected circumstances: a study of the columbia space shuttle response effort
world risk society
uncovering soviet disasters
after disaster: agenda setting, public policy, and focusing events
historical overview of u.s. emergency management. unpublished draft prepared for college courses for emergency managers
collective behavior
public opinion and public opinion polling
the european union as crisis manager: patterns and prospects
governing after crisis: the politics of investigation, accountability and learning
what is a disaster? further perspectives on the question
preparing for the future: critical challenges in crisis management
the northridge earthquake: vulnerability and disaster
disaster preparedness: how japan and the united states compare
brazil government defends reliability of power grid after blackout leaves 60 million in dark
blame and hostility in disaster
extreme weather: a guide & record book
fema's new challenges. washington times
where most see a weather system, some see divine retribution
incident, accident, catastrophe: cyanide on the danube
environmental risks and hazards
asteroids are coming. wired
collapse
incident management teams: all-risk operations and management study
scapegoats, villains and disasters
blame in disaster: another look, another viewpoint
the lisbon earthquake in 1755: contested meanings in the first modern disaster
finding order in disorder: continuities in the 9-11 response
sociology of disasters: contributions of sociology to disaster research
disasters, collective behavior and societal organization
tapes show enron arranged plant shutdown
a new species of trouble: explorations in disaster, trauma, and community
handbook of crisis and emergency management
the sociology of disaster: definitions, research questions and measurements. continuation of discussion in a post-september 11 environment
bp texas plant had fire day before blast disaster
the end of the nation state
assessment of post-event management processes using multi-media disaster simulation (pp. 2-25-2-30)
mega-crises: understanding the prospects, nature, characteristics, and the effects of cataclysmic events
wounded cities: destruction and reconstruction in a globalized world
nature conservation and natural disaster management: the role of indigenous knowledge in kenya. report by igad climate prediction and applications centre (icpac)
the eighth plague: west africa's locust invasion
disaster: a "reality" or "construct"?
casualty in earthquakes
new york unplugged 1889
why the future doesn't need us. wired
china to shed secrecy over its natural disasters
the social contours of risk: risk communication and the social amplification of risk
american dunkirk: the waterborne evacuation of manhattan on 9/11
the evacuation of lower manhattan by water transport on september 11: an unplanned success
heat wave: a social autopsy of disaster in chicago
in computer security, a bigger reason to squirm
sociological inquiry and disaster research
exploring the eu's role as transboundary crisis manager: the facilitation of sense-making during the ash-crisis
ruptures creatrices. paris: editions d'organisation
understanding the french 2003 heat wave experience: beyond the heat, a multi-layered challenge
crossing the rubicon
non-semitic deluge stories and the book of genesis. a bibliographic and critical survey
sri lanka's island-wide blackout signals power supply reliability issue
the forthcoming artificial intelligence (ai) revolution: its impact on society and firms
has globalization ended the rise of the nation-state?
cities and catastrophes: coping with emergency in european history
sumatra earthquake and tsunami
disaster by design: a reassessment of natural hazards in the united states
biology's new forbidden fruit
a false sense of insecurity? regulation
urban catastrophes and imperial relief in the eighteenth-century british atlantic world: three case studies
a disaster by any other name
growing vulnerability of the public switched networks: implications for national security emergency preparedness. washington, d.c.
the moral cataclysm: why we struggle to think and feel differently about natural and man-made disasters
auckland unplugged. stockholm: ocb/the swedish agency for civil emergency planning
public health consequences of disasters
the next pandemic?
peru's five hundred year earthquake: vulnerability in historical context
anthropological research on hazards and disasters
normal accidents: living with high-risk technologies
disaster exercise outcomes for professional emergency personnel and citizen volunteers
introduction to emergency management in the united states. washington
what is a disaster? new answers to old questions
the exxon valdez oil spill and chronic psychological stress
social amplification of risk
can biotech crops be good neighbors
open-source practices for biotechnology
framing the news: the triggers, frames and messages in newspaper coverage
disaster studies: an analysis of the social historical factors affecting the development of research in the area
community crises: an exploratory comparison of the characteristics and consequences of disasters and riots
what is a disaster? london: routledge
disaster planning, emergency management and civil protection: the historical development of organized efforts to plan for and to respond to disasters. preliminary paper # 301
statistical and conceptual problems in the study of disasters. disaster prevention and management
dissensus and consensus in community emergencies: patterns of looting and property norms
our final hour: a scientist's warning: how terror, error and environmental disaster threaten humankind's future in this century-on earth and beyond
managing crises, threats, dilemmas, opportunities
coping with crises: the management of disasters, riots and terrorism
tsunami: the vocabulary of disaster
military support to civil authorities: the eastern ontario ice storm
emergent groups in established frameworks: ottawa carleton's response to the 1998 ice disaster
a perspective on north american natural disasters
wounded cities: destruction and reconstruction in a globalized world
a history of dams
crisis management in japan and the united states: creating opportunities for cooperation and dramatic change
some contributions german katastrophensoziologie can make to the sociology of disaster
old virus has a new trick: mailing itself in quantity
theory of collective behavior
as cultural trauma
cities and catastrophes: coping with emergency in european history
conflict in natural disaster: a codification of consensus and conflict theories
methods of disaster research
why things bite back
expert political judgment. princeton
virus underground
avian influenza: is the world on the verge of a pandemic? and can it be stopped?
the 9/11 commission and disaster management: little depth, less context, not much guidance
the social roots of risk: producing disasters, promoting resilience
facing the unexpected: disaster preparedness and response in the united states
telecommunications infrastructure in disasters: preparing cities for crisis communication
man-made disasters
the buenos aires blackout: argentine crisis management across the public-private divide
know risk
the resilient city: how modern cities recover from disasters
letter from the president. international sociological association newsletter 2
hydropolitics of the nile valley
the popular culture of disaster: exploring a new dimension of disaster research
handbook of disaster research
the societal implications of a comet/asteroid impact on earth: a perspective from international development studies
social science and natural hazards
after its epidemic arrival, sars vanishes
urban space as "field": aspects of late ottoman town planning after fire
cities and catastrophes: coping with emergency in european history

key: cord-015255-1qhgeirb authors: busby, j s; onggo, s title: managing the social amplification of risk: a simulation of interacting actors date: 2012-07-11 journal: j oper res soc doi: 10.1057/jors.2012.80 sha: doc_id: 15255 cord_uid: 1qhgeirb

a central problem in managing risk is dealing with social processes that either exaggerate or understate it. a longstanding approach to understanding such processes has been the social amplification of risk framework. but this implies that some true level of risk becomes distorted in social actors' perceptions. many risk events are characterised by such uncertainties, disagreements and changes in scientific knowledge that it becomes unreasonable to speak of a true level of risk. the most we can often say in such cases is that different groups believe each other to be either amplifying or attenuating a risk. this inherent subjectivity raises the question as to whether risk managers can expect any particular kinds of outcome to emerge. this question is the basis for a case study of zoonotic disease outbreaks using systems dynamics as a modelling medium. the model shows that processes suggested in the social amplification of risk framework produce polarised risk responses among different actors, but that the subjectivity magnifies this polarisation considerably.
as this subjectivity takes more complex forms, it leaves problematic residues at the end of a disease outbreak, such as an indefinite drop in economic activity and an indefinite increase in anxiety. recent events such as the outbreaks in the uk of highly pathogenic avian influenza illustrate the increasing importance of managing not just the physical development of a hazard but also the social response. the management of hazard becomes the management of 'issues', where public anxiety is regarded less as a peripheral nuisance and more as a legitimate and consequential element of the problem (leiss, 2001). it therefore becomes as important to model the public perception of risk as it is to model the physical hazard: to understand the spread of concern as much as the spread of a disease, for example. in many cases the perception of risk becomes intimately combined with the physical development of a risk, as beliefs about what is risky behaviour come to influence levels of that behaviour and thereby levels of exposure. one of the main theoretical tools we have had to explain and predict public risk perception is the social amplification of risk framework due to kasperson et al (1988). as we explain below, this framework claims that social processes often combine to either exaggerate or underplay the risk events experienced by a society. this results in unreasonable and disproportionate reactions to risks, not only among the lay public but also among legislators and others responsible for managing risk. but since its inception the idea of a 'real', objective process of social risk amplification has been questioned (rayner, 1988; rip, 1988) and, although work in risk studies and risk management continues to use the concept, it has remained problematic.
the question is whether, if we lose the notion of some true risk being distorted by a social process, we lose all ability to anticipate and explain perplexing social responses to a risk event in a way that is informative to policymakers. we explore this question in the context of risks surrounding the outbreaks of zoonotic diseases, that is, diseases that cross the species barrier to humans from other animals. recent cases of zoonotic disease, such as bse, sars, west nile virus and highly pathogenic avian influenza (hpai), have been some of the most highly publicised and controversial risk issues encountered in recent times. many human diseases are zoonotic in origin but in cases such as bse and hpai the disease reservoirs remain in the animal population. this means that a public health risk is bound up with risk to animal welfare, and often risk to the agricultural economy, to food supply chains and to wildlife. this in turn produces difficult problems for risk managers and policymakers, who typically want to avoid a general public amplifying the risk and boycotting an industry and its products, but also want to avoid an industry underestimating a risk and failing to practice adequate biosecurity. the bse case in particular has been associated with ideas about risk amplification (eg, eldridge and reilly, 2003) and continues to appear in the literature (lewis and tyshenko, 2009). other zoonoses, such as chronic wasting disease in deer herds, have also been seen as recent objects of risk amplification (heberlein and stedman, 2009). in terms of the social reaction, not all zoonoses are alike. endemic zoonoses like e. coli 157 do periodically receive public attention, for example following outbreaks at open farms and in food supply chains. but it is the more exotic zoonoses like bse and hpai that are more clearly associated with undue anxiety and ideas about social risk amplification.
yet these cases also showed how uncertain the best, expertly assessed, supposedly objective risk level can be, and this makes it very problematic to retain the idea of an objective process of social risk amplification. such cases are therefore an important and promising setting for exploring the idea that amplification is only in the heads of social actors, and for exploring the notion that this might nonetheless produce observable, and potentially highly consequential, outcomes in a way that risk managers need to understand. our study involved two main elements, the second of which is the main subject of this article:

1. exploratory fieldwork to examine how various groups perceived risks and risk amplification in connection with zoonoses like the avian influenza outbreaks in 2007;
2. a systems dynamics simulation to work out what outcomes would emerge in a system of social actors who attributed amplification to other actors.

in the remainder of the paper we first outline the fieldwork and its outcomes, and then describe the model and simulation. although the article concentrates on the latter, the two parts provide complementary elements of a process of theorising (kopainsky and luna-reyes, 2008): the fieldwork, subjected to grounded analysis, produces a small number of propositions that are built into the systems dynamics model, and the model both operationalises these propositions and explores their consequences when operationalised in this way. the modelling is a basis for developing theory that is relevant to policy and decision making, rather than supporting a specific decision directly. a discussion and conclusion follow. traditionally, the most problematic aspect of public risk perception has been seen as its sometimes dramatic divergence from expert assessments, and the way in which this divergence has been seen as an obstacle both to managing risks specifically and to introducing new technology more generally.
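the second element can be illustrated with a minimal sketch of the kind of model involved. the code below is purely illustrative and not the study's actual model: the actor labels ('public', 'industry'), the coupling weights and the scaling factors that stand for attributed amplification are invented assumptions, and a simple discrete-time loop stands in for a full stock-and-flow formulation.

```python
# illustrative sketch (not the authors' actual model): two actors each
# hold a perceived level of risk and adjust it toward a blend of the raw
# event signal and their reading of the other actor's response. an actor
# that believes the other amplifies risk scales the other's response
# down (<1); one that believes the other attenuates risk scales it up (>1).

def simulate(steps=100, dt=0.1, event_signal=1.0, adjust_rate=0.5,
             scale_public=0.8,    # industry discounts the public's response
             scale_industry=1.2): # the public inflates industry's response
    public, industry = 0.0, 0.0
    history = []
    for _ in range(steps):
        # targets are computed simultaneously from current values
        target_public = 0.5 * event_signal + 0.5 * scale_industry * industry
        target_industry = 0.5 * event_signal + 0.5 * scale_public * public
        public += adjust_rate * (target_public - public) * dt
        industry += adjust_rate * (target_industry - industry) * dt
        history.append((public, industry))
    return history

hist = simulate()
final_public, final_industry = hist[-1]
```

with the asymmetric scaling factors above, the two perceived-risk levels settle at different values even though both actors observe the same event signal, a simple analogue of the polarisation between actors discussed in this article.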
this has produced a longstanding interest in the individual perception of risk (eg, slovic, 1987) and in the way that culture selects particular risks for our attention (eg, douglas and wildavsky, 1982). it has led to a strong interest in risk communication (eg, otway and wynne, 1989). and it has been a central theme in the social amplification of risk framework (or sarf) that emerged in the late 1980s (kasperson et al, 1988). the notion behind social risk amplification, developed in a series of articles (kasperson et al, 1988; renn, 1991; burns et al, 1993; kasperson and kasperson, 1996), is that a risk event produces signals that are processed and sometimes amplified by a succession of social actors behaving as communication 'stations'. they interact and observe each other's responses, sometimes producing considerable amplification of the original signal. a consequence is that there are often several secondary effects, such as product boycotts or losses of institutional trust, that compound the effect of the original risk event. a substantial amount of empirical work has been conducted on or around the idea of social amplification, for example showing that the largest influence on amplification is typically organisational misconduct (freudenberg, 2003). it continues to be an important topic in the risk literature, not least in connection with zoonosis risks (eg, heberlein and stedman, 2009; lewis and tyshenko, 2009). there has always been a substantial critique of the basic idea of social risk amplification. its implication that there is some true or accurate level that becomes amplified is hard to accept in many controversial and contested cases where expertise is lacking or where there is no expert consensus (rayner, 1988). the phenomenon of 'dueling experts' is common in conflicts over environmental health, for instance (nelkin, 1995).
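the 'communication stations' idea can be made concrete with a toy calculation (the station names and gain values are invented for illustration, not taken from the framework): if each station applies a multiplicative gain to the signal it passes on, even modest per-station gains compound.

```python
# hypothetical illustration of the "communication stations" metaphor:
# a risk signal passes through a chain of social actors, each applying
# its own gain (>1 amplifies, <1 attenuates). station names and gains
# here are invented for illustration.

from functools import reduce

def transmitted_signal(initial_signal, gains):
    # the emerging signal is the initial signal scaled by the product
    # of every station's gain
    return reduce(lambda s, g: s * g, gains, initial_signal)

stations = {"media": 1.8, "interest_groups": 1.4, "regulator": 0.9}
final = transmitted_signal(1.0, stations.values())
# 1.0 * 1.8 * 1.4 * 0.9 = 2.268: modest per-station gains compound
# into a substantially amplified signal
```

the same structure also shows attenuation: setting most gains below one shrinks the transmitted signal proportionally.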
more generally, the concept of risk amplification seems to suggest that there is a risk 'signal' that is outside the social system and is somehow amplified by it (rayner, 1988) . this seems misconceived when we take the view that ultimately risk itself is a social construction (hilgartner, 1992) or overlay on the world (jasanoff, 1993) . and it naturally leads to the view that contributors to the amplification, such as the media (bakir, 2005) , need to be managed more effectively, and that risk managers should concentrate on fixing the mistake in the public mind (rip, 1988) , when often it may be the expert assessment that is mistaken. it thus becomes hard to sustain the idea that there is a social process by which true levels of risk get distorted. and this appears to undermine the possibility that risk managers can have a way of anticipating very high or very low levels of social anxiety in any particular case. once risk amplification becomes no more than a subjective judgment by one group on another social group's risk responses, it is hard to see how risk issues can be dealt with on an analytical basis. however, subjective beliefs about risk can produce objective behaviours, and behaviours can interact to produce particular outcomes. and large discrepancies in risk beliefs between different groups are still of considerable interest, whether or not we can know which beliefs are going to turn out to be more correct. in the remainder of this article we therefore explore the consequences of the idea that social risk amplification is nothing more than an attribution, or judgment that one social actor makes of another, and try to see what implications this might have for risk managers based on a systems dynamics model. before this, however, we describe the fieldwork whose principal findings were meant to provide the main structural properties of the model. 
the aim of the fieldwork was to explore how social actors reason about the risks of recent zoonotic disease outbreaks, and in particular how they make judgments of other actors systematically amplifying or attenuating such risks. this involved a grounded, qualitative study of what a number of groups said in the course of a number of unstructured interviews and focus groups. it follows the general principle of using qualitative empirical work as a basis for systems dynamics modelling (luna-reyes and andersen, 2003) . focus groups were used where possible, for both lay and professional or expert actors; individual interviews were used where access could only be gained to relevant groups (such as journalists) as individuals. the participants were selected from a range of groups having a stake in zoonotic outbreaks such as avian influenza incidents and are listed in table 1 . the focus groups followed a topic guide that was initially used in a pilot focus group and continually refined throughout the programme. they started with a short briefing on the specific topic of zoonotic diseases, with recent, well-publicised examples. the professional and expert groups were also asked to explain their roles in relation to the management of zoonotic diseases. participants were then invited to consider recent cases and other examples they knew of, discuss their reactions to the risks they presented, and discuss the way the risks had been, or were being, managed. their discussions were recorded and the recordings transcribed except in two cases where it was only feasible to record researcher notes. the individual interviews followed the same format. analysis of the transcripts followed a typical process of grounded theorising (glaser and strauss, 1967) , in which the aim was to find a way of categorising participants' responses that gave some theoretical insight into the principle of risk amplification as a subjective attribution. 
the categories were arrived at in a process of 'constant comparison' of the data and emerging, tentative categories until all responses have been satisfactorily categorised in relation to each other (glaser, 2002) . in glaser's words, 'validity is achieved, after much fitting of words, when the chosen one best represents the pattern. it is as valid as it is grounded'. our approach also drew on template analysis (king, 1998) in that we started with the basic categories of attributing risk amplification and risk attenuation, not a blank sheet. a fuller account of the analysis process and findings is given in a parallel publication (busby and duckett, 2012) . the first main theme to emerge from the data was the way in which actors privilege their own views, and construct reasons to hold on to them by finding explanations for other views as being systematically exaggerated or underplayed. it is surprising in a sense that this was relatively symmetrical. we expected expert groups to characterise lay groups as exaggerating or underplaying risk, but we also expected lay groups to use authoritative risk statements from expert groups and organisations of various kinds as ways of correcting their own initial and tentative beliefs. but there was no evidence for this kind of corrective process. the reasons that informants gave for why other actors systematically amplify or attenuate risk were categorised under five main headings: cognition, or the way they formed their beliefs; disposition, or their inherent natures; situation, or the particular circumstances; strategy, or deliberate, instrumental action; and structure, or basic patterns in the social or physical world. 
for example, one group saw the highly pathogenic avian influenza (hpai) outbreak at holton in the uk in 2007 as presenting a serious risk and explained the official advice that it presented only a very small risk as arising from a conspiracy between industry and government that the dispositions of the two naturally created. the second main theme was that some groups of informants often lacked specific and direct knowledge about relevant risks, and resorted to reasoning about other actors' responses to those risks. this reasoning involved moderating those observations with beliefs about whether other actors are inclined to amplify or attenuate risk. lay groups received information through the media but they had definite, and somewhat clichéd, beliefs about the accuracy of risk portrayals in the media, for example. thus some informants saw the media treatment of hpai outbreaks as risk amplifying and portrayed the media as having an incentive to sensationalise coverage, but others (particularly virologists) saw media coverage as risk attenuating out of scientific ignorance. a third theme was that risk perceptions often came from the specific associations that arose in particular cases. for example, the holton hpai outbreak involved a large food processing firm that had earlier been involved in dietary and nutritional controversies. the firm employed intensive poultry rearing practices and was also importing partial products from a processor abroad. this particular case therefore bound together issues of intensive rearing, global sourcing, zoonotic outbreaks and lifestyle risks-incidental associations that enabled some informants to perceive high levels of risk and indignation, and portray others as attenuating this risk. the fourth theme was that some actors have specific reasons to overcome what they see as other actors' amplifications or attenuations. they do not just discount another actor's distortions but seek to change them.
for example, staff in one government agency believed they had to correct farmers who were underplaying risk and not practising sufficient bio-security, and also correct consumers who were exaggerating risk and boycotting important agricultural products. such actors do not simply observe other actors' expressed risk levels but try to communicate in such a way as to influence these expressed levels-for example through awareness-raising campaigns. the fieldwork therefore pointed to a model in which actors like members of the public based their risk evaluations on what they were told by others, corrected in some way for what they expected to be others' amplifications or attenuations; discrepancies between their current evaluations and those of others would be regarded as evidence of such amplifications, rather than being used to correct their own evaluations. the findings also indicated a model in which risk managers would communicate risk levels in a way that was intended to overcome the misconceptions of actors like the public. these are the underpinning elements of the models we describe below. systems dynamics was a natural choice for this modelling on several grounds. first, there is an inherent stress on endogeneity in the basic idea of social risk amplification, and in particular in the notion that it is an attribution. risk responses first and foremost reflect the way people think about risks and think about the responses of other people to those risks. second, the explicit and intuitive representation of feedback loops was important to show the reflective nature of social behaviour: how actors see the impact of their risk responses on other actors and modify their responses accordingly. third, memory plays an important part in this, since the idea that some actor is a risk amplifier will be based on remembering their past responses, and the accumulative capacity of stocks in systems dynamics provides an obvious way of representing social memory.
developing a systems dynamics model on the grounded theory therefore followed naturally, and helped to add a deductive capability to the essentially inductive process of grounded theory (kopainsky and luna-reyes, 2008). kopainsky and luna-reyes (2008) also point out that grounded theory can produce large and rich sets of evidence and overly complex theory, making it important to have a rigorous approach to concentrating on small numbers of variables and relationships. thus, in the modelling we describe in the next section, the aim was to try to represent risk amplification with as little elaboration as possible, so that it would be clear what the consequences of the basic structural commitments might be. this meant reduction to the simplest possible system of two actors, interacting repeatedly over time during the period of an otherwise static risk event (such as a zoonosis outbreak). applications of systems dynamics have been wide-ranging, addressing issues in domains ranging from business (morecroft and van der heijden, 1992) to military (minami and madnick, 2009), from epidemiology (dangerfield et al, 2001) to diffusion models in marketing (morecroft, 1984), from modelling physical state such as demography (meadows et al, 2004) to mental state such as trust (martinez-moyano and samsa, 2008). applications to issues of risk, particularly risk perception, are much more limited. there has been some application of system dynamics to the diffusion of fear and sarf, specifically (burns and slovic, 2007; sundrani, 2007), but not to the idea of social amplification as an attribution. probably the closest examples to our work in the system dynamics literature deal with trust. luna-reyes et al (2008), for example, applied system dynamics to investigate the role of knowledge sharing in building trust in complex projects.
to make modelling tractable, the authors make several simplifying assumptions including the aggregation of various government agencies as a single actor and various service providers as another actor. each actor accumulates the knowledge of the other actor's work, and the authors explore the dynamics that emerge from their interaction. greer et al (2006) modelled similar interactions-this time between client and contractor-each having its own, accumulated understanding of a common or global quantity (in this case the 'baseline' of work in a project). martinez-moyano and samsa (2008) developed a system dynamics model to support a feedback theory of trust and confidence. this represented the mutual interaction between two actors (government and public) in a social system where each actor assesses the trustworthiness of the other actor over time, with both actors maintaining memories of the actions and outcomes of the other actor. our approach draws from all these studies, modelling a system in which actors interact on the basis of remembered, past interactions as they make assessments of some common object. the actors are in fact groups of individuals who are presumed to be acting in some concerted way. although this may seem questionable, there are several justifications for doing so: (1) the aim is not to represent the diversity of the social world but to explore the consequences of specific ideas about phenomena like social risk amplification; (2) in some circumstances a 'risk manager' such as a private corporation or a government agency may act very much like a unit actor, especially when it is trying to coordinate its communications in the course of risk events; (3) equally, in some circumstances it may be quite realistic to see a 'public' as acting in a relatively consensual way whose net, aggregate or average response is of more interest than the variance of response. in the following sections we develop a model in three stages.
in the first, we represent the conventional view of social risk amplification; in the second, we add our subjective, attributional approach in a basic form; and in the third we make the attributional elements more realistically complex. the aim is to explore the implications of the principal findings of the fieldwork, and our basic theoretical commitments to social risk amplification as an attribution, with as little further adornment as possible, while also incorporating elements shown in the literature to be important aspects of risk amplification. in the first model, shown in figure 1, we represent in a simple way the basic notion of social risk amplification. the fundamental idea is that risk responses are socially developed, not simply the sum of the isolated reactions of unconnected individuals. the model represents a population as being in one of two states of worry. this is simpler than the three-state model of burns and slovic (2007), since it is not clear what a third state would particularly add to the model. there is also no need for a recovering or removal state, as in sir (susceptible infectious recovered) models (sterman, 2004, p 303), since there is no concept of immunity and it seems certain that people can be worried by the same thing all over again. the flow from an unworried state to a worried state is a function of how far the proportion in the worried state exceeds that normally expected in regard to a risk event such as a zoonotic disease outbreak. members of the public expect some of their number to become anxious in connection with any risk issue: when, through communication or observation, they realise this number exceeds expectation, this in itself becomes a reason for others to become anxious. this observation of fellow citizens is not medium-specific, so it is a combination of observation by word-of-mouth, social networks and broadcast media. in terms of how this influences perception, various processes are suggested in the literature.
for example, there is a variety of 'social contagion' effects (levy and nail, 1993; scherer and cho, 2003) relevant to such situations. social learning (bandura, 1977) or 'learning by proxy' (gardner et al, 2000) may also well be important. we do not model specific mechanisms but only an aggregate process by which the observation of worry influences the flow into a state of being worried. the flow out of the worried state is a natural relaxation process. it is hard to stay worried about a specific issue for any length of time, and the atrophy of vigilance is reported in the literature (freudenberg, 2003) . there is also a base flow between the states, reflecting the way in which-in the context of any public risk event-there will be some small proportion of the population that becomes worried, irrespective of peers and public information. this base flow also has the function of dealing with the 'startup problem' in which zero flow is a potential equilibrium for the model (sterman, 2004, p 322) . the public risk perception in this model stands in relation to an expert, supposedly authoritative assessment of the risk. people worry when seeing others worry, but moderate this response when exposed to exogenous information-the expert or managerial risk assessment. what ultimately regulates worry is some combination of these two elements and it is this regulatory variable that we call a resultant 'risk perception'. unlike burns and slovic (2007) we do not represent this as a stock because it is not anyone's belief, and so need not have inertia. the fact that various members of the public are in different states of worry means that there is no belief that all share, as such. instead, risk perception is an emergent construct on which flows between unworried and worried states depend (and which also determines how demand for risky goods changes, as we explain below). 
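the two-state worry dynamic described above can be sketched in a few lines of code. this is an illustrative sketch only, not the paper's calibrated model: the parameter names and values (contagion_rate, relax_rate, expected_worried, base_flow) are assumptions chosen for exposition.

```python
# Minimal Euler-integration sketch of the two-state (unworried/worried)
# population model: inflow driven by worry exceeding expectation, outflow
# by natural relaxation, plus a small base flow to avoid the startup problem.
# All parameter values are illustrative assumptions.

def simulate_worry(days=100, dt=1.0, expected_worried=0.05,
                   contagion_rate=0.5, relax_rate=0.1, base_flow=0.001):
    """Return the fraction of the population worried at each time step."""
    worried = 0.0
    history = []
    for _ in range(int(days / dt)):
        unworried = 1.0 - worried
        # inflow: how far observed worry exceeds what is normally expected
        # for an event of this kind, plus the base flow
        excess = max(worried - expected_worried, 0.0)
        inflow = (contagion_rate * excess + base_flow) * unworried
        # outflow: natural relaxation (the atrophy of vigilance)
        outflow = relax_rate * worried
        worried += (inflow - outflow) * dt
        worried = min(max(worried, 0.0), 1.0)  # keep the fraction in [0, 1]
        history.append(worried)
    return history
```

with these illustrative values the worried fraction grows from the base flow alone until it crosses the expected level, after which the social observation term dominates and the stock settles toward an equilibrium where inflow and outflow balance.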
in the simplest model we simply take this resultant risk perception as a weighted geometric mean of the risk implied by the proportion of the population worried and the publicly known expert risk assessment. the expert assessment grows from zero toward a finite level, for a certain period, before decaying again to zero. this reflects a time profile for typical risk events-for example zoonotic outbreaks such as sars-where numbers of reported cases climb progressively and rapidly to a peak before declining (eg, leung et al, 2004). the units for risk perception and the expert assessment are arbitrary, but for exposition are taken as probabilities of individual fatality during a specific risk event. numerical values of the exogenous risk-related variables are based on an outbreak in which the highest fatality probability is 10^-3. but risks in a modern society tend to vary over several orders of magnitude. typically, individual fatality probabilities of 10^-6 are regarded as 'a very low level of risk', whereas risks of 10^-3 are seen as very high and at the limit of tolerability for risks at work (hse, 2001). because both assessed and perceived risks are likely to vary widely, discrepancies between risk levels are represented as ratios. the way in which the expert assessment is communicated to the public is via some homogenous channel we have simply referred to as the 'media'. in our basic model we represent in very crude terms the way in which this media might exaggerate the difference between expert assessment and public perception. but the sarf literature suggests there is no consistent relationship between media coverage and either levels of public concern or frequencies of fatalities (breakwell and barnett, 2003; finkel, 2008), so the extent of this exaggeration is likely to be highly case specific. it is also possible that the media have an effect on responses by exaggerating to a given actor its own responses.
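the weighted geometric mean is straightforward to express directly. in this sketch the weight given to the expert assessment is a free parameter (the name weight_expert is ours, for illustration); a geometric rather than arithmetic mean suits quantities, like these fatality probabilities, that vary over several orders of magnitude.

```python
# Resultant risk perception as a weighted geometric mean of the risk
# implied by the worried fraction and the expert assessment.
# weight_expert in [0, 1] is an illustrative free parameter.

def risk_perception(worry_implied_risk, expert_risk, weight_expert=0.5):
    """Weighted geometric mean of two positive risk levels."""
    return (worry_implied_risk ** (1.0 - weight_expert)) \
         * (expert_risk ** weight_expert)
```

for example, if the worried fraction implies a risk of 10^-2 while the expert assessment is 10^-4, equal weighting yields a resultant perception of 10^-3, the geometric midpoint.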
the public, for example, could have an inflated idea of how worried they are because newspapers or blogs portray it to be so. but we do not represent this because it is so speculative and may be indeterminable empirically. finally, the base model also represents the way in which risk perception influences behaviour, in particular the consumption of the goods or services that expose people to the risk in question. the 2007 holton uk outbreak of hpai, for example, occurred at a turkey meat processing plant and affected demand for its products; the sars outbreak affected demand for travel, particularly aviation services. brahmbhatt and dutta (2008) even refer to the economic disruption caused by 'panicky' public responses as 'sars type' effects. there are many complications here, not least that reducing consumption of one amenity as a result of heightened risk perception may increase consumption of a riskier amenity. air travel in the us fell after 9/11 but travel by car increased, and aggregate risk levels were said to have risen in consequence (gigerenzer, 2006). a further complication is that in certain situations, such as bank runs (diamond and dybvig, 1983), risk perceptions are directly self-fulfilling rather than self-correcting. the most common effect is probably that heightened risk perceptions will lead to reduced demand for the amenity that causes exposure, leading to reductions in exposure and reductions in the expert risk assessment, but it is worth noting that the effect is case-specific. the expert risk assessment is therefore not exogenous, and there is a negative feedback loop that operates to counteract rising risk perceptions. as we show later from the simulation outcomes, the base model shows a public risk perception that can be considerably larger than the expert risk assessment. it therefore seems to show 'risk amplification'.
but there is no variable that stands for risk in the model: there are only beliefs about risk (called either assessments or perceptions). the idea that social risk amplification is a subjective attribution, not an objective phenomenon, means that this divergence of risk perception and expert assessment does not amount to risk amplification. and it says that actors see others as being risk amplifiers, or attenuators, and develop their responses accordingly. this means that we need to add to sarf, and the basic model of the previous section, the processes by which actors observe, diagnose and deal with other actors' risk assessments or perceptions. what our fieldwork revealed was that the social system did not correct 'mistaken' risk perceptions in some simpleminded fashion. in other words, it was not the case that people formed risk perceptions, received information about expert assessment, and then corrected their perceptions in the correct direction. instead, as we explained earlier, they found reasons why expert assessments, and in fact the risk views of any other group, might be subject to systematic amplification or attenuation. they then corrected for that amplification. risk managers, on the other hand, had the task of overcoming what they saw as mistaken risk responses in other groups, not simply correcting for them. therefore in the second model, shown in figure 2 , we now have a subsystem in which a risk manager (a government agency or an industrial undertaking in the case of zoonotic disease outbreaks) observes the public risk perception in relation to the expert risk assessment, and communicates a risk level that is designed to compensate for any discrepancy between the two. commercial risk managers will naturally want to counteract risk amplification that leads to revenue losses from product and service boycotts, and governmental risk managers will want to counteract the risk amplification that produces panic and disorder. 
as beck et al (2005) report, the uk bse inquiry found that risk managers' approach to communicating risk 'was shaped by a consuming fear of provoking an irrational public scare'. the effect is symmetrical to the extent that the public in turn observes discrepancies between managerial communications and its own risk perceptions, and attributes amplification or attenuation accordingly. attributions are based on simple memory of past observations. this historical memory of another actor's apparent distortions is sometimes mentioned in the sarf literature (kasperson et al, 1988; poumadere and mays, 2003). this memory is represented as stocks of observed discrepancies, reaching a level m_i(t) for actor i at time t. the managerial memory, for example, is m_manager(t) = ∫ log(r_public(t)/r_expert(t)) dt. m_i(t) > 0 implies that actor i sees the other actor as exaggerating risk, while m_i(t) < 0 implies perceived attenuation. the specific deposits in an actor's memory are not retrievable, and equal weight is given to every observation that contributes to it. the perceived scale of amplification is the time average of memory content, and the confidence the actor has in this perceived amplification is 1 - e^(-|m_i(t)|), where confidence grows towards unity as the magnitude of the memory increases. the managerial actor modifies the risk level it communicates by the perceived scale of public amplification raised to the power of its confidence, while the public adjusts the communicated risk level it takes account of by the perceived scale of managerial attenuation raised to the power of its confidence in this. in the third model, in figure 3, we add three elements found in the risk amplification literature that become especially relevant to the idea of risk amplification as a subjective attribution: confusion, distrust and differing perceptions about the significance of behavioural change.
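the memory, confidence and correction mechanism just described can be sketched as follows. this is a reading of the garbled source formulas, so treat it as an assumption: we take the memory stock to accumulate the log of the ratio of the two risk views, the confidence to be 1 - e^(-|m|), and the correction to divide the communicated level by the perceived amplification ratio raised to the power of the confidence. all function names are ours.

```python
import math

def update_memory(memory, r_public, r_expert, dt=1.0):
    # accumulate the log-ratio of the two risk views; positive memory
    # means the other actor is seen as an exaggerator, negative as an
    # attenuator; every observation gets equal weight
    return memory + math.log(r_public / r_expert) * dt

def confidence(memory, elapsed_time):
    # perceived scale of amplification: time average of memory content;
    # confidence grows toward 1 with the magnitude of the memory
    scale = memory / elapsed_time
    conf = 1.0 - math.exp(-abs(memory))
    return scale, conf

def corrected_communication(r_expert, scale, conf):
    # the manager divides the level it communicates by the perceived
    # public amplification ratio raised to the power of its confidence
    return r_expert / (math.exp(scale) ** conf)
```

for instance, a manager who has watched the public express ten times the expert risk for five consecutive days would, with near-unit confidence, communicate roughly a tenth of the expert level to compensate.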
the confusion issue reflects the way an otherwise authoritative actor's view tends to be discounted if it shows evidence of confusion, uncertainty or inexplicable change. two articles in the recent literature on zoonosis risk (bergeron and sanchez, 2005; heberlein and stedman, 2009) specifically describe the risk amplifying effect of the authorities seeming confused or uncertain. the distrust issue reflects the observation that 'distrust acts to heighten risk perception . . . ' (kasperson et al, 2003), and that it is 'associated with perceptions of deliberate distortion of information, being biased, and having been proven wrong in the past' (frewer, 2003, p 126). a distinguishing aspect of trust and distrust is the basic asymmetry such that trust is quick to be lost and slow to be gained (slovic, 1993). in figure 3, the confusion function is based on the rate of change of attributed amplification, not the rate of change of the communication itself, since some change in communication might appear justified if correlated with a change in public perception: g = 1 - e^(-γ|c_g(t)|), where c_g(t) is the change in managerial amplification in unit time and γ is the confusion parameter. the distrust function is based on the extent of remembered attributed amplification: f = 1 - e^(-φ|m_g(t)|), where m_g(t) is the memory of managerial risk amplification at time t and φ is the distrust parameter. there is no obvious finding in the literature that would help us set the value of such a parameter. the combination of the confusion and distrust factors is a combination of an integrator and a differentiator. it is used to determine how much weight is given to managerial risk communications in the formation of the resultant risk perception. it is defined such that as distrust and confusion both approach unity, this weight w tends to zero: w = w_max(1 - g)(1 - f).
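the confusion and distrust functions, and the weight they jointly determine, can be sketched directly. the parameter values (gamma, phi, w_max) are illustrative assumptions; as the text notes, the literature gives no obvious basis for setting them.

```python
import math

def weight_on_communication(change_in_amplification,
                            memory_of_amplification,
                            gamma=0.1, phi=0.1, w_max=0.5):
    # confusion g: grows with how fast the attributed amplification
    # changes (a differentiator on the attribution)
    g = 1.0 - math.exp(-gamma * abs(change_in_amplification))
    # distrust f: grows with the remembered extent of attributed
    # amplification (an integrator on the attribution)
    f = 1.0 - math.exp(-phi * abs(memory_of_amplification))
    # weight given to managerial communications tends to zero as
    # confusion and distrust both approach one
    return w_max * (1.0 - g) * (1.0 - f)
```

when the manager's attributed amplification is stable and small, the weight stays at w_max; a volatile or persistently large attribution drives it toward zero, so the public falls back on observing each other's worry.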
this weight was exogenous in the previous model, so the effect of introducing confusion and distrust is also to endogenise the way observation of worry is combined with authoritative risk communication. the third addition in this model is an important disproportionality effect. the previous models assume that risk managers base their view of the public risk perception on some kind of direct observation-for example, through clamour, media activity, surveys and so on. in practice, the managerial view is at least partly based on the public's consumption of the amenity that carries the risk, for example the consumption of beef during the bse crisis, or flight bookings and hotel reservations during the sars outbreak. the problem is that when a foodstuff like beef becomes a risk object it may be easy for many people to stop consuming it, and such a response can, from the consumer's perspective, be proportionate to even a mild risk assessment. reducing beef consumption is an easy precaution for most of the population to take (frewer, 2003), so rational even when there is little empirical evidence that there is a risk at all (rip, 2006). yet this easy response of boycotting beef may be disastrous for the beef industry, and therefore seem highly disproportionate to the industry, to related industries and to government agencies supporting the industry. unfortunately there is considerable difficulty in quantifying this effect in general terms. recent work (mehers, 2011) looking at the effect of heightened risk perceptions around the avian influenza outbreak at a meat processing plant suggests that the influence on the demand for the associated meat products was very mixed. different regions and different demographic groups showed quite different reactions, for example, and the effect was confounded by actions (particularly price changes) taken by manufacturers and retailers.
our approach is to represent the disproportionality effect with a single exogenous factor: the relative substitutability of the amenity for similar amenities on the supply and demand side. the risk manager interprets any change in public demand for the amenity, multiplied by this factor, as being the change in public risk perception. if the change in this inferred public risk perception exceeds that observed directly (for example by opinion survey), then it becomes the determinant of how risk managers think the public are viewing the risk in question. this relative substitutability is entirely a function of the specific industry (and so risk manager) in question: there is no 'societal' value for such a parameter, and the effects of a given risk perception on amenity demand will always be case specific. for example, brahmbhatt and dutta (2008) reported that the sars outbreak led to revenue losses in beijing of 80% in tourist attractions, exhibitions and hotels, but of 10-50% in travel agencies, airlines, railways and so on. the effects are substantial but a long way from being constant. in this section we briefly present the outcomes of simulation with two aims: first to show how the successive models produce differences in behaviour, if at all, and thereby to assess how much value there is in the models for policymakers; second to assess how much uncertainty in outcomes such as public risk perception is produced by uncertainty in the exogenous parameters (figure 3 shows the model of the more complex attributional view of risk amplification). figure 4 shows the behaviour of the three successive models in terms of public risk perception and expert risk assessment. for the three models, the exogenous variables are set at their modal values, and when variables are shared between models they have the same values.
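the disproportionality mechanism can be sketched as a small function: the demand change scaled by substitutability is read as a perception change, and whichever of the inferred and directly observed changes is larger drives the managerial view. the function name and signature are illustrative assumptions.

```python
# Sketch of how the risk manager infers the change in public risk
# perception. A demand change scaled by the amenity's relative
# substitutability is treated as a perception change; the larger of
# the inferred and the directly observed (eg, surveyed) change wins.

def managerial_view_of_perception_change(demand_change,
                                         substitutability,
                                         surveyed_change):
    inferred = demand_change * substitutability
    if abs(inferred) > abs(surveyed_change):
        return abs(inferred)
    return abs(surveyed_change)
```

so for a highly substitutable amenity (easy to boycott), even a modest demand drop is read by the manager as a large swing in public perception, which is the heart of the disproportionality effect.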
the expert risk assessment is thus very similar for each model, as shown in the figure, rising towards its target level, falling as public risk perception reduces exposure, and then ceasing as the crisis ends around day 40. in the base model, the public risk perception is eight times higher than the expert assessment at its peak, which occurs some 20 days after that in the expert assessment. but once the attributional view of risk amplification is modelled, this disparity becomes much greater, and it occurs earlier. in the simple attributional system the peak discrepancy is over 40 times, and in the complex attributional system nearly 400 times, both occurring within 8 days of the expert assessment peak. thus the effect of seeing risk amplification as the subjective judgment of one actor about another is, given the assumptions in our models, to polarise risk beliefs much more strongly and somewhat more rapidly. we can no longer call the outcome a 'risk amplification' since, by assumption, there is no longer an objective risk level exogenous to the social system. but there is evidently strong polarisation. there is some qualitative difference in the time profile of risk perception between the three models, as shown in the previous figure where the peak risk perception occurs earlier in the later models. there are also important qualitative differences in the time profiles of stock variables amenity demand and worried population, as shown in figure 5 . when the attributional view is taken, both demand and worry take longer to recover to initial levels, and when the more complex attributional elements are modelled (the effects of mistrust, confusion and different perceptions of the meaning of changes in demand), the model indicates that little recovery takes place at all. the scale of the recovery depends on the value of the exogenous parameters, and some of these (as we discuss below) are case specific. 
but of primary importance is the way the weighting given to managerial communications or expert assessment is dragged down by public attributions. this result indicates the importance of a complex, attributional view of risk amplification. unlike the base model, in the attributional model it is much more likely there will be an indefinite residue from a crisis-even when the expert assessment of risk falls to near zero. figures 6 and 7 show the time development of risk perception in the third model in terms of the mean outcome with (a) 95% confidence intervals on the mean and (b) tolerance intervals for 95% confidence in 90% coverage over 1000 runs, with triangular distributions assigned to the exogenous parameters and plausible ranges based solely on the author's subjective estimates. the exogenous parameters fall into two main groups. the first group is of case-specific factors and would be expected to vary between risk events. this includes, for example, the relative substitutability of the amenity that is the carrier of the risk, and the latency before changes in demand for this amenity change the level of risk exposure. the remaining parameters are better seen as social constants, since there is no theoretical reason to think that they will vary from one risk event to another. these include factors like the natural vigilance period among the population, the normal flow of people into a state of worry, the latency before people become aware of a discrepancy between emergent risk perception and the proportion of the population that is in a state of worry. figure 6 shows the confidence and tolerance intervals with the social constants varying within their plausible ranges and the case-specific factors fixed at their modal values, and figure 7 vice versa. thus figure 6 shows the effect of our uncertainty about the character of society, whereas figure 7 shows the effect of the variability we would expect among risk events.

[figure 4: outcomes of the three models]
the substantial difference in mean risk perception between the two figures reflects large differences between means and modes in the distributions attributed to the parameters, which arises because plausible ranges sometimes cover multiple orders of magnitude (eg, the confusion and distrust constants both range from 1 to 100 with modes of 10, and the memory constant from 10 to 1000 with a mode of 100). these figures do not give a complete understanding, not least because interactions between the two sets of parameters are possible, but they show a reasonably robust qualitative profile. figure 8 shows the 'simple' correlation coefficients between resultant risk perception and the policy-relevant exogenous parameters over time, as recommended by ford and flynn (2005) as an indication of the relative importance of model inputs. at each day of the simulation, the sample correlation coefficient is calculated for each parameter over the 1000 runs. no attempt has been made to inspect whether the most important inputs are correlated, or to refine the model in the light of this. nonetheless the figure gives some indication of how influential the most prominent parameters are: the expert initial assessment level (ie, the original scale of the risk according to expert assessment), the expert assessment adjustment time (ie, the delay in the official estimate reflecting the latest information), the base flow (the flow of people between states of non-worry and worry in relation to a risk irrespective of the specific social influences being modelled) and the normal risk perception (the baseline against which the resultant risk perception is gauged, reflecting a level of risk that would be unsurprising and lead to no increase in the numbers of the worried). the first of these is case-specific, but the other three would evidently be worth empirical investigation given their influence in the model.
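the screening procedure just described (triangular sampling of the exogenous parameters, repeated runs, and a per-day sample correlation between each parameter and the resultant risk perception, after ford and flynn, 2005) can be sketched as follows. the stand-in model response, the parameter ranges and modes, and the run count are all assumptions for illustration, not the paper's.

```python
# Sketch of statistical screening: sample exogenous parameters from
# triangular distributions, run the model repeatedly, and compute for each
# day the sample correlation between each parameter and the resultant risk
# perception. The one-line model response is a stand-in, not the paper's
# model, and all ranges/modes are illustrative assumptions.
import random

def pearson(x, y):
    # plain sample correlation; returns 0.0 when either series is constant
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0.0 or syy == 0.0:
        return 0.0
    return sxy / (sxx * syy) ** 0.5

def screen(runs=200, days=30, seed=1):
    rng = random.Random(seed)
    samples, outputs = [], []
    for _ in range(runs):
        base_flow = rng.triangular(0.01, 0.5, 0.1)        # triangular(low, high, mode)
        normal_perception = rng.triangular(0.1, 10.0, 1.0)
        # stand-in response: perception grows with the base flow and shrinks
        # with the normal-perception baseline
        series = [base_flow * day / normal_perception for day in range(days)]
        samples.append((base_flow, normal_perception))
        outputs.append(series)
    # one row per day, one correlation per parameter (figure 8's structure)
    return [[pearson([s[p] for s in samples], [o[day] for o in outputs])
             for p in range(2)]
            for day in range(days)]

corrs = screen()
```

under these assumptions the base flow screens as positively influential and the normal risk perception as negatively influential, which is the kind of ranking figure 8 is read for.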
it is extremely difficult to test such outcomes against empirical data because cases differ so widely and it is unusual to find data on simultaneous expert assessments and public perceptions over short-run risk events like disease outbreaks, particularly outbreaks of zoonotic disease. but a world bank paper of 2008, on the economic effects of infectious disease outbreaks (primarily sars, a zoonotic disease), collected together data gathered on the 2003 sars outbreak, and some of it-primarily that of lau et al (2003)-showed the day-by-day development of risk perception alongside reported cases. figure 9 is based on lau et al's (2003) data, and shows the number of reported cases of sars as a proportion of the hong kong population at the time, together with the percentage of people in a survey expressing a perception that they had a large or very large chance of infection from sars. the two lines can be regarded as reasonably good proxies for the risk perception and expert assessment outcomes in figure 4 and they show a rough correspondence: a growth in both perception and expertly assessed or measured 'reality', followed by a decay, in which the perception appears strongly exaggerated from the standpoint of the expert assessment. the perceptual gap is about four orders of magnitude-greater than even the more complex attributional system in our modelling. moreover, the risk perception peak occurs early, and in fact leads the reported cases peak. it is our models 2 and especially 3 in which the perception peak occurs early (although it never leads the expert assessment peak).

the implications of the work

the social amplification of risk framework has always been presented as an 'integrative framework' (kasperson et al, 1988), rather than a specific theory, so there has always been a need for more specific modelling to make its basic concepts precise enough to be properly explored.
at the same time, as suggested earlier, its implication that there is some true level of risk that becomes distorted in social responses has been criticised for a long time. we therefore set out to explore whether it is possible to retain some concept of social risk amplification in cases where even expert opinion tends to be divided, the science is often very incomplete, and past expert assessment has been discredited. zoonotic disease outbreaks provide a context in which such conditions appear to hold. our fieldwork broadly pointed to a social system in which social actors of all kinds privilege their own risk views, in which they nonetheless have to rely on other actors' responses in the absence of direct knowledge or experience of the risks in question, in which they attribute risk amplification or attenuation to other actors, and in which they have reasons to correct for or overcome this amplification. to explore how we can model such processes has been the main purpose of the work we have described. and the resulting model provides specific indications of what policymakers need to deal with-a much greater polarisation of risk beliefs, and potentially a residue of worry and loss of demand after the end of a risk crisis. it also has the important implication that risk managers' perspectives should shift, from correcting a public's mistakes about risk to thinking about how their own responses and communications contribute to the public's views about a risk. our approach helps to endogenise the risk perception problem, recognising that it is not simply a flaw in the world 'out there'. it is thus an important step in becoming a more sophisticated risk manager or manager of risk issues (leiss, 2001). it is instructive to compare this model with models like that of luna-reyes et al (2008), which essentially involve a convergent process arising from knowledge sharing and the subsequent development of trust.
we demonstrate a process in which there is knowledge sharing, but a sharing that is undermined by expectations of social risk amplification. observing discrepancies in risk beliefs leads not to correction and consensus but to self-confirmation and polarisation. our findings in some respects are similar to greer et al (2006), who were concerned with discrepancies in the perceptions of workload in the eyes of two actors involved in a common project. such discrepancies arose not from exogenous causes but from unclear communication and delay inherent in the social system. all this reinforces the long-held view in the risk community, and of risk communication researchers in particular, that authentic risk communication should involve sustained relationships, and the open recognition of uncertainties and difficulties that would normally be regarded as threats to credibility (otway and wynne, 1989). the reason is not just the moral requirement to avoid the perpetuation of powerful actors' views, and not just the efficiency requirement to maximise the knowledge base that contributes to managing a risk issue. the reason is also that the structure of interactions can be unstable, producing a polarisation of view that none of the actors intended. actors engaged with each other can realise this and overcome it. a basic limitation to the use of the models to support specific risk management decisions, rather than give more general insight into social phenomena, is that there are very few sources of plausible data for some important variables in the model, such as the relaxation delay defining how long people tend to stay worried about a specific risk event before fatigue, boredom or replacement by worry about a new crisis leads them to stop worrying. it is particularly difficult to see where values of the case-specific parameters are going to come from.
other sd work on risk amplification at least partly avoids the calibration problem by using unit-less normalised scales and subjective judgments (burns and slovic, 2007). and one of the benefits of this exploratory modelling is to suggest that such variables are worthwhile subjects for empirical research. but at present the modelling does not support prediction and does not help determine best courses of action at particular points in particular crises. in terms of its more structural limitations, the model is a small one that concentrates specifically on the risk amplification phenomenon to the exclusion of the many other processes that, in any real situation, risk amplification is connected with. as such, it barely forms a 'microworld' (morecroft, 1988). it contrasts with related work such as that of martinez-moyano and samsa's (2008) modelling of trust in government, which similarly analyses a continuing interaction between two aggregate actors but draws extensively on cognitive science. however, incorporating a lot more empirical science does not avoid having to make many assumptions and selections that potentially stand in the way of seeing through to how a system produces its outcomes. the more elaborate the model the more there is to dispute and undermine the starkness of an interesting phenomenon. we have had to make only a few assumptions about the world, about psychology and about sociology before concluding that social risk amplification, as little more than a subjective attribution, has a strongly destabilising potential. this parsimony reflects towill's (1993) notion that we start the modelling process by looking for the boundary that 'encompasses the smallest number of components within which the dynamic behaviour under study is generated'. the model attempts to introduce nothing that is unnecessary to working out the consequences of risk amplification as an attribution.
as ghaffarzadegan et al (2011) point out in their paper on small models applied to problems of public policy, and echoing forrester's (2007) argument for 'powerful small models', the point is to gain accessibility and insight. having only 'a few significant stocks and at most seven or eight major feedback loops', small models can convey the counterintuitive endogenous complexity of situations in a way that policymakers can still follow. they are small enough to show systems in aggregate, to stress the endogeneity of influences on the system's behaviour, and to clearly illustrate how policy resistance comes about (ghaffarzadegan et al, 2011). as a result they are more promising as tools for developing correct intuitions, and for helping actors who may be trapped in a systemic interaction to overcome this and reach a certain degree of self-awareness (lane, 1999). the intended contribution of this study has been to show how to model a long-established, qualitative framework for reasoning about risk perception and risk communication, and in the process deal with one of the main criticisms of this framework. the idea that in a society the perception of a risk becomes exaggerated to the point where it bears no relation to our best expert assessments of the risk is an attractive one for policymakers having to deal with what seem to be grossly inflated or grossly under-played public reactions to major events. but this idea has always been vulnerable to the criticism that we cannot know objectively if a risk is being exaggerated, and that expert assessments are as much a product of social processes as lay opinion. the question we posed at the start of the paper was whether, in dropping a commitment to the idea of an objective risk amplification, there is anything left to model and anything left to say to policymakers. our work suggests that there is, and that modelling risk amplification as something that one social actor thinks another is doing is a useful thing to do.
some simple policy implications emerge from this modelling. for example, once you accept that there is no objective standard to indicate when risk amplification is occurring, actors are likely to correct for other actors' apparent risk amplifications and attenuations, instead of simple-mindedly correcting their own risk beliefs. this can have a strongly polarising effect on risk beliefs, and can produce residual worry and loss of demand for associated products and services after a crisis has passed. the limitations of the work point to further developments in several directions. first, there is a need to explore various aspects of how risk managers experience risk amplification. for example, the modelling, as it stands, concentrates on the interactions of actors in the context of a single event or issue-such as a specific zoonotic outbreak. in reality, actors generally have a long history of interaction around earlier events. we take account of history within an event, but not between events. a future step should therefore be to expand the timescale, moving from intra-event interaction to inter-event interaction. the superposition of a longer term process is likely to produce a model in which processes acting over different timescales interact and cannot simply be treated additively (forrester, 1987). it also introduces the strong possibility of discontinuities, particularly when modelling organisational or institutional actors like governments whose doctrines can change radically following elections-rather like the discontinuities that have to be modelled to represent personnel changes and consequences like scapegoating (howick and eden, 2004). another important direction of work would be a modelling of politics and power. it is a common observation in risk controversies that risk is a highly political construction-being used by different groups to gain resources and influence.
as powell and coyle (2005) point out, the systems dynamics literature makes little reference to power, raising questions about the appropriateness of our modelling approach to a risk amplification subject-both in its lack of power as an object for modelling, and its inattention to issues of power surrounding the use of the model and its apparent implications. powell and coyle's (2005) politicised influence diagrams might provide a useful medium for representing issues of power, both within the model of risk amplification and in the understanding of the system in which the model might be influential. the notion, as currently expressed in our modelling, that it is always in one actor's interest to somehow correct another's amplification simply looks naïve.

references

greenpeace v. shell: media exploitation and the social amplification of risk framework (sarf)
social learning theory
public administration, science and risk assessment: a case study of the uk bovine spongiform encephalopathy crisis
media effects on students during sars outbreak
world bank policy research working paper 4466, the world bank east asia and pacific region chief economist's office
social amplification of risk and the layering method
the diffusion of fear: modeling community response to a terrorist strike
incorporating structural models into research on the social amplification of risk: implications for theory construction and decision making
social risk amplification as an attribution: the case of zoonotic disease outbreaks
model-based scenarios for the epidemiology of hiv/aids: the consequences of highly active antiretroviral therapy
bank runs, deposit insurance, and liquidity
risk and culture: an essay on the selection of technological and environmental dangers
risk and relativity: bse and the british media
the social amplification of risk
perceiving others' perceptions of risk: still a task for sisyphus
statistical screening of system dynamics models
nonlinearity in high-order models of social systems
system dynamics-the next fifty years
institutional failure and the organizational amplification of risk: the need for a closer look
trust, transparency, and social context: implications for social amplification of risk
workers' compensation and family and medical leave act claim contagion
how small system dynamics models can help the public policy process
out of the frying pan into the fire: behavioral reactions to terrorist attacks
conceptualization: on theory and theorizing using grounded theory
the discovery of grounded theory
improving interorganizational baseline alignment in large space system development programs
socially amplified risk: attitude and behavior change in response to cwd in wisconsin deer
the social construction of risk objects: or, how to pry open networks of risk
on the nature of discontinuities in system dynamics modelling of disrupted projects
reducing risks, protecting people
bridging the two cultures of risk analysis
the social amplification and attenuation of risk
the social amplification of risk: assessing fifteen years of research and theory
the social amplification of risk
the social amplification of risk: a conceptual framework
qualitative methods and analysis in organizational research: a practical guide
closing the loop: promoting synergies with other theory building approaches to improve systems dynamics practice
social theory and systems dynamics practice
monitoring community responses to the sars epidemic in hong kong: from day 10 to day 62
the chamber of risks: understanding risk controversies
a tale of two cities: community psychobehavioral surveillance and related impact on outbreak control in hong kong and singapore during the severe acute respiratory syndrome epidemic
contagion: a theoretical and empirical review and reconceptualization
the impact of social amplification and attenuation of risk and the public reaction to mad cow disease in canada
collecting and analyzing qualitative data for system dynamics: methods and models
knowledge sharing and trust in collaborative requirements analysis
a feedback theory of trust and confidence in government
the limits to growth: the 30-year update
on the quantitative analysis of food scares: an exploratory study into poultry consumers' responses to the 2007 h5n1 avian influenza outbreaks in the uk food supply chain
dynamic analysis of combat vehicle accidents
strategy support models
systems dynamics and microworlds for policymakers
modelling the oil producers: capturing oil industry knowledge in a behavioural simulation model. modelling for learning, a special issue of the
science controversies: the dynamics of public disputes in the united states
risk communication: paradigm and paradox (guest editorial)
the dynamics of risk amplification and attenuation in context: a french case study
identifying strategic action in highly politicized contexts using agent-based qualitative system dynamics
muddling through metaphors to maturity: a commentary on kasperson et al. 'the social amplification of risk'
risk communication and the social amplification of risk
should social amplification of risk be counteracted
folk theories of nanotechnologists
a social network contagion theory of risk perception
perception of risk
perceived risk, trust and democracy
business dynamics: systems thinking and modelling for a complex world
understanding social amplification of risk: possible impact of an avian flu pandemic. masters dissertation, sloan school of management and engineering systems division

acknowledgements-many thanks are due to the participants in the fieldwork that underpinned the modelling, and to dominic duckett who carried out the fieldwork. we would also like to thank the anonymous reviewers of an earlier draft of this article for insights and suggestions that have considerably strengthened it. the work was partly funded by a grant from the uk epsrc.
key: cord-030984-2mqn4ihm
authors: davies, anna; hooks, gregory; knox-hayes, janelle; liévanos, raoul s
title: riskscapes and the socio-spatial challenges of climate change
date: 2020-08-20
journal: nan
doi: 10.1093/cjres/rsaa016
sha:
doc_id: 30984
cord_uid: 2mqn4ihm

anthropogenic climate change is increasing the frequency and severity of the physical threats to human and planetary wellbeing. however, climate change risks, and their interaction with other "riskscapes", remain understudied. riskscapes encompass different viewpoints on the threat of loss across space, time, individuals and collectives. this special issue of the cambridge journal of regions, economy, and society enhances our understanding of the multifaceted and interlocking dimensions of climate change and riskscapes. it brings together rigorous and critical international scholarship across diverse realms of inquiry under two, interlinked, themes: (i) governance and institutional responses and (ii) vulnerabilities and inequalities. the contributors offer a forceful reminder that when considering climate change, social justice principles cannot be appended after the fact. climate change adaptation and mitigation pose complex and interdependent social and ethical dilemmas that will need to be explicitly confronted in any activation of "green new deal" strategies currently being developed internationally. such critical insights about the layered, unequal and institutional dimensions of risks are of paramount import when considering other riskscapes pertaining to conflict and war, displaced people and pandemics like the 2019-2020 global covid-19 pandemic. this special issue of the cambridge journal of regions, economy, and society expands and enhances our understanding of the spatial, temporal, economic and sociological dimensions of climate change and riskscapes.
riskscapes "play out in time and space" (müller-mahn and everts, 2018, 87), encompassing different points of view on risk that highlight the "real-and-imagined geographies based on individual and collective experience, tradition and knowledge" (müller-mahn et al., 2013, 2025). these articulations of riskscapes were preceded by theorists examining the social forces giving rise to diffuse risk distribution and contestation in modernity (see rosa et al., 2014). by 2001, empirically inclined social scientists in the usa argued that "social, economic, and political forces inevitably create myriad [environmental] riskscapes in which overlapping air pollution plumes emitted by [various sources]…lead to cumulative exposures that pose health risks to diverse communities" (morello-frosch et al., 2001, 572; see also fitzpatrick and lagory, 2003). the 'riskscape' concept has subsequently been applied in a range of other contexts beyond air quality (müller-mahn and everts, 2018). however, explicit examinations of climate change riskscapes and their interaction with other riskscapes remain comparatively understudied (cf. gebreyes and theodory, 2018). this is surprising, given the attention to climate change risks and their configuration by leading international agencies, including the intergovernmental panel on climate change (ipcc). in its 2012 special report 'managing the risks of extreme events and disasters to advance climate change adaptation' (ipcc, 2012), for example, the ipcc acknowledges the challenges of understanding and managing the risks related to climate change. in particular, they emphasise that climate change impacts have social and economic as well as physical dimensions. so, while changes in the frequency and severity of the physical events generated by climate change will affect risk, the spatially diverse and temporally dynamic patterns of exposure and vulnerability also need to be considered.
indeed, they recognise that differences in vulnerability and exposure can arise from non-climatic factors and from multidimensional inequalities that are produced by uneven development processes. alongside this, the ipcc conclude, with a high degree of confidence, that climate-related hazards can exacerbate other stressors, often with negative outcomes for livelihoods, especially for those living in poverty. contemporary events, like the 2019-2020 global covid-19 pandemic, underscore the importance of understanding the layered dimensions of risks. as with climate vulnerabilities and public and environmental health (faber, 2015; gebreyes and theodory, 2018; klinenberg, 2002; solomon et al., 2016), emerging accounts of the covid-19 pandemic indicate that communities facing elevated threats to their lives and livelihoods are those who are elderly, experience chronic medical conditions, and are socially, politically and economically marginalised (cdc, 2020; manderson and levine, 2020; raffaetà, 2020). this special issue demonstrates how analysing riskscapes can advance our understanding of climate change dynamics. these insights can be extended to anticipate dynamics at work in the covid-19 pandemic. physically, climate change will have the greatest effect on communities with high population density. these communities are often near coastlines and in flood plains where they may face vulnerabilities of exposure to rising sea levels and other threats (see liévanos, 2020), or have overbuilt environments to a degree that greater meteorological variability has outsized effect (for example, superstorm sandy in new york) (faber, 2015; see also taylor and weinkle, 2020). the high density of people in such spaces also makes these communities more vulnerable to pandemics like covid-19. indeed, as of 1 may 2020, confirmed cases and associated deaths (18,069) within the usa were concentrated in the major population centre of new york city (dong et al., 2020).
in the cases of both climate and covid-19 vulnerability, the local overbearance of human settlements on the environment makes adaptation and mitigation still more difficult. overlap in terms of community economic risk and resilience between covid-19 and climate change is also substantial, highlighting the kind of 'double exposure' risks initially flagged by o'brien and leichenko (2000) and elaborated in leichenko and o'brien (2008). cascading economic effects from covid-19 (such as lockdowns) and climate change have the greatest effect on the poorest and socially marginalised communities. similar to the realm of climate-related inequalities in the usa (faber, 2015; klinenberg, 2002), the disparate impact of covid-19 on low income and under-represented minority communities has been extreme, with some states in the us reporting up to a third of deaths being among racial and ethnic minorities, and projections that the economic implications of the pandemic will also hit minority communities the hardest (cdc, 2020; hale, 2020). the disparate outcomes from underlying inequalities are projected to extend across the global south (wood, 2020). in separate realms of inquiry (for example, environmental studies, health disparities, economic geography, criminology, security and terrorism, disasters and spatial inequality), scholars have already documented that place matters when considering risk. exposure to risk and its consequences varies by where a social actor lives and works and by the multiplicity of other contexts in which they engage in social interaction. too often, however, these risks have been studied in isolation, for example, heightened environmental exposure studied in isolation from elevated exposure to crime, elevated exposure to health risks without concern for heightened economic risks and so forth (muller et al., 2018).
the ramifications of this approach have been laid bare, not least during the covid-19 crisis, which has had substantial interwoven health, economic, political and social implications. in the reshaped world responding to this crisis, considering the complexities, nuances and place-specificities of riskscapes and climate change is now more important than ever. using the concept of riskscapes highlights the social, temporal and spatial texture of risk (neisser and müller-mahn, 2018) and calls attention to interactions among risks and their "cumulative impact" across several dimensions of human life and the biophysical environment (renn, 2008; solomon et al., 2016). entire nations-concentrated in the global south-are at heightened risk of repeated cycles of war (collier, 2008). these conflicts scar the environment in profound and enduring ways (smith et al., 2014); these wars are often precipitated by environmental dislocation, with climate change playing an increasingly prominent role (dunlap and brulle, 2015). whether internally displaced or migrating across national borders, those forced to flee violence live with multiple risks and face an uncertain future (hooks, 2020; united nations high commissioner for refugees, 2017). these communities are also at risk of comparable long-term impacts (stalled economies, overtaxed medical systems and nutritional shortages) of pandemics like covid-19. the origins of these risks and the forces that sustain them often operate on and across multiple spatial scales-from the local to the global. by studying climate change and riskscapes, it becomes possible to understand the "interdependencies and spillovers between risk clusters" (renn, 2008, 5; see also beck, 2009; neisser and müller-mahn, 2018). prominent social theorists-most notably beck ([1986] 2005) and giddens (2015)-have drawn attention to the pervasive and growing importance of risk in contemporary societies.
this concern extends beyond the academy-it is of concern to the general public and policy makers. the growing reflexivity of late-modernity-made possible by an unprecedented capacity to gather, analyse and share information-not only creates unprecedented opportunities, it also creates unprecedented threats to individuals and entire societies. moreover, this capacity to compile and analyse information allows for a discourse centred on identifying, mapping and managing risks (beck, [1986] 2005, 2009; giddens, 2015). efforts to identify, avoid, mitigate and manage risks are transforming political and social institutions. building on his earlier work (beck, [1986] 2005), beck's (2009) "world risk society" thesis highlights the growing prominence of large-scale technological and industrial processes in modernity that has given rise to unstable global financial markets and climate change and associated threats for the broader public. extant social and political institutions are not equipped to manage such risks. these trends pose threats to the legitimacy of science and of political institutions because accurate risk analysis is often hindered by the indeterminable and uninsurable nature of human "manufactured" risks. because the "dangers posed by global warming aren't tangible, immediate or visible in the course of day-to-day life", our collective response will be halting and insufficient (giddens, 2009, 2). furthermore, the physical, social, political and economic risks are fundamentally interwoven. communities with lower social resilience-for example, those divided by substantial class, racial, ethnic, gender or cultural cleavages that undermine shared trust-lack the cohesion to effectively understand and respond to crisis (gotham and greenberg, 2014). the identification and response to risk occurs in an institutional context.
experts and political economic elites are often entrusted with the authority to classify and organise risk for the broader public (beck, [1986] 2005, 2009; clarke, 1989; freudenburg et al., 2009; perrow, [1984] 1999). in the context of environmental risk, it has been shown that scientific, corporate and state actors are tightly coupled in decision-making processes that are predicated on the dynamics of maintaining the prestige and objectivity of scientific inquiry, capital accumulation and state legitimacy (beck, 2009). furthermore, the institutional context of government policy and professional associations "incubates" the expert and elite organisation of risk in taken-for-granted norms of safety and "acceptable" codes of conduct-all of which are monitored and enforced by experts and elites (beamish, 2002; clarke, 1989; perrow, [1984] 1999; turner, 1978). for example, political and economic actors and institutions across the world are refashioning previous capital accumulation strategies and their spatial and ecological "fixes" through financial instruments and market-based mechanisms that seek to mitigate against and adapt people and places to environmental disasters, terrorist threats and the climate crisis (castree and christophers, 2015; gotham and greenberg, 2014; knox-hayes, 2013; ouma et al., 2018). these dynamics-that is, elite domination and the downplaying, normalising and obscuring of environmental risks-extend to military organisations and warfare (bonds, 2011). risk is culturally embedded. research into this aspect of risk illustrates the importance of attending to the local "historical legacy and interpretive contexts to perceptions of risk" (beamish, 2001, 11).
auyero and swistun's (2009) ethnographic and historical study of flammable, a poor and heavily contaminated argentine shantytown, is instructive for understanding how the normalisation of risk can shape how it is subsequently understood and responded to. in this case, the normalisation of exposure to environmental health risk was associated with an institutionally organised confusion over the cause of environmental contamination and how to motivate and articulate collective solutions to that contamination. alternatively, as norgaard (2011) illustrates in the context of local responses to climate change, normative modes of thought and practice normalise risk-perpetuating practices even in the face of mounting evidence of the dangers of climate change. similarly, in her work, knox-hayes (2014) demonstrates that cultural knowledge and practice shape the adoption of universalistic solutions to climate mitigation, even in economic domains such as with the creation of emissions markets. in order to be more effective, policy makers must consider the way different communities and societies make value judgements, assess risks and devise strategies to respond (knox-hayes, 2016). this special issue brings together international scholars at the forefront of empirical and conceptual thinking about riskscapes. their research refines and sharpens our understanding of climate change risk and riskscapes, integrating understandings of risk from across diverse realms of inquiry under two interlinked themes: governance and institutional responses, and vulnerabilities and inequalities. this also acknowledges how governance and state in/action can exacerbate risks, a theme that is addressed by several of the articles in this issue. it is no surprise to find a strong focus across the articles in this special issue on how different locations and communities attempt to manage the sum of complex combinations of risks.
understanding the form, dynamics and impacts of governing riskscapes lies at the heart of much intellectual inquiry and practical action. ravi raman's tightly contextualised article is focussed on the rebuilding of post-flood kerala, india (raman, 2020). the physical scars of flood events are visible reminders not only of risks and their spatiality, but also of how institutions respond to that spatiality. raman documents how various agencies, including local people and state and non-state actors, influence each phase of rescue, relief and rebuilding. local fisher-folk communities, for example, draw on their cultural knowledge of climate change and risk to rescue flood victims just as others have done elsewhere (see knox-hayes, 2016). in addition to these local governing alliances, raman also flags the role of international institutional alliances-for example with the un agencies-in supporting humanitarian interventions. he argues these diverse but coordinated responses create a state-society synergy sensitised to the "ecospatiality" of riskscapes in kerala. the ecospatiality concept recognises that building resilience in the aftermath of an extreme event requires new consideration of the arrangement and assemblage of spaces that make dwelling and habitation more attuned to the specific geographical features and potential risks of a region. rather than relying on rehabilitation and restoration to previous conditions, the goal of the ecospatiality state-society synergy is to redevelop while laying the foundation for a more resilient, egalitarian and ethical society. raman's article has broader implications, as its findings align with the broader suggestion that the covid-19 pandemic, for example, presents societies with the opportunity to move forward with new technologies for enhanced energy efficiency and resilience (investing away from fossil fuels where markets have experienced collapse) rather than to return to the old normal (worland, 2020).
meanwhile, iain white and judy lawrence explicitly focus on the governance challenges posed by climate change in new zealand (white and lawrence, 2020). they emphasise that as climate change impacts are dynamic, uncertain and contested, they pose significant challenges to the ways in which policy actors imagine and manage risks across space and time. identifying and applauding major efforts to reflect the latest insights from risk research in national policy in new zealand, they nonetheless find significant challenges remain to be resolved if appropriate governance and implementation strategies are to be successfully designed and implemented. white and lawrence demonstrate how tensions emerge between the theory of riskscapes, which emphasises that risks are always in a state of becoming, and the practices of risk management, which seek to periodically "fix" risks, through plans, for example, in order to address them. while this process is a familiar feature of public policy design and implementation in many arenas, time lags between establishing scientific consensus for a particular course of action, developing policy and implementing that policy tend to be extensive with respect to climate change. positively, white and lawrence's historically situated paper identifies a greater impetus for policy development, more extensive political consensus on action and widespread use of the language of contingency, uncertainty and dynamism in new zealand now than ever before. while this is narrowing the gap between riskscape theory and climate change policy practice, issues remain with regard to connecting complex climate change riskscape imaginaries-comprised of an assemblage of biophysical, social, economic or political forces-to governance arrangements that are able to address them.
there have long been academic calls for anticipatory risk governance (see fuerth, 2009; rosa et al., 2014; quay, 2010) that can recognise and address the dynamism and uncertainty of climate change riskscapes. however, the means and mechanisms for operationalising anticipatory governance remain unclear. nowhere is the art of anticipation more foregrounded than within the realm of re/insurance. in their paper examining the riskscapes of re/insurance in florida, zac taylor and jessica weinkle use riskscapes theory to draw critical insights from existing re/insurance debates (taylor and weinkle, 2020). they extend müller-mahn et al.'s (2018) argument that riskscape thinking must directly contend with machinations of power, working to reveal the asymmetrical, ongoing and always-political nature of re/insurance. ultimately, taylor and weinkle argue against the expansion of re/insurance markets to govern climate risks precisely because the riskscapes approach demonstrates the importance of geographical contingencies and the limits to marketisation given the contested and shifting nature of "extra-market" considerations. jonathon everts and katja müller reveal the pivotal role that extra-market considerations played in the dynamic german coal industry (everts and müller, 2020), building on recent calls to bridge conceptualisations of riskscapes and scale (aalders, 2018; müller-mahn et al., 2018). climate change appears more and more like a "boundary object" (star and griesemer, 1989), a common reference point for conflicting parties who invoke different meanings about climate change for different reasons. in this article, the authors examine the intertwined scales of riskscapes of coal mining, regional economic development and climate change in germany.
the authors bring brenner's (2001) notion of "politics of scaling" into the analysis of riskscapes and look at the ways in which coal mining structures and embeds deep socio-ecological vulnerabilities across time. they argue that understanding the intricate relationship within and between different riskscapes and practices of scaling (from the local to the global) provides us with an analytical handle for deciphering the complexities of economic and environmental politics. further, they argue that doing so points us toward the transformative potential that lies within rescaling risks. risk has temporal implications through the politics of scaling. detlef müller-mahn, mar moure and million gebreyes also take on the themes of multiscalar politics and anticipation within governance in their study of riskscapes and climate change in the african cases of ethiopia and côte d'ivoire (müller-mahn et al., 2020). they argue for greater scrutiny of how space is structured by multiscalar connections and uneven power relations. in particular, they urge practitioners and scholars to give increased consideration to how the future is made present through risk management. this includes taking into account how futures are envisioned differently by diverse actors both substantially and in terms of time horizons, the extent to which different visions incorporate risk (or not), and ultimately, how visions and risk assessments translate into agency. comparing and contrasting the politics of anticipation with a riskscapes framing, müller-mahn and colleagues conclude that thinking in terms of riskscapes, with its focus on the nested and contested nature of the future, better acknowledges the diversity of material conditions, discourses and practices. the work is particularly trenchant for efforts to address climate change. to succeed, communities will need to adapt governance frameworks and social policies to address present conditions.
the structure, distribution and flexibility of governance have a profound impact on the capacity of communities to conduct successful climate change mitigation and adaptation. governance must be attentive to the social, political, economic and environmental dimensions of crises like climate change to assess the risks that these bring and to generate integrated responses. the capacity of governing institutions to think systematically and holistically in rapidly evolving situations and to do so across a range of socially and politically constructed temporal and spatial scales is also critical. the successes and failures of governing structures are addressed throughout the above articles in this special issue. however, the efficacy of various responses must be evaluated holistically and across different dimensions of risk, and for different groups given their relation to a given risk. here the human and social dynamics of riskscapes come to the fore. prior research debates the nature and durability of vulnerable social and spatial positions within climate change riskscapes and other socio-spatial manifestations of risk. models of the "world risk society" (beck, 2009) or analogous "urban risk societies" (elliott and frickel, 2013; romero-lankao et al., 2013) posit and observe a "social boomerang" dynamic of diffuse risk distribution in the post-war era. in contrast, five contributions to this special issue illuminate the significance of local and regional "risk settings" (müller-mahn and everts, 2013) or "contextual environments" (leichenko and o'brien, 2008) that structure and interlock multiple exposures to climate change-related risks and other risks over time. these articles highlight the stark inequalities and differentiated vulnerabilities that individuals and communities face from climate change riskscapes.
ann tickamyer and siti kusujiarti interrogate three disasters experienced in indonesia to identify how socio-spatial risks are differentiated within particular contexts (tickamyer and kusujiarti, 2020). power and gender roles, relations and practices are shown to be significant in mapping the socio-spatial relations of the resulting riskscapes. tickamyer and kusujiarti use their empirical analysis to demonstrate how such insights might inform plans for climate resilience in indonesia. using three case studies from indonesia, the 2004 aceh tsunami, the 2006 bantul earthquake and the 2010 merapi eruption, the authors illustrate how spatial, social, cultural, religious and political structures affect the experience of disaster. across the cases, gender relations, social capital and community resources are intertwined as drivers of risk and resilience across varying riskscapes. in particular, where women are empowered, have greater equality and participation in public spheres as well as opportunities to develop social capital and leadership, their families and communities have greater response and resilience to hazards. building resilience therefore requires greater social and gender equality. from the standpoint of governance, it also requires a shift from market-based policies with hierarchical and predatory political systems to systems immersed in civic engagement and community cohesion. in a similarly nuanced fashion, jesse divalli and tracy perkins analyse neighbourhoods in southwest washington, dc, usa as sites of disparate expert and lay risk identification and mitigation practice (divalli and perkins, 2020). they find that this context results in what müller-mahn and everts (2013) describe as a "space of tension".
highlighting the disproportionate power held by the city, which has resulted in development plans that rarely account for residents' visions for their homes, divalli and perkins argue that the neoliberal growth strategy is being resolved largely in favour of the district and the developers it favours. while this effort is likely to produce a more resilient city in some ways, according to certain metrics, it may also displace many current residents in the process, perhaps illustrating the birth of a new form of climate-proofed gentrification. as documented widely within climate change policy (see hügel and davies, 2020), divalli and perkins find tensions between the rhetoric of planning strategies that claim to speak for all residents and the reality of limited public participation and engagement within them. this is particularly the case for low-income residents of colour, who have long experienced disadvantage in other contexts. a false picture of increased resilience will be generated as people become displaced through the district's strategies, pushing vulnerable residents beyond their administrative borders through what divalli and perkins call "resilience through attrition". conceptualising resilience-related redevelopment as a risk to vulnerable populations in this way pushes considerations of climate change and riskscapes into social interactions at the neighbourhood scale. policy and design professionals are increasingly urged to consider and mitigate these risks within resilience planning, as is seen in the metro vancouver region in canada examined by lily yumagulova in this special issue (yumagulova, 2020). this article uncovers barriers and enablers for resilience planning across canada's multiscalar governance systems. in particular, yumagulova uses empirical material to unpack the underlying mechanisms for producing, reproducing and disrupting unequal patterns of risk across the region in british columbia.
she does this by examining the role of the historic and existing flood management regimes in enabling and constraining collaborative planning capacities to address future climate risks such as sea-level rise. yumagulova finds clear evidence of historically differentiated treatment of indigenous communities in terms of flood risk transfer in the area. in particular, the analysis shows that the flood management regime favoured investments in structural flood protection (such as dykes) for colonial settlements, while leaving indigenous communities exposed to flood risk. these historical decisions left a legacy of underdevelopment for contemporary indigenous residents that led to further inequalities. yumagulova makes a strong call for a greater presence of indigenous voices in future risk and resilience planning in the region if these inequalities are to be addressed. raoul liévanos's case study of stockton, california, usa offers three main contributions to the climate change and riskscape literature (liévanos, 2020). first, it synthesises a conceptual framework from prior research that attends to how elites' use of racial categories and racist real estate investment and development patterns structure the spatial concentration of separate and interlocking climate, environmental and economic risk exposures over time in what he calls "high-risk neighbourhoods". the study conceptualises climate, environmental and economic riskscapes, respectively, "as spatially varying vulnerabilities of exposure to sea-level rise, flooding, and adverse housing market incorporation and displacement". liévanos draws on archival sources spanning 1930-2010 and an innovative coupling of geographic information systems and qualitative comparative analysis.
he uncovers how different "configurations" or "recipes of risk" (grant et al., 2010, 480) involve the devaluation of particular racial groups and racially classified spaces, threatening housing market positions, unequal flood protections and elevated risk of exposure to climate-related sea-level rise in stockton's high-risk neighbourhoods. moving continents, but staying with the theme of vulnerability and inequalities, yvonne braun draws on the concepts of riskscapes and "syndemics" (de waal and whiteside, 2003; singer, 2011) to explore the (un)intended consequences of development, which can exacerbate existing vulnerabilities for communities in the southern african country of lesotho (braun, 2020). braun finds that poverty, food insecurity, inequality and health risks co-occur, particularly in relation to regional climate stressors and to the impacts of large-scale infrastructural development such as the lesotho highlands water project (lhwp), one of the five largest transnational construction projects active in the world. in lesotho, it was largely small-scale farming families who absorbed the most direct losses and stresses to their livelihoods from the lhwp project, and yet it is these same people who experience the greatest risk from current and future climate changes (twomlow et al., 2008). braun argues that it is international and national development agendas which have created a series of displacements to increase rather than reduce vulnerability and risks. instead, she urges those who govern to adopt a more holistic approach to their work: an approach that deliberately seeks to anticipate and mitigate interactive, syndemic relationships and their consequences. even as we applaud the contributions of this special issue, we recognise gaps and challenges. some of the most daunting challenges and disruptive changes being set in motion by climate change and the arc of human history have received too little attention.
while climate change is a central theme throughout this special issue, several topics associated with it are not fully addressed. in this section, we consider connections between climate change riskscapes and three of these: conflict and warfare; migration and displacement; and pandemics. in the first decades of the 21st century, wars have been fought by and in middle-income countries (for example, iraq, colombia and syria) and among the world's least developed countries (for example, central and eastern africa) (collier, 2008; hooks, 2020; kaldor, 2012; mann, 2018). there is a growing body of literature which identifies the risks of climate change as a threat multiplier, linking the onset and dissemination of warfare and conflict with rapid environmental change (barnett and adger, 2007; gleick, 2014; kelley et al., 2015). beyond the human suffering and infrastructural damage, wars degrade the social capital and institutional integrity needed to secure the peace. as a result, cycles of violence bring repeated bouts of conflict to people and places who can least afford it (collier, 2008; hooks, 2016). multi-sided wars and conflict among irregular armed forces create overlapping risks, including (but not limited to) environmental degradation, predatory commandeering of the economy, gender- and age-related coerced labour and enslavement, and systematic degradation of infrastructure (health, transportation and communication). these threats intersect with existing riskscapes, amplifying risks, heightening inequality and crippling efforts to mitigate risks. to be sure, the risk society literature has focussed on the social, political and economic dynamics of war, militaristic and terrorist threats, and risk management strategies (for example, amoore and de goede, 2008; beck, 2009; heng, 2006; rasmussen, 2006; rosa et al., 2012; williams, 2008).
in addition, recent riskscape literature has begun to illuminate the reproduction of social vulnerabilities and ensuing uneven redevelopment trajectories following terrorist attacks (gotham and greenberg, 2014), and it features propositions about the salience of war games and analogous performative exercises and simulations for making future riskscapes present and the target of anticipatory action (neisser and runkel, 2017). yet the riskscape literature has not explicitly addressed war and conflict. because riskscapes highlight the temporal and spatial texture of risk and because they shed light on the multiplicity of perceptions and meaning assigned to these risks, adapting the riskscape framework to the study of war and conflict offers great potential. this is particularly the case in the context of climate change, as there is widespread concern and mounting evidence that climate change stresses will exacerbate distributional and political tensions, making wars more likely still (see dunlap and brulle, 2015). a second gap centres on the issues of displacement and migration. people are on the move. in sympathetic accounts of globalisation, this mobility allows migrants to seek out economic opportunities and more hospitable political contexts to pursue their aspirations. but this geographic mobility also reveals the darker side of globalisation. in the first decades of the 21st century, many migrants have been forced to migrate, are fleeing intolerable oppression and are escaping dangerous war zones (hooks, 2020). the united nations high commissioner for refugees (unhcr) reports an alarming and rapid increase in the total number of "people of concern". from 1993 to 2003, there were approximately 20 million persons of concern, but this number more than tripled by 2017. as of 2018, there were more than 71 million persons of concern (united nations high commissioner for refugees, 2017).
the rate of growth is striking, and the total now represents a significant share of the world's population. if displaced persons constituted a country, it would rank as the 20th most populous in the world. for both scholarly and substantive reasons, it is unfortunate that the riskscape literature has not been deployed to understand this humanitarian challenge. these mass migrations pose conceptual challenges that the concept of riskscape is well-suited to address. in the riskscapes literature-including the contributions to this special issue-there is a strong tendency to focus on a specific geographic area, the people who reside there, and/or institutional dynamics that contribute to the displacement of people from those areas. displaced people (internal and external) and migrants are moving across riskscapes at a variety of scales across the globe. what risks do they perceive? how do they (attempt to) cope with them? what voice do they have in identifying risks and institutionalising mitigation? how and why do elites, experts and risk management institutions respond to such migrants? taking full advantage of the fluidity and flexibility of the riskscape concept, and its attention to multiple and interlocking risks, would help us better understand this startling increase in displacement and forced migration on its own terms and in relation to climate-induced displacement and migration. indeed, sea-level rise is displacing vulnerable social groups and coastal settlements (hardy et al., 2017; maldonado et al., 2013; shearer, 2012). further, overtly and covertly racist and nativist state policies, organisations and narratives threaten the lives and livelihoods of climate-change migrants, particularly from the global south, sometimes under the guise of resilience-based climate change adaptation and mitigation (baldwin, 2016; methmann, 2014).
because it is likely that the number of climate refugees will continue to grow in coming decades, these contributions will be all the more valuable in coming years. consider, for example, the amazon rainforest as a riskscape. the indigenous peoples who have long lived in the rainforest are being displaced. these encroachments impose multiple layers of vulnerabilities, threatening their culture, their livelihood and their health. the risks are perceived and can be examined-as the contributors to this special issue have done in a range of settings. but the amazon rainforest is a riskscape with global implications. in the riskscape literature, including several contributors to this special issue, emphasis is placed on the varied meanings and perceptions of risk. in the case of climate change and pandemics, these differences can have global implications. in the amazon rainforest, cutting a tree or clear-cutting a hectare of forest may seem insignificant in the context of a vast-seemingly endless-rainforest. on the grounds that it is emblematic of and a requisite for progress, brazil's president, jair bolsonaro, aggressively promotes and defends this clear-cutting. this clear-cutting may push deforestation to a tipping point that changes regional weather patterns and the global climate (piotrowski, 2019). furthermore, in the anthropocene epoch, just as climate change can be attributed to human activities, so human activity accounts for the increasing rate of zoonotic spillover (wood et al., 2012). and, if these practices lead to zoonotic spillover, they could set in motion one or more global pandemics. current risk management organisations and institutions cannot see viruses and cannot detect the spillover from one host to the next. nor can these organisations and institutions immediately perceive the connection between individual acts and the global climate. for both climate change and pandemics, we are at risk of calamity.
by the time that the effects are sufficiently "visible and acute" to spur concerted action, "it will, by definition, be too late" (giddens, 2009, 2). the events of spring 2020 bring to the fore the dynamics and disruption of pandemics. the emergence, impact and aftermath of pandemics intersect with and transform existing riskscapes and the people who navigate them. in her contribution to this special issue, braun (2020) discusses syndemics (a concept emerging from the public health literature) and explores its implications for riskscapes. syndemics draws attention to the multiple and overlapping factors that shape health disparities. by weaving in the concept of riskscape, she highlights spatial and temporal processes that reinforce and exacerbate syndemics. while braun's focus was on a megadevelopment project, the theoretical synthesis that she advances provides guidance for understanding the origins, context and aftermath of the covid-19 pandemic. the spillover of viruses from animal to human populations is and has been a threat to human health, and before the covid-19 pandemic, this threat was accelerating. as of 2012, over the many millennia that the human species has existed, only 219 viruses were known to infect humans (woolhouse et al., 2012). given this modest total, the rate of novel infections in recent years is striking. as reported by the world health organization (2020), there is every reason to believe that this alarming rate of novel disease emergence will continue and may well accelerate. it is estimated that there are over 1.5 million unknown viruses in animal reservoirs; it is believed that over 600,000 (perhaps as many as 850,000) of these viruses have the potential to infect humans (carroll et al., 2018).
various social, political and economic activities are encroaching on and destroying fragile ecologies around the globe, and in so doing, they are at the same time (i) stressing mammalian and bird populations that are host to hundreds of thousands of viruses, (ii) dramatically increasing interactions between domesticated animals and these mammalian and bird populations and (iii) increasing direct human interactions with these animals and the viruses they host (carroll et al., 2018). consider the dynamics underway in the amazon basin. vast tracts of the amazon forest are being cleared (often burnt) to make way for large agricultural operations-ranching prominent among them. the amazon rainforest is an ecological hotspot: thousands of species are found in this forest-and only in this forest. as their unique ecosystem shrinks or disappears altogether, animals will be stressed (many will go extinct) and they (and the tens of thousands of viruses they host) will come into sustained contact with livestock and with people. since so few of these viruses have been identified and studied, it is impossible to predict the potential for zoonotic spillover and the emergence of a dangerous pandemic (carroll et al., 2018)-but it is certain that the risk of spillover is heightened by the destruction of the rainforests and other such biodiverse habitats. moreover, these encroachments and destructive practices-and the associated risks-are taking place around the globe. as with climate change, pandemics such as covid-19 tend to bring less attention to the destructive practices and behavioural changes needed to shift course, and instead draw attention towards technological solutions. managing covid-19, including the closure and reopening of communities, depends on the rate and capacity to develop, manufacture and disseminate technologies including testing capabilities and vaccines.
shortages in critical medical equipment, like personal protective equipment including n95 masks and respirators, exacerbated the medical crisis in countries like italy, and these shortages forced reconsideration of the operation of global supply chains (raffaetà, 2020; zhou, 2020). while in some instances political, economic and social institutions may be adequate, the covid-19 pandemic has shed light on the areas where they need improvement. further, the covid-19 pandemic may illuminate how communities with low levels of trust and social solidarity may not sustain lockdowns, allowing the virus to spread or rebound (manderson and levine, 2020; raffaetà, 2020). in the case of climate change, low levels of trust and an inability to commit to and implement shared sacrifice will impede or delay the painful physical (energy transition, re-zoning) and economic measures necessary for mitigation and adaptation. the poor and socially marginalised also have the least capacity to work remotely or relocate. further, they have limited financial reserves to overcome the effects of covid-19 (for example, purchasing staple goods at inflated prices) and climate change (repairing buildings after harsh storms). these concerns are particularly daunting for the communities around the globe that are (i) currently locked down in response to the covid-19 crisis and (ii) also exposed to overlapping climate-related environmental risks such as flooding, fires, hurricanes and extreme heat events. these events will require considerable institutional flexibility and rapid political response. from the political standpoint, resilience is not an attribute possessed by a community in isolation. in the context of these multiple challenges, resilience will likely depend on communities gaining access to state resources and their needs being anticipated and addressed in state policies. community resilience is of decisive importance.
where the challenge is of a global scale, as is the case with covid-19 and climate change, community resilience can be magnified or undermined by the larger state. in the case of covid-19, hard-hit communities cannot secure the inflow of needed medical equipment on their own. in some instances, national-level responses have exacerbated these shortages; in other instances, the national response has procured needed supplies and bolstered local efforts, thereby strengthening community resilience. as effective countermeasures (testing regimes, lockdowns and contact tracing) must be organised at higher levels (state/province or national), community resilience will be amplified or undermined by the larger political context. in a similar fashion, the communities experiencing the worst effects of climate change may well lack the ability to secure the inflow of resources needed for mitigation (energy reduction and shifts to renewables) and adaptation (anti-flood measures, increased water storage capacity). community resilience can minimise these shortfalls, but, once again, community-level options will be limited or augmented by the larger political context and the state's commitment to systematically address climate effects. as such, the concern for risk goes well beyond the realms of environmental issues and climate change: issues of crime, terrorism, economic (in)security and health equity are increasingly framed in terms of risk, and of efforts to mitigate risk. in the immediate context of the covid-19 pandemic, the biomedical impact of the pandemic is transforming riskscapes around the globe. the facility of contagion, the severity of illness and the likelihood of death vary by where one lives, who one is, and one's socioeconomic resources. writing in the spring of 2020, it is impossible to predict the long-term impacts of the pandemic (assuming optimistically that a vaccine successfully tamps down infection in 2021 and thereafter).
businesses will fail, unemployment has soared and may remain extremely high, the food supply and household-level food security are at risk, and global trade and travel may fall precipitously. each of these developments will play out unevenly across human societies; each will heighten vulnerabilities for many people. together, this collection demonstrates, both empirically and conceptually, the relevance of adopting a riskscapes frame when considering climate change risks and their governance. it extends our understanding of riskscapes with respect to territorial coverage, with articles focussing on case studies drawn from diverse territories including india, new zealand, the usa, indonesia, canada, germany and lesotho. conceptually, contributors have critically engaged with and extended the riskscape concept in several respects. first, contributors have developed explicit connections between the temporality and spatiality of riskscapes (everts and müller; müller-mahn, moure and gebreyes; and white and lawrence). second, they have displayed a concern with social inequalities and pushed the riskscape literature to come to terms with gender (tickamyer and kusujiarti), race (divalli and perkins, liévanos), indigeneity (yumagulova) and class (divalli and perkins, liévanos, braun). third, in different ways, each contributor to this special issue displayed a concern for power differences, highlighting the manner in which some individuals, social groups and organisations exert disproportionate influence in the definition of risk and the characterisation of risk in time and space. fourth, they have drawn out the linkages between and among understandings of riskscapes, imagining alternatives and social justice.
these insights include envisioning a more equitable and more inclusive planning of: (re)insurance markets in florida (taylor and weinkle), megadevelopment projects in africa (braun; müller-mahn, moure and gebreyes), climate change mitigation in british columbia (yumagulova), efforts to anticipate and "fix" climate change risks in new zealand (white and lawrence), urban renewal and gentrification in washington, dc (divalli and perkins), understanding the layers of risk in stockton, california (liévanos), coming to terms with climate change risks for small-scale agriculture (braun; müller-mahn et al.; tickamyer and kusujiarti) and energy transition challenges (everts and müller). these insights into social justice further enrich and add texture to the concept of riskscapes. the riskscape literature in general, and contributors to this issue specifically, have emphasised social justice. it is not simply the case that there are distinct, at times incompatible, interpretations of risks and riskscapes. social justice focuses on the institutionalised recognition of risks, the steps taken (if any) to mitigate risks and the imposition of costs for these mitigation efforts. the articles in the issue draw together lessons from cases around the globe. although riskscapes highlight the unique characteristics and context of specific places, they also draw together important lessons of governance, planning and socio-ecological engagement that are critical to building resilience at the local, regional, national and global scales. communities need governance structures that are adaptive, inclusive and forward thinking to build resilient systems. they also require economies that empower different segments of society and build long-term value across multiple domains. these systems are essential for crises arising from issues such as climate change, war, displacement and migration or the current covid-19 pandemic.
they bring to bear considerations of risk and resilience not only across space, but also layered through time. the contributors offer a forceful reminder that social justice principles cannot be appended after the fact. the covid-19 pandemic is creating, and will leave, multiple, profound and overlapping scars. recovering from this pandemic will require biomedical reforms, health care enhancements, job creation and economic renewal. as this special issue has emphasised, the recovery from this pandemic will play out across time and space, amplifying or dampening the vulnerabilities of extant riskscapes. if these efforts are inclusive and infused with social justice, the post-pandemic social world could be marked by greater resiliency and enhanced social wellbeing. in a similar vein, climate change adaptation and mitigation pose complex and interdependent social and ethical dilemmas. if megadevelopment projects create winners and losers, and they do, global climate change mitigation and adaptation will do likewise, on a much larger scale. calls for a "green new deal" resonate because the term references both the environmental (green) and the social justice (new deal) dimensions. examining and advocating calls for a "green new deal" through the lens of riskscapes offers a reminder and a tool to consider the interplay between environmental and social justice interventions across space and time, and from the individual to the national and global scales.
the scale of risk: conceptualising and analysing the politics of sacrifice scales in the case of informal settlements at urban rivers in nairobi
risk and the war on terror
flammable: environmental suffering in an argentine shantytown
premeditation and white affect: climate change and migration in critical perspective
climate change, human security and violent conflict
environmental hazard and institutional betrayal: lay-public perceptions of risk in the san luis obispo county oil spill
silent spill: the organization of an industrial crisis
risk society: towards a new modernity. thousand oaks
world at risk
the knowledge-shaping process: elite mobilization and environmental policy
environmental change, risk and vulnerability: poverty, food insecurity and hiv/aids amid infrastructural development and climate change in southern africa
the limits to scale? methodological reflections on scalar structuration
the global virome project: expanded viral discovery can improve mitigation
banking spatially on the future: capital switching, infrastructure, and the ecological fix
coronavirus disease 2019: racial and ethnic minority groups. center for disease control and prevention
acceptable risk? making decisions in a toxic environment
the bottom billion: why the poorest countries are failing and what can be done about it
new variant famine: aids and food crisis in southern africa
"they know they're not coming back": resilience through displacement in the riskscape of southwest
an interactive web-based dashboard to track covid-19 in real time
climate change and society: sociological perspectives
the historical nature of cities: a study of urbanization and hazardous waste accumulation
riskscapes, politics of scaling and climate change: towards the postcarbon society
superstorm sandy and the demographics of flood risk
'placing' health in an urban sociology: cities as mosaics of risk and protection
catastrophe in the making: the engineering of katrina and the disasters of tomorrow
foresight and anticipatory governance
understanding social vulnerability to climate change using a 'riskscape' lens: case studies from ethiopia and tanzania
the politics of climate change
turbulent and mighty continent: what future for europe
water, drought, climate change, and conflict in syria
crisis cities: disaster and redevelopment in
bringing the polluters back in: environmental inequality and the organization of chemical production
the economic impact of covid-19 will hit minorities the hardest. personal finance
racial coastal formation: the environmental injustice of colorblind adaptation planning for sea-level rise
the 'transformation of war debate': through the looking glass of ulrich beck's world risk society
war and development: questions, answers, and prospects for the twenty-first century
wars, states and political sociology: contributions and challenges
public participation, engagement, and climate change adaptation: a review of the research literature. wires: climate change
managing the risks of extreme events and disasters to advance climate change adaptation: a special report of working groups i and ii of the intergovernmental panel on climate change
new and old wars: organized violence in a global era
climate change in the fertile crescent and implications of the recent syrian drought
heat wave: a social autopsy of disaster in chicago
the spatial and temporal dynamics of value in financialization: analysis of the infrastructure of carbon markets
the cultures of markets: the political economy of climate governance
technocratic norms, political culture and climate change governance
environmental change and globalization: double exposures
racialized uneven development and systemic risk: sea level rise and high-risk neighbourhoods in
the impact of climate change on tribal communities in the us: displacement, relocation, and human rights
covid-19, risk, fear, and fallout
have wars and violence declined?
visualizing climate-refugees: race, vulnerability, and resilience in global liberal politics
environmental justice and southern california's 'riskscape': the distribution of air toxics exposures and health risks among diverse communities
environmental inequality: the social causes and consequences of lead exposure
the risk society at war: terror, technology and strategy in the twenty-first century
risk governance: coping with uncertainty in a complex world
exploration of health risks related to air pollution and temperature in three latin american cities
managing the risks of climate change and terrorism
the risk society revisited: social theory and governance
the social construction of alaska native vulnerability to climate change
toward a critical biosocial model of ecohealth in southern africa: the hiv/aids and nutrition insecurity syndemic
the war on drugs in colombia: the environment, the treadmill of destruction and risk-transfer militarism
cumulative environmental impacts: science and policy to protect communities
institutional ecology, 'translations,' and boundary objects: amateurs and professionals in berkeley's museum of vertebrate zoology
the riskscapes of re/insurance: mapping contested practices of catastrophe future-making in florida
riskscapes of gender, disaster, and climate change in indonesia
man-made disasters
building adaptive capacity to cope with increasing vulnerability due to climatic change in africa: a new approach
united nations high commissioner for refugees (unhcr)
continuity and change in national riskscapes: a new zealand perspective on the challenges for climate governance theory and practice
(in)security studies, reflexive modernization and the risk society
think 168,000 ventilators is too few? try three. ideas, the atlantic
a framework for the study of zoonotic disease emergence and its drivers: spillover of bat pathogens as a case study
human viruses: discovery and emergence
world health topics
what coronavirus means for the possibility of a carbon-free economy
disrupting the riskscapes of inequities: a case study of planning for resilience in canada's metro vancouver region
the global effort to tackle the coronavirus face mask shortage. the conversation
key: cord-149069-gpnaldjk authors: gomes, m. gabriela m. title: a pragmatic approach to account for individual risks to optimise health policy date: 2020-09-02 journal: nan doi: nan sha: doc_id: 149069 cord_uid: gpnaldjk
developing feasible strategies and setting realistic targets for disease prevention and control depends on representative models, whether conceptual, experimental, logistical or mathematical. mathematical modelling was established in infectious diseases over a century ago, with the seminal works of ross and others. propelled by the discovery of etiological agents for infectious diseases, and koch's postulates, models have focused on the complexities of pathogen transmission and evolution to understand and predict disease trends in greater depth. this has led to their adoption by policy makers; however, as model-informed policies are being implemented, the inaccuracies of some predictions are increasingly apparent, most notably their tendency to overestimate the impact of control interventions. here, we discuss how these discrepancies could be explained by methodological limitations in capturing the effects of heterogeneity in real-world systems. we suggest that improvements could derive from theory developed in demography to study variation in life-expectancy and ageing. using simulations, we illustrate the problem and its impact, and formulate a pragmatic way forward.
since the detection of aids in the early 1980s, it has been evident that heterogeneity in individual sexual behaviours needed to be considered in mathematical models for the transmission of the causative agent, the human immunodeficiency virus (hiv) 8 . much research has been devoted to measuring contact networks in diverse settings and by different methods, in an attempt to reproduce transmission dynamics accurately [9-11] . meanwhile, other equally important sources of inter-individual variation were overlooked. for example, unmodelled heterogeneity in infectiousness and susceptibility led to over-attribution of hiv infectivity to the acute phase 12 and, consequently, to concerns that interventions relying on treatment as prevention might be compromised. the problem of unaccounted heterogeneity in predictive models can be illustrated with the simplest mathematical description of infectious disease transmission in a host population. figure 1 shows the prevalence of infection over time under three alternative scenarios: all individuals are at equal risk of acquiring infection (black trajectories [notice the unrealistic time scale]); individual risk is affected by a factor that modifies either their susceptibility to infection (blue) or their exposure through connectivity with other individuals (green). risk-modifying factors are drawn from a distribution with mean one (blue and green density plots on the left), while the homogeneous scenario is sketched by assigning a factor of one to all individuals (black frequency plot). as the virus spreads in the human population, individuals at higher risk are predominantly infected, as indicated at endemic equilibrium (figure 1a-c, density plots on the right, coloured red) and after 100 years of control (figure 1d-f).
the control strategy applied to endemic equilibrium in the figure is the 90-90-90 treatment-as-prevention target advocated by the joint united nations programme on hiv/aids 4 , whereby 90% of infected individuals should be detected, with 90% of these receiving antiretroviral therapy, and 90% of these achieving viral suppression (becoming effectively non-infectious). figure 1: homogeneous risk (a, d); distributed susceptibility to infection with variance 10 (b, e); distributed connectivity with variance 10 (c, f). in disease-free equilibrium, individuals differ in potential risk in scenarios b and c, but not in scenario a (risk panels on the left). the vertical lines mark the mean risk values (1 in all cases). at endemic equilibrium, individuals with higher risk are predominantly infected (risk panels on the right, where red vertical lines mark the mean baseline risk among individuals who eventually became infected), resulting in reduced mean risk among those who remain uninfected (black vertical lines). to compensate for this selection effect, heterogeneous models require a higher r0 to attain the same endemic prevalence (a, b, c). interventions that reduce infection also reduce selection pressure, which unintendedly increases the mean risk in the uninfected pool. in heterogeneous models, q(x) is a probability density function with mean 1 and variance 10, and ⟨x^n⟩ denotes the nth moment of the distribution. gamma distributions were used for concreteness. figure 1 shows that heterogeneous models that account for wide biological and social variation require higher basic reproduction numbers (r0) to reach a given endemic level and predict less impact for control efforts than the homogeneous counterpart model. this holds true regardless of whether heterogeneity affects susceptibility or connectivity.
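the selection mechanism behind figure 1 can be sketched in a few lines of code. the simulation below is an illustrative reconstruction, not the authors' code: it discretises a gamma distribution of relative susceptibility (mean 1, variance 10, as in the figure) into risk groups and runs a simple sir epidemic; the parameter choices (r0 = 3, a 7-day infectious period, the random seed and step size) are assumptions made for this sketch.

```python
import numpy as np

# illustrative sketch: SIR with gamma-distributed relative susceptibility
# (mean 1, variance 10, as in the paper's figures); parameters are assumptions
rng = np.random.default_rng(0)
n = 400                                        # number of discrete risk groups
x = rng.gamma(shape=0.1, scale=10.0, size=n)   # susceptibility factors (mean ~1, var ~10)
w = np.full(n, 1.0 / n)                        # equal population weight per group

R0, recovery = 3.0, 1.0 / 7.0                  # assumed reproduction number, recovery rate
beta = R0 * recovery
I = 1e-6 * w                                   # seed infections proportionally to weights
S = w - I
dt = 0.05
for _ in range(int(400 / dt)):
    lam = beta * I.sum()                            # force of infection
    new = S * (1.0 - np.exp(-lam * x * dt))         # group incidence, never exceeds S
    S = S - new
    I = I + new - recovery * I * dt

pop_mean = (w * x).sum() / w.sum()                  # baseline mean susceptibility (~1)
mean_uninfected = (S * x).sum() / S.sum()           # mean factor among the still-uninfected
ever_infected = w - S
mean_infected = (ever_infected * x).sum() / ever_infected.sum()
print(mean_uninfected < pop_mean < mean_infected)   # prints True
```

because transmission preferentially removes high-susceptibility groups, the mean factor among the still-uninfected falls below the population mean while the mean among the ever-infected rises above it, which is exactly the shift the red and black vertical lines in figure 1 depict.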
at endemic equilibrium, individuals at higher risk are predominantly infected (the red distributions have means greater than one, as marked by the red vertical lines), and hence those who remain uninfected are individuals with lower risk (the blue and green distributions have means lower than one, as marked by the black vertical lines). thus, the mean risk in the uninfected but susceptible subpopulation decreases, and the epidemic decelerates (thin blue and green curves); higher values of r0 are consequently required if the heterogeneous models are to attain the same endemic level as the homogeneous formulation (heavy blue and green curves). finally, interventions are less impactful under heterogeneity because r0 is implicitly higher. indeed, these biases could help explain trends in hiv incidence data which lag substantially behind targets informed by model predictions, even in settings that have reached the 90-90-90 implementation targets 3,4 . a novel severe acute respiratory syndrome coronavirus (sars-cov-2), isolated at the end of 2019 from a patient in china, has spread worldwide causing the covid-19 pandemic, despite intensive measures to contain the outbreak at the source. countrywide epidemics have been extensively analysed and modelled throughout the world. initial studies projected attack rates of around 90% if transmission had been left unmitigated 13 , while subsequent reports noted that individual variation in susceptibility or exposure to infection might reduce these estimates substantially 14 . figure 2: risk distributions are simulated in three scenarios: homogeneous (black); distributed susceptibility to infection with variance 10 (blue); distributed connectivity with variance 10 (green). left panels represent distributions of potential individual risk prior to the outbreak, with vertical lines marking mean risk values (1 in all cases).
as the epidemic progresses, individuals with higher risk are predominantly infected, depleting the susceptible pool in a selective manner and decelerating the epidemic. the inset overlays the three epidemic curves scaled to the same height to facilitate shape comparison. right panels show in red the risk distributions among individuals who have been infected over 4 months of epidemic spread (mean greater than one when risk is heterogeneous, as marked by the red vertical lines) and the reduced mean risk among those who remain uninfected. in heterogeneous models, q(x) is a probability density function with mean 1 and variance 10, and ⟨x^n⟩ denotes the nth moment of the distribution. gamma distributions were used for concreteness. as models inform policies, we cannot but stress the importance of representing individual variation pragmatically. while much is being discovered about sars-cov-2 and its interaction with human hosts, epidemic curves are widely available from locations where the virus has been circulating. models can be constructed with inbuilt risk distributions whose shape can be inferred by assessing their ability to mould simulated trajectories to observed epidemics while accounting for realistic social distancing interventions 6 . variation in infectiousness was critical to attribute the scarce and explosive outbreaks to superspreaders when the first sars emerged in 2002 16 , but what we are discussing here is different. infectiousness does not respond to selection as susceptibility or connectivity do; i.e. models with and without variation in infectiousness perform equivalently when implemented deterministically and differ only through stochastic processes. the need to account for heterogeneity in the risk of acquiring infection is not restricted to aids and covid-19 but is generally applicable across infectious disease epidemiology. moreover, similar issues arise in methods intended to evaluate the efficacy of interventions in experimental studies, as illustrated for vaccines in the sequel.
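before turning to vaccines, the covid-19 point above can be made quantitative. for gamma-distributed susceptibility, work in this line (including the preprints cited at 14) reports a closed-form herd immunity threshold, 1 − (1/r0)^(1/(1+cv²)), where cv² is the squared coefficient of variation of the risk distribution, with exponent 1/(1+2cv²) for the connectivity model. the sketch below evaluates these expressions; r0 = 3 is chosen purely for illustration, and the reader should consult the cited work for the derivations.

```python
def herd_immunity_threshold(r0: float, cv2: float, mode: str = "susceptibility") -> float:
    """Herd immunity threshold under gamma-distributed individual risk.

    Closed forms reported for gamma-distributed susceptibility (exponent
    1/(1 + cv2)) and connectivity (exponent 1/(1 + 2*cv2)); cv2 = 0
    recovers the classical homogeneous threshold 1 - 1/r0.
    """
    exponent = 1.0 / (1.0 + cv2) if mode == "susceptibility" else 1.0 / (1.0 + 2.0 * cv2)
    return 1.0 - (1.0 / r0) ** exponent

# assumed r0 = 3; variance-10 risk with mean 1 gives cv2 = 10
print(round(herd_immunity_threshold(3.0, 0.0), 3))    # 0.667 (homogeneous, 1 - 1/r0)
print(round(herd_immunity_threshold(3.0, 10.0), 3))   # 0.095 (heterogeneous susceptibility)
```

the drop from roughly two thirds to under ten percent of the population illustrates why unmodelled heterogeneity can so badly overstate projected attack rates and intervention targets.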
individual variation in susceptibility to infection induces biases in cohort studies and clinical trials. vaccine efficacy trials offer a useful illustration of the problem and give insight into the potential solution. in a vaccine trial, two groups of individuals are randomised to receive a vaccine or placebo, and disease occurrences are recorded in each group. as disease affects predominantly higher-risk individuals, the mean risk among those who remain unaffected decreases and disease incidence declines. in the vaccine group the same trend occurs at a slower pace (presuming that the vaccine protects to some degree). as a result, the two randomised groups become different over time, with more highly susceptible individuals remaining in the vaccine group. vaccine efficacy, measured by comparing case counts in the vaccinated and control groups, therefore appears to wane (figure 3) 17,18 . this effect will be stronger in settings where transmission intensity is higher, inducing a trend of seemingly declining efficacy with disease burden 19 . the concept is illustrated in figure 3 by simulating a vaccine trial with heterogeneous and homogeneous models analogous to those utilised in figures 1 and 2. selection on individual variation in disease susceptibility thus offers an explanation for vaccine efficacy trends that is entirely based on population-level heterogeneity, in contrast with waning vaccine-induced immunity, an individual-level effect 20 . as both processes may occur concurrently in a trial, it is important to disentangle their roles, because they lead to different interpretations of the same incidence trend. for example, vaccine efficacy might wane in all individuals, or it might be constant for each individual but decline at the population level due to selection on individual variation. capturing this in a timely manner requires multicentre trial designs with sites carefully selected over a gradient of transmission intensities (e.g.
optimally spaced along the incidence axis in figure 3c, f), and analyses performed by fitting curves generated by models that incorporate individual heterogeneity. an alternative and more tightly controlled approach would be to use experimental designs in human infection challenge studies, where these are available 21 , to generate dose-response curves and apply similar models. these approaches have recently been tested successfully in animal systems 22 . heterogeneities in predisposition to infection depend on the mode of transmission but play a role in all high-burden infectious diseases. in respiratory infections, heterogeneity may arise from variation in the exposure of the susceptible host to the pathogen, or in the competence of host immune systems to control pathogenic viruses or bacteria. these two processes have multiple components; the mechanisms underpinning single factors for infection, and their interactions, determine individual propensities to acquire disease. these are potentially so numerous that a full mechanistic description may be unattainable. even in the unlikely scenario that a list of all putative factors were available, the measurement of effect sizes would be subject to selection within cohorts, resulting in underestimated variances 23 . to contribute constructively to the development of health policies, model building involves compromises between leaving factors out (reductionism) and adopting a broader but coarser description (holism). holistic descriptions of heterogeneity are currently underutilised in infectious diseases. recently, measures of statistical dispersion commonly used in economics have been adapted to describe risk inequality in cancer 24 , tuberculosis 25 and malaria 26 , offering a holistic approach to improve the predictive capacity of disease models.
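one such dispersion measure can be sketched directly. the example below uses the gini coefficient, a standard economics-style inequality index, applied to hypothetical stratified incidence rates; the numbers are made up for illustration, and the cited studies should be consulted for the exact metrics they adopt.

```python
import numpy as np

def gini(rates):
    """Gini coefficient of group-level incidence rates: the mean absolute
    difference over all ordered pairs of groups, scaled by twice the
    overall mean. 0 means all strata share one rate; values near 1 mean
    disease is concentrated in a few strata."""
    r = np.asarray(rates, dtype=float)
    mad = np.abs(r[:, None] - r[None, :]).mean()   # mean absolute difference
    return mad / (2.0 * r.mean())

# hypothetical annual incidence per 100,000 across four geographic strata
print(gini([50, 50, 50, 50]))   # 0.0 -- no risk inequality
print(gini([5, 10, 35, 150]))   # 0.575 -- strong concentration of risk
```

ranking strata by incidence and summarising their spread in this way supplies the stratified data to which dynamic models with inbuilt risk distributions can then be fitted.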
essentially, this involves stratifying the population into groups of individuals with similar risk, which may be as granular as the individual level for frequent diseases, such as malaria or influenza. for infectious diseases which cluster by proximity, such as tuberculosis, stratification can use geographical units. familial relatedness pertains when there is a clear genetic contribution to risk, such as in cancer. by recording disease events in each group, specific incidence rates can be calculated and ranked. unknown distributions of individual risk are then embedded in dynamic models and estimated by fitting the models to the stratified data. because they incorporate explicit distributions of individual risk, these models automatically adjust average risks in susceptible pools to changes in transmission intensity, occurring naturally or in response to interventions. not being subject to the selection biases described above, this modelling approach inherently enables more accurate impact predictions for use in policy development. there is compelling evidence that epidemiologists could benefit from indicators that account for the whole variation in disease risk. heterogeneity is unlimited in real-world systems and cannot be completely reconstructed mechanistically. inspired by established practices in demography and economics, and supported by successful applications in both infectious and non-communicable diseases, the use and further development of these approaches offers a powerful route to build disease models that enable more accurate estimates of intervention efficacy and more accurate predictions of the impact of control programmes.
an application of the theory of probabilities to the study of a priori pathometry, part i
modeling infectious disease dynamics in the complex landscape of global health
is the unaids target sufficient for hiv control in botswana?
joint united nations programme on hiv/aids (unaids), global aids update
elimination of lymphatic filariasis in south east asia
herd immunity thresholds for sars-cov-2 estimated from unfolding epidemics
impact of heterogeneity in individual frailty on the dynamics of mortality
a preliminary study of the transmission dynamics of the human immunodeficiency virus (hiv), the causative agent of aids
heterogeneities in the transmission of infectious agents: implications for the design of control programs
networks and epidemic models
transmission network parameters estimated from hiv sequences for a nationwide epidemic
reassessment of hiv-1 acute phase infectivity: accounting for heterogeneity and study design with simulated cohorts
impact of non-pharmaceutical interventions (npis) to reduce covid-19 mortality and healthcare demand (imperial college covid-19 response team)
individual variation in susceptibility or exposure to sars-cov-2 lowers the herd immunity threshold
a mathematical model reveals the influence of population heterogeneity on herd immunity to sars-cov-2
superspreading and the effect of individual variation on disease emergence
estimability and interpretability of vaccine efficacy using frailty mixing models
apparent declining efficacy in randomized trials: examples of the thai rv144 hiv vaccine and caprisa 004 microbicide trials
clinical trials: the mathematics of falling vaccine efficacy with rising disease incidence
seven-year efficacy of rts,s/as01 malaria vaccine among young african children
design, recruitment, and microbiological considerations in human challenge studies
vaccine effects on heterogeneity in susceptibility and implications for population health management
understanding variation in disease risk: the elusive concept of frailty
inequality in genetic cancer risk suggests bad genes rather than bad luck
introducing risk inequality metrics in tuberculosis policy development
modelling the epidemiology of residual plasmodium vivax malaria in a heterogeneous host population: a case study in the amazon basin
key: cord-252182-v0cveegl authors: déportes, isabelle; benoit-guyod, jean-louis; zmirou, denis title: hazard to man and the environment posed by the use of urban waste compost: a review date: 1995-11-30 journal: science of the total environment doi: 10.1016/0048-9697(95)04808-1 sha: doc_id: 252182 cord_uid: v0cveegl
abstract: this review presents the current state of knowledge on the relationship between the environment and the use of municipal waste compost in terms of health risk assessment. the hazards stem from chemical and microbiological agents whose nature and magnitude depend heavily on the degree of sorting and on the composting methods. three main routes of exposure can be determined and are quantified in the literature: (i) the ingestion of soil/compost mixtures by children, mostly in cases of pica, can be a threat because of the amounts of lead, chromium, cadmium, pcdd/f and fecal streptococci that can be absorbed. (ii) though concern about contamination through the food chain is weak when compost is used in agriculture, some authors anticipate accumulation of pollutants after several years of disposal, which might lead to future hazards. (iii) exposure is also associated with atmospheric dispersion of compost organic dust that conveys microorganisms and toxicants. data on the hazard posed by organic dust from municipal composts to the farmer or the private user are scarce. to date, microorganisms are only measured at composting plants, thus raising the issue of extrapolation to environmental situations. lung damage and allergies may occur because of organic dust, gram-negative bacteria, actinomycetes and fungi. further research is needed on the risk related to inhalation of chemical compounds.
in the management of household wastes, the sorting-composting approach presents many advantages: (i) sorting not only provides for the selection of recyclable and compostable materials, it also reduces the volume of waste to be treated by incineration; the putrescible part represents 50-70% of the weight of the entire municipal solid waste (msw) [1,2]. (ii) the volume of the putrescible portion is reduced during the composting process. (iii) compost is widely utilized in agriculture, especially in europe [3], and its use is also strongly encouraged in the usa [4-6]. depending on its degree of maturity and quality, it can be used in vineyards, for mushroom farming (fresh compost), in horticulture (hot-beds with fresh compost), sylviculture, countryside planning (flower beds), the preparation of sports fields or golf courses, the maintenance of public or private parks, the maintenance of motorway embankments, to cover waste discharge systems, or in the rehabilitation of sites such as mines and sand pits [7-12]. the chemical components play important roles in the physical and chemical properties of the soil [13,14]. amendment is more valuable for the improvement of soil characteristics than for the fertilizer value of the compost [15]. indeed, the use of municipal solid waste (msw) compost influences the water retention capacity, resistance to erosion, density, ph, conductivity and nutrient content of the soil [15,16]. there are three main methods of composting: gathering waste in windrows that are turned at regular intervals; static piles of waste aerated by deliberate passage of air within the mass (aerated static piles); and gathering waste materials in a totally enclosed and controlled environment, that is, in a reactor [2,17-20]. many organic waste products are used for compost: yard wastes (yw), sewage sludge (ss), municipal solid wastes (msw), and industrial and agricultural wastes (wood, animal droppings, etc.) [21].
in the present work, we have only studied domestic waste, composted alone or together with ss or yw. compost is not a harmless product; msw may contain a number of contaminants carrying health or environmental risks. these chemical or biological contaminants may expose different populations to health hazards, ranging from composting plant workers to consumers of vegetable products grown with compost fertilizers. for example, the humus part of composts consists of numerous ligands, some of which are more or less irreversibly bound to metallic elements [22]. the metals may be released in the soil when a change in environmental conditions such as ph occurs during application of the compost [23]; released metals then become bioavailable for plants. among the toxic elements found in composts, arsenic, asbestos, hexavalent chromium, nickel and pcb have been classified as carcinogens or potential carcinogens [24]. finally, composts could be potentially hazardous through the presence of microorganisms [25]. in france, out of 20 500 000 tons of msw per year, 7% (1 435 000 t) are treated and transformed into 640 000 tons of compost [26]. given the diversity of the populations exposed to the use of compost, the huge mass of products involved, and the potential risk of contamination, it is worthwhile to evaluate the public health and environmental risks arising from the utilization of compost originating from msw. based on literature surveys, the state of present knowledge with regard to the type and the quantity of contaminants is discussed. the different populations at risk through various routes of contamination are also discussed. in the preparation of this review, comparable literature data on msw from europe, north america and japan were collected [27,28]. all the articles were selected on composts of msw origin. 
on account of the similarity of the problems encountered, our literature review also covered compost made from sediment wastes from water treatment stations and green wastes (gardens, parks, forests, etc.). for the same reasons, some articles that treated non-composted ss were considered. the quality of a compost determines both its sales [29] and a hazard-free utilization, which may be viewed from two angles: the agronomic value and the absence of contaminants. many authors have written on the agronomic quality of composts [1,30-34]. the contamination of the finished product may come from the primary material, that is, essentially from the content of our garbage bins as well as other composted wastes (yw or ss). metals, for example, are provided by plastics (pigments and stabilizers), batteries (torch and radio), car batteries, electronic components, electric bulbs and their sockets, leather materials, glassware or ceramics [28,35,36]. asbestos is found in household refuse because of insulation materials [37]. a number of organic compounds (solvents, grease, pesticides, etc.) find their way into our garbage bins as residues. these are also found in yw (lindane, pcb, pcdd, pcdf) and in ss (pah, pesticides, halogenated hydrocarbons, phthalate esters, pcb) [38-43]. sorting of msw is increasingly used and has proved to be very effective in reducing the contamination of the finished product. mercury, lead, chromium, cadmium, zinc and copper are mostly derived from batteries, glassware, plastics and ferrous materials. elimination of these recyclable components before making the compost leaves not more than 50% of the lead and copper and 25-30% of the zinc and nickel, which persist in papers/cardboards and are more or less strongly bound to organic materials [27]. 
sorting may be carried out at the source by the producer or at the waste disposal plant, implemented manually or automatically by special machines, especially in the case of small amounts of plastics and metals, or after composting [18,31,…]. early sorting ensures lower contamination with organic and inorganic pollutants [28,44,47]. biological contamination is also encountered in msw. pathogenic microorganisms are likely to come from dirty discarded cloth, faeces of domestic animals, sanitary tissue papers or putrefying foods [25,48]. as a result of their origin, contamination exists in ss, where one can easily find many strains of bacteria, viruses, fungi and other parasites (table 1: pathogens that may be found in sewage sludge and in municipal solid wastes [49,80,110,122,124]). the composting procedure is very important. the parameters that must be controlled during this process are: the composition of the mass, aeration, temperature, humidity, the carbon/nitrogen (c/n) ratio and ph. some components such as keratinous wastes, papers and cardboards are not easily composted [50]. depending on the method of compost formation that is chosen, these parameters may be controlled to a reasonable extent [50-52]. anaerobic conditions may provoke an increase in the duration of composting and, more importantly, an adverse sanitary condition if the temperature requirements are not fulfilled. a hygienic compost is free of the pathogens that it might have contained; there is an inherent risk in the use of unhygienic composts [8,53-56]. it is possible to disinfect composts by monitoring the temperature during composting. as can be seen in table 2, the destruction of the pathogens depends on the temperature reached and the duration of the oxidation process. temperatures of about 55-60°c for at least 3 days are recommended [50,57,58]. this has proved effective against salmonella spp. and other parasites [59,60]. 
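the 55-60°c-for-3-days criterion above can be expressed as a simple check on a composting temperature record. the sketch below is illustrative only: the function name, the once-daily sampling and the log format are assumptions of this review's editor, not part of the cited recommendations.

```python
# Hypothetical sketch: check whether a windrow temperature log meets the
# hygienization guideline cited in the text (>= 55 degC sustained for at
# least 3 consecutive days). One reading per day is assumed.

def meets_hygienization(daily_temps_c, threshold_c=55.0, min_days=3):
    """Return True if the log contains at least min_days consecutive
    daily readings at or above threshold_c."""
    run = 0
    for t in daily_temps_c:
        run = run + 1 if t >= threshold_c else 0
        if run >= min_days:
            return True
    return False

# A 10-day composting cycle with four consecutive days at >= 55 degC.
log = [38, 47, 55, 58, 61, 57, 52, 44, 39, 35]
print(meets_hygienization(log))  # True
```

as the review notes for windrows, a single probe cannot represent the whole mass, so a real check would need logs from several points in the heap.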
unfortunately, these conditions are not always fulfilled, because the temperature depends upon the degree of oxidation of the entire mass of compost (table 2: level of temperature and length of time necessary to destroy some pathogens present in the primary products of composts [180]). control of temperature and aeration is not possible for the entire mass of wastes in windrow composting; the compost is therefore not always disinfected by this method. the use of windrows presents, in addition, the risk of contaminating clean areas with unclean wastes from other parts of the heap infected during turning [31,51,61,62]. it should be noted, however, that these pathogens are not in their natural environment, that is, in their hosts, which does not favor their survival. while fungal and bacterial strains have the possibility to multiply, viruses and other parasites can at best acquire a resistant form. the pathogens are forced to compete with other microorganisms which are in their natural environment. these are mostly bacterial strains at the beginning of the composting process, but turn to fungi and actinomycetes by the end, as a result of selection pressure due to increased temperature [61,63]. the method of composting both permits improvement of the hygienic condition of the compost and promotes chemical decontamination. during the degradation of the primary wastes, the chemical contaminants are often implicated in the process of transformation. this is supported by laboratory results which show that chlorpyrifos, isofenphos, diazinon (insecticides) and pendimethalin (herbicide) found in plant wastes are totally degraded by composting [64]. other studies have reported a degradation of 6% of chlorinated pesticides and 45% of pcb [38]. the phenomenon is widespread and it has been tried for the rehabilitation of soils by composting [65]. only pollution by organic wastes can be treated by this approach. 
by contrast, mineral contaminants tend to concentrate during composting through the reduction in volume. the process used must encourage a good maturity of the compost. maturity refers to the degree of biological, chemical and physical stability of the compost and can be measured in several ways [18,66-74]. there is a meeting point between the two terms maturity and stability: while maturity covers such aspects of the compost as color, friability and odor, stability is based on the whole set of physical, chemical and biological evaluations [74]. the use of an immature compost may present an agronomic problem since it may become toxic to plants [11,63]. the degradation of the wastes continues in the soil after application of the compost, with several toxic intermediate metabolites being present, such as phenols, ammonia, and acetic, propionic, butyric and isobutyric acids [32,75]. on the other hand, when immature composts are used, the risks inherent in increasing soil temperature, the competition between plants and microorganisms for available soil nitrogen and a reduction in the level of soil oxygen are equally important unfavorable factors for plant growth. the maturity of compost is also important because of its potential to create nuisance when used: the decomposition of immature compost is completed in the soil under anaerobic conditions, resulting in odor [63]. furthermore, the bioavailability of heavy metals is a function of the degree of maturity of the compost, since the humic material is capable of binding them. experiments have shown that metals become less bioavailable with increasing maturity. this, in turn, limits the risk of spreading and hence contamination of the food chain (via plants) and the entire environment [72,76-79]. the parameters discussed above are controlled and described differently in laboratory experiments and in field measurements, making a summary of such diverse data difficult. 
the quantitative data that have been considered in this review are the highest and the lowest values encountered in composts, not averages, since there is wide variability between the data from each article. in addition, the values of the parameters that determine the level of contamination in each compost are not always described in each article. man and his environment are exposed to contaminants from composts during processing, storage and utilization. fig. 1 summarises the different means of contamination and their risk implications. there are limited literature data on the storage of compost. oral contamination through contaminated hands is possible for biological and chemical contaminants. this risk is particularly pronounced for children. studies have shown that a child may ingest as much as 100 mg/day of dust from the soil, and when the child suffers from geophagy (or pica, a pathologic exaggeration of the hand-mouth behaviour) this may increase up to 5 g [82,83]. environmental contamination may result from open-air storage of compost, or storage under bad or inadequate protection which exposes it to rain: pollutants are then washed out by rain and carried along by water run-off or spread by percolation into the soil. wind may also disperse inadequately stored composts. the storage of immature compost also provokes the emanation of a nauseating odor. application. the applicator may be wounded if sharp objects are left in the compost. compost generates dusts, and many particles are suspended in the air during spreading. their inhalation may be dangerous to health since they adsorb both biological and chemical contaminants. dispersion of these dusts and their components in the ecosystem may also constitute an environmental risk. spread compost. composts add chemical or biological pollutants to the soil. these pollutants exist either in a free state or bound to the humus components of the soil. 
the free chemical contaminants are said to be bioavailable and may be assimilated by plants. there is therefore a potential risk of contamination of the food chain through plants cultivated on such soils, extending to animals which are fed with these plants as well as to their predators. the external parts of plants are in contact with the treated soil; a consumer who does not wash or peel edible materials is thus exposed to the risk of either chemical or biological contamination. the consumption of animals fed with such plants might also lead to hazard; under this condition, meat transmits essentially chemical contaminants and parasites. from a different point of view, the free chemical and biological contaminants may be washed along by flowing water or rain and may percolate down to groundwaters. this is a means of dispersion in the environment and a threat to man if these infiltrations reach groundwater used as drinking water. the concentration of contaminants in composts, their bioavailability, and their recovery rate in plants cultivated on compost-treated soils and in compost leachates will now be described. bioavailability may be evaluated in different ways, using different agents for extraction or binding. bioavailability is calculated as the ratio of the weight of the extract to the quantity of the compound in the compost or treated soil. this knowledge helps in the estimation of the free fraction of metals that may be assimilated by plants or capable of contaminating water bodies or soils. odor resulting from soil treatment with immature compost will not be dealt with here because of the very limited amount of research done in this area [63,84]. only a certain number of compounds, selected according to their known toxicity, will be discussed in the following section. their general characteristics, as well as those of other contaminants, are presented in table 3. cadmium. the amount in compost ranges from 0.26 ppm to 11.7 ppm (fig. 
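the bioavailability ratio just defined can be written out explicitly. a minimal sketch, with invented numbers for illustration (they are not measurements from the cited studies), and a function name that is an assumption of this sketch:

```python
# Bioavailability as defined in the review: the ratio of the mass
# recovered by extraction to the total mass of the compound present
# in the compost or treated soil, expressed as a percentage.

def bioavailability_pct(extracted_mg, total_mg):
    if total_mg <= 0:
        raise ValueError("total mass must be positive")
    return 100.0 * extracted_mg / total_mg

# e.g. 0.6 mg of extractable cadmium out of 5.0 mg total in a sample
print(round(bioavailability_pct(0.6, 5.0), 1))  # 12.0
```

the review's reported ranges (e.g. 0-52% for cd, 4.1-75.4% for pb) are values of exactly this ratio, obtained with different extractants and soil ph values.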
2, graph a) (unless otherwise stated, all data are in dry weights), although the minimum level is much lower, since about 10 observations did not reach the detection limit of the analytical method used [85,86]. although there is no statistical difference in the quantity of cadmium in composts of diverse origins (p = 0.57), uncomposted sewage sludge contains higher levels (5-2450 ppm) [87-89]. bioavailability varies from about 0 to 52% (fig. 2). for the record, the acceptable maximum concentration of cd in potable water in france is 5 µg/l. cd is not detected in leachates originating from compost constituted from plant materials [86]. several authors have considered the assimilation of cd by plants as a result of treatment of soils with compost [88,93-95]; a cd content of 1.75 ppm was thus found in beetroot [93], and a cd range from 0.5 ppm to 1.7 ppm was found in beetroot cultivated on soils treated with ss or yw composts. table 4 indicates that assimilation depends on the plant and on the level of metals in the soils. similar results have been obtained in other studies carried out in fields treated with uncomposted ss: one month after planting, tobacco plants assimilated 7-10% of the cd present in the soil while maize took in only 5% [88]. the effects are long lasting. for example, on a soil treated for 7 years, 0.9-32 ppm cd was observed in beetroot several years after treatment was stopped [95], and a concentration of 50-70 ppm was found in the leaves of maize 14 years after a treatment with ss was stopped [88]. lead. the levels of lead in composts of msw range from 10.6 ppm to 13 212 ppm (fig. 2, graph b), with a statistically significant difference between composts made partially or completely of msw and those without msw (p = 10⁻⁵); the latter contain much lower levels of pb. levels may attain 200-20 000 ppm in non-composted sewage sludge [28,87-89]. its bioavailability is from 4.1% to 75.4% (fig. 2, graph b). 
these two extreme values were measured in two mixtures of soil and urban waste compost. the methods of extraction were similar but the ph of the two soils was very different: the lowest bioavailability was measured in a soil of ph 8.1 [77], while the soil of ph 5.2 [94] gave a very high bioavailability value. the release of pb in ss is known to be low [96]. experiments have been carried out with lysimeters (table 3: literature dealing with 14 compounds present in urban compost). nickel. the usually observed level of nickel in composts, 0.8-1220 ppm, is only slightly dependent on the original primary materials from which the composts were formed (p = 0.17), though the highest values are obtained in composts containing ss (fig. 2, graph c). the extreme values of bioavailability range from 1.4% to 58.6% (fig. 2). mercury. the only available information deals with the mercury content of composts: no author, to the best of our knowledge, has been interested in its bioavailability, its presence in plants as a result of the application of composts, or its presence in leachates. the quantity found in a compost depends on the origin of the compost, and there is a significant difference between composts of different origins (p = 0.05). composts of msw have the highest levels of mercury, which range from 0.9 ppm to 20.3 ppm (fig. 2, graph e). selenium. the selenium content of composts varies from 0.1 ppm to 8.8 ppm, with a difference as a function of their origins (p = 0.01) which may be explained by a high contamination with ss (fig. 2, graph f). yw and msw composts have similar concentrations of selenium, but data are scarce. beetroot grown on soils treated with a msw compost may contain about 0.04 ppm to 0.17 ppm of selenium, depending on the origin of the compost [93]. we have no information on selenium in leachates of composts. arsenic. very little information on measurements of arsenic in compost has been found in the course of this research. the values range from 7 ppm to 9 ppm. 
the application of msw compost for more than 30 years did not increase the arsenic content of plants cultivated on the sites [27,102]. asbestos. the presence of asbestos has been studied in different experiments [10,37]: fibres were found in all eight samples of msw composts, while 13 observations were positive out of 22 carried out with non-msw composts. in contrast to metals, organic contaminants may be metabolized by microorganisms during the process of composting [38]; only very stable compounds persist in composts, because of the length of time of compost formation, the presence of microorganisms and the high temperature. hence, pesticides such as diazinon, which has a life span of 12 weeks, were not found in compost [64], while pentachlorophenol, which may persist for 5 years in the soil, is found in compost [103]. few studies have been made on pollution with organic contaminants, but many compounds have been measured (table 5). for the sake of summary, these substances will be examined in groups, with specific examples added. the major families treated have been chosen on the basis of their relative resistance to biological degradation, their toxicity to man, animals and plants, and their tendency to accumulate in the food chain [104]. unlike the inorganic compounds, organic compounds may be transmitted to plants in two ways: by the classical route through the roots, and through the air into the leaves on vaporization of some of the compounds [104]. pah. their half-life in the soil is long, and they are stable even when metabolized in plants. total pah content in composts ranges from 1 ppm to 250 ppm; for individual compounds, the values range from 0.0006 ppm to 49.3 ppm (table 5). pesticides. though many organochlorinated pesticides have been banned, they are often found in the environment. composts made of wastes that are sorted early contain very few organochlorinated pesticides [38]. 
the pesticides detected are in the range of 0.007 ppm to 2.2 ppm (table 5). chlorinated hydrocarbons. the different substances gathered in this family of compounds range from 0.02 to 1.1 ppm; volatile solvents are among the most important, since they are found at concentrations as high as 0.1 ppm. pcdd/f. these are lower in composts of msw that have been sorted early, and range from 0.1 to 7 ppm; very small quantities are found in ss (of the order of ppb) [104]. pcb. the quantity of pcb found in composts is about 0.5-5 ppm (table 5); in ss, this may also vary from 0.5 to 5 ppm. natural substances. in contrast to pollutants (exogenous compounds), natural substances (endogenous compounds) are derived from the compost itself. for example, microorganisms produce fatty acids and methylated esters, but in quantities that do not add more than 0.025-0.05 ppm to the soils during application; these quantities are considered not to be dangerous [105]. the excess soluble organic material from composts may constitute a nuisance, particularly for flowing water: successive leachates might introduce organic matter into water and may pose a risk of ecological disequilibrium. in a study over 3 years [106], a decrease of cod (chemical oxygen demand) from 5000 mg/l to 200 mg/l (corresponding to 600 mm of leachates) was observed after 2 years, and bod (biological oxygen demand) represented 10% of cod. given the diversity of the references as to the types of microorganisms and the methods of composting, it is difficult to summarize the literature. the aim of this section, therefore, is simply to provide a base from which the microbiological risks associated with composting can be evaluated; readers who wish for more details may consult the original papers. two types of microorganisms are considered. first, the pathogens present in the raw materials meant for composting, which are liable to disappear during the composting process (tables 1, 2). 
these agents are representative of the microorganisms present in the digestive tract. secondly, there are the microorganisms which develop during the process of compost formation and which play a role in the degradation of organic matter: fungi and mesophilic and thermophilic bacteria. there are several obligatory and facultative pathogens [61]. the organism, as well as its spores or toxins (the endotoxins of gram negative bacteria), may be implicated in the pathogenicity. hazard essentially occurs through the respiratory system, and these germs constitute a potential risk for workers at the composting site and for users of composts, whether workers or private users. table 6 summarises the concentrations of these germs found in the literature, according to their route of penetration (ingestion or inhalation). air measurements have been carried out in the tunnels of a mushroom culture house or at composting sites, near mounds of mature compost. the latter will be used, in this review, as an index for the risk resulting from the use of composts, because no other data on atmospheric measurements during compost disposal are available. microorganisms which constitute a potential respiratory hazard are more frequent than those which follow the digestive route: gram positive bacteria (including actinomycetes), gram negative bacteria and fungi. the microorganisms are essentially bound to the dusts produced by composts, especially during turning [117]. about 50-85% of the particles in suspension in the atmosphere around composts can be inspired because of their small diameter (< 5 µm), and can therefore reach the pulmonary alveoli [12,116,126]. in parallel to the microbiological hazard through inhalation, there is also a physical risk due to the deposition of dust in the lungs. in a study among workers, concentrations of 10.6-80 mg/m³ of dust (n = 4) were measured in the atmosphere at a composting site [12,116,117]. 
these values are higher than the occupational standard of 10 mg/m³ [12]. very few studies have been done on organic dusts from composts. (table 6: total coliforms, 1.5 × 10⁶ to 5.6 × 10⁶ cfu/g [107,110]; fecal coliforms, up to 4 × 10⁶ cfu/g [61,107]; fecal streptococci, 10² to 4.7 × 10⁶ cfu/g or organisms/g [61,62,107,111,118]. on two occasions, the authors suggested a sampling error; assays based on the most-probable-number method only give a qualitative identification, i.e., < 0.2 organism/g.) this review deals with a limited number of contaminants because not all have been studied in compost. the risk associated with ingestion of dust from an amended soil depends on its use: for farming, especially for vegetables and potted farm crops, 50 t/ha of compost is used, while 300 t/ha is spread in public places (gardens and green playgrounds). composts are usually mixed into a depth of soil of approximately 30 cm [127]. if the densities of soil and compost are comparable, the dilution of the compost in the soil is, depending on the quantities disposed, a fraction of one-hundredth or one-tenth. the level of exposure of an individual depends on the quantity of soil ingested. the best estimate of the ingestion of telluric dust is about 100 mg for a normal child and about 60 mg for an adult, while a child suffering from geophagy may absorb up to 5 g/day [82,83]. this route of exposure is, however, modest, because the use of compost for soil treatment by private owners or for public gardens and parks currently represents only 2-5% of total compost production [127]. the hypothesis that an individual is in permanent contact with an amended soil is theoretically possible, but not very likely. 
the maximum quantity of contaminants absorbed daily may thus be estimated as: q_absorbed (µg) = maximum concentration of toxicant observed in composts (c, ppm) × dilution of compost in the ground × amount of soil ingested (g) + amount of compost dust in the air (g/l) × c × amount of air inspired (l). as a first approach, aerial contamination may be ignored because of the relatively low exposure: compost is dispersed in the air essentially during manipulation, a minor route of exposure for the general population. on the other hand, this route is not negligible for the workers at compost production and manipulation sites. the ingested fraction (eda: estimated daily absorption) will hereafter be the main route of exposure considered in this review; its intake will be compared with the acceptable daily intake (adi) for each substance. exposure by ingestion may also occur through contamination of the food chain from plants and animals raised on soils treated with composts (animals bred on open fields may ingest soil up to 6% of their daily food ration) [128]. the contamination of underground water by products from amended soils may equally present a human health hazard, or an environmental nuisance when the ground water is not used for potable water. 5.1. chemical hazard. table 7 summarizes results which allow comparisons between the adis for inorganic compounds and the estimated maximum potential exposure associated with compost. the contribution of the ingestion of a mixture of soil and compost to the adi of an adult is shown to range from 1.5 × 10⁻³ to 4%; it is therefore negligible for all metals. for a normal child, the risk is higher for chromium and lead in the case of direct contact with treated public gardens and parks: the contribution of this type of soil may rise up to 1-4%. in the case of pica, four contaminants may be dangerous: for cd, the eda may amount to 23% of the adi, 40% for cr and 73% for pb. 
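the eda formula above can be worked through numerically. the sketch below neglects the aerial term for the general population, as the authors do, and combines figures quoted in the text: the maximum cadmium concentration reported in composts (11.7 ppm, i.e. µg/g), a one-tenth compost/soil dilution for parks and gardens, and the soil-ingestion estimates of about 0.1 g/day for a normal child and up to 5 g/day with pica. pairing these particular values is an illustration, not a figure tabulated by the authors.

```python
# Estimated daily absorption (EDA) as defined in the review:
# ingestion term (c x dilution x soil ingested) plus an inhalation term
# (dust concentration x c x air inspired) that is negligible here.

def eda_ug(conc_ppm, dilution, soil_ingested_g,
           dust_g_per_l=0.0, air_inspired_l=0.0):
    ingestion = conc_ppm * dilution * soil_ingested_g     # ug/day
    inhalation = dust_g_per_l * conc_ppm * air_inspired_l  # ug/day
    return ingestion + inhalation

cd_max_ppm = 11.7   # maximum cadmium reported in composts (ug per g)
dilution = 0.1      # 1/10 compost/soil ratio for parks and gardens

print(round(eda_ug(cd_max_ppm, dilution, 0.1), 3))  # normal child -> 0.117
print(round(eda_ug(cd_max_ppm, dilution, 5.0), 3))  # pica child -> 5.85
```

dividing such an eda by the relevant adi gives the percentage contributions the text reports (up to 23% of the adi for cd in the pica case); the adi values themselves are not reproduced in this excerpt, so they are not hard-coded here.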
these are extreme values estimated from the most contaminated composts, and are unusual (fig. 2). the us-epa assessed the risk associated with the use of ss, considering all the possible routes of contamination, and estimated a corresponding noael (no observed adverse effect level) for some pollutants [3]. according to the data in the literature, only pb occurs in composts at quantities that are frequently higher than the noael (59% of the samples described in the literature) (fig. 2, graph b). in view of the possible dilution in the environment, the concentrations of metals encountered in leachates do not suggest a significant risk of contamination of the environment or of drinking water. the nuisances that concern soil, fauna and flora after treatment with composts are not well known, but they are unlikely to yield important risks; only pb is suggested to have an impact on the invertebrate fauna of the soil, through which the food chain of wild life may be contaminated [98,129]. though the risks indicated in this review seem to be of little importance, it must be remembered that composts persist in the soil for several years [16,130] and that repeated applications may lead to an accumulation of pollutants [88,102]. (table 7: potential contribution of composts to the contamination of the soil, based on an estimate of 1/10 for the compost/soil volume ratio. na: information not available. maximum quantity: concentration in the most polluted soil after treatment with the most contaminated compost. excess relative to the highest %: percentage increase in the concentration of contaminants after soil amendment = (concentration in the least contaminated soil + the most contaminated compost)/(concentration in the least contaminated soil). the risk associated with the absorption of agricultural soil is not considered. adi: acceptable daily intake. noael: no observed adverse effect level.) 
the fate of pollutants in treated agricultural soil was studied bi-annually in holland for 30 years, showing an increase in the quantity of cd (3.4 times the baseline level), of as and cr (× 1.5), ni (× 3.7), and hg and pb (× 4). in spite of this, no increases were observed in the levels of cd and ni in cultivated crops; rather, a decrease in the quantity of as was observed, while cr and pb increased only by 15% in carrots, beetroot, turnip, pears and beans. however, in another 5-year study of the fate of cd and ni in soils amended annually with ss, and of their transfer to the leaves of maize plants, a strong increase in the quantities of both metals was observed (50-105% for cd and 40% for ni) in comparison with the levels measured after the first application [88]. the food chain is a potential route of human exposure, because most of the compost produced is used in agriculture [127], but it is difficult to make a global assessment of the risk to the general population since the overall agricultural land area where composts are applied is not well known. some hypotheses will be made in order to set the risk scenario: it will be estimated that the dilution of compost in field and vegetable farm soils is about one-hundredth. under this assumption, each inorganic pollutant will be reviewed. cadmium. the body burden of cd comes essentially from food, except among smokers [3,131]. cd is very persistent in the human body, since its half-life is about 17-30 years [132,133]. the consequences of intoxication with cd are primarily renal and hepatic; the bones are also a target after very high contamination (the 'itai-itai' episode in japan) [3,131,134,135]. cd found in compost may strongly contaminate food because the relationship between the quantity of cd in the soil and that in plants is linear, without a threshold [131,132]. 
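the 30-year dutch figures can be put in perspective with a back-of-the-envelope calculation. assuming, purely for illustration, a constant multiplicative build-up per year (a simplification introduced here, not a model used in the study), the 3.4-fold rise in soil cd implies an average annual factor of 3.4^(1/30):

```python
# Toy annualization of the reported 30-year cadmium build-up (3.4x the
# baseline). A constant yearly multiplier is an assumed simplification.

factor_30yr = 3.4
years = 30
annual = factor_30yr ** (1 / years)

print(round(annual, 4))           # average yearly multiplier, ~1.04
print(round(annual ** years, 2))  # compounds back to 3.4 over 30 years
```

a build-up of a few percent per year is invisible over one or two seasons, which is consistent with the review's warning that the hazard from repeated applications only emerges over decades.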
a steeper gradient is found in lettuce (60%) than in irish potatoes (3%) [131,136], showing that cd accumulates differently according to the organ of the plant (leaves > roots > fruits and grains) [3,128]. the quantity of cd in food also depends upon the use made of the food: for instance, when wheat is transformed into flour, the quantity of cd is reduced by about 46% owing to elimination with the chaff [128]. contamination of the food chain may also come through animals: grass grown on soils with high concentrations of metals may contaminate animal feed. this hazard is apparently minimal for cd, because the quantities present in composts are not sufficient to yield high amounts in plants that might accumulate in the muscles of animals. with increasing daily intake, accumulation in the body is minor: a 40-fold increase in the daily cd intake of pigs does not yield a significant increase in the quantity in the muscle [128,137]. cd normally accumulates in the offal, which represents only a small fraction of our food regime (0.3-0.5%); only populations that abundantly eat liver or kidney are seriously exposed [128]. however, there is a direct potential risk for animals grazing on grounds treated with composts: during grazing, animals ingest 3-6% of soil with the grass. an increase has been observed in the muscular tissues of animals grazing in fields amended with ss containing 8.8 ppm of cd [128]. the concentrations observed in compost are much lower (maximum 12 ppm), and the risk is therefore less than when ss is used, even after application for several decades [102]. the case of lettuce is interesting because its cd content depends very strongly on the quantity in the soil [3]. the normal level of cd in lettuce grown on normal agricultural land ranges between 50 and 60 µg/kg (fresh wt.); hence, lettuce cultivated on pure compost (a very unfavorable and very unlikely situation, except in experiments) may contain only 68 µg/kg (fresh wt.) 
[135], which is very close to the normal value. risks associated with food also depend on the frequency with which that food is consumed. a low contamination of a food item that is consumed frequently may be more deleterious than a high contamination of a food that is rarely eaten. unfortunately, though there are data on average food intakes and on the natural content of metals in food items [135], data from cd-contaminated zones are scarce [138]. lead. pb quantities in composts are apparently not high enough to represent a risk to animals through grazing. furthermore, the risk of contamination to man through the consumption of such animals is low since, as in the case of cd, pb does not accumulate in meat but in offal [137]. pb is assimilated in small fractions by plants. its penetration into grapes is poor but higher in maize and wheat. the risk of contamination of products made from these crops is reduced by the biological barriers in plants which prevent the penetration of pb into the grains [98, 139]. nickel. ni is found mainly in green vegetables [140]. there are few data on the transfer of ni to man through plants, and the existing data do not suggest any risk. some studies seem to indicate that, at the levels recovered, ni may not pose any phytotoxicity problem [98]. however, this metal is toxic to crops before attaining the toxic dose for man [3]. chromium. data on the risk of cr via food are also scarce; 60% of the cr in the food chain is found in plants and an increase in consumption might be dangerous [10]. cr accumulates mostly in the roots of some vegetables [10, 98]. mercury. hg is only slightly assimilated from the soil by higher plants (mineral mercury even less than methylated mercury) [98, 141]; on the other hand, mushrooms accumulate hg very easily [98, 142].
in france, the use of urban compost in mushroom farms has decreased from 60 to 10% in 5 years, and an afnor standard has targeted a reduction from 10 to 5%, which will further reduce the risk associated with such practice [127]. it has not been proved whether hg accumulates in animals fed on plants grown on compost-amended soils [3]. selenium. fruits and vegetables contain on average 0.1-0.6 ppm se [143]. one study on the assimilation of se by plants cultivated on msw compost-amended soil did not indicate an increase compared to the normal level [93]. paradoxically, a study of amendment with ss has shown, in several species of fruits, vegetables and cereals, that the levels obtained are well below those measured in normal food (4.8-46 ppb) [141]. the maximum concentration observed in soil does not exceed the standard values for agricultural or residential areas (table 7) [144]. arsenic. there is only a limited number of studies on the absorption of as by plants. however, available results suggest that assimilation takes place in the leaves and not through the roots [145]. as to the risk from arsenic, there is no adverse indication regarding the use of compost in agriculture. the potential risks associated with organic compounds have been assessed to a lesser degree. pah. the international reference values for total pahs range from 1 to 50 ppm for gardens or residential areas [144]. the quantities found in compost may reach 200 ppm which, for a one-tenth dilution, gives a non-negligible surplus; this increase is lower in the case of a one-hundredth dilution. these results are due to the high content of certain contaminants (phenanthrene, naphthalene, anthracene, pyrene, acenaphthylene and fluorene), but these compounds are not very toxic. of the seven pahs that are well-known carcinogens (table 5), none was found above the standard in the soil after compost was applied.
agricultural use of ss (at levels comparable to those of composts) gives only low concentrations of pah in plants, even after a very long trial [104]. while pahs penetrate into plants, they concentrate mainly in the underground teguments and only very little in the aerial parts. peeling and washing before cooking help to avoid contamination through the food chain [104]. however, the risks might increase with repeated applications. no accumulation of pahs in agricultural soil was noticed after treatment with msw composts for 3 years, but the authors of the study admit the inadequacies of the data and the necessity of carrying out experiments of longer duration [146]. as regards the risks associated with leachates of composts, 10% of the pahs are free in ss (and may thus percolate) [147]. if this fraction holds true for compost, and taking into account subsequent dilution, the chances of contamination of water are minute [147]. pcdd/f. though the who has recommended an adi of 10 pg/day/kg for 2,3,7,8-tcdd [148], the adi for complex mixtures of compounds is not known. the us-epa has calculated an acceptable exposure dose to pcdd/f for a general population of 6 × 10⁻⁹ ng/kg/day [149]. this exposure should not provoke the occurrence of more than one cancer in a population of one million persons. if one considers the least contaminated compost (0.1 ppm), and depending on what the soil is used for, the daily doses absorbed by an adult, a normal child and a child with pica are 40-100000 times the recommended dose. however, this result should be considered with care because there are only very few measurements of dioxins and furans in composts and it is not possible to assess whether the samples were representative. these compounds are sparingly soluble in water and are strongly adsorbed onto dust and soils. therefore, they accumulate but have low availability [104].
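the dose comparison above has a simple shape: dose = concentration × amount ingested / body weight, compared against the acceptable exposure. the sketch below reproduces that shape only; the review's 40-100000× range rests on exposure assumptions it does not fully state, so the inputs here are placeholders.

```python
# generic exposure-ratio sketch for ingestion of contaminated soil/compost.
# inputs are placeholders, not a reproduction of the review's pcdd/f figures.

def daily_dose_ng_kg(conc_ng_per_g, ingestion_g_per_day, body_weight_kg):
    """absorbed dose in ng/kg/day, assuming complete absorption."""
    return conc_ng_per_g * ingestion_g_per_day / body_weight_kg

def exposure_ratio(dose_ng_kg_day, limit_ng_kg_day):
    """how many times the acceptable exposure the dose represents."""
    return dose_ng_kg_day / limit_ng_kg_day
```

because ingestion scales linearly with both concentration and the amount of soil eaten, a pica child (who may eat grams of soil per day) sits orders of magnitude above a normal child in the same scenario, which is why the review's range spans four decades.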
yet, a literature review shows that several surveys reported leaf assimilation of pcdd/f by plants due to their dispersion in air [150]. the transfer of pcdd/f by animals has been reported when grazing animals ingest dispersed contaminated ss [150]. finally, pcdd/fs are not easily bioavailable and, therefore, are not likely to concentrate in leachates. pcb. the maximum quantities recommended in the soil range from 0.05 ppm to 0.5 ppm [144]. the concentrations observed in composts (0.0007-5 ppm), with a dilution of one-tenth, might lead to an increase of the amounts in the soil above standard values and represent a potential risk in the case of ingestion of soil, especially for children. risks through crops are weaker. after dispersion of ss with a pcb level of 52 ppm in a trial including several crop species, it was recovered only from carrots [104]. on the other hand, fields amended with ss containing an average of 292 ppm pcb might represent a risk to farm animals through accumulation of the chemical [151]. it should be noted that pcbs are very persistent products and measures should be taken to control long-term uses. pesticides. the maximum acceptable quantities for pesticides such as aldrin, dieldrin or ddt for agricultural purposes without risk are 0.1 ppm for the first two and 0.75 ppm for ddt [104]. these values are higher than those found in composts. the contaminants often concentrate in the external teguments and peeling might reduce the risk [104]. plants possess metabolic pathways which, over a certain length of time, eliminate pesticides from the tissues [104]. diazinon, isofenphos, chlorpyrifos and pendimethalin (non-organochlorinated) are less persistent and disappear during composting [64]. ch. the non-volatile ch accumulate only slightly in crops grown on compost-amended soil, which eliminates the danger inherent in the consumption of these crops [152, 153].
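the pcb dilution argument above can be made concrete: at a one-tenth dilution, the compost alone contributes up to a tenth of its own concentration to the soil, which for the most contaminated composts (5 ppm) already reaches the upper end of the 0.05-0.5 ppm standards cited. the functions below are an illustrative sketch, not the review's calculation.

```python
# sketch of the pcb dilution argument; function names are mine.

def pcb_soil_increase(compost_ppm, dilution=0.1):
    """added soil concentration contributed by the compost alone,
    at the one-tenth dilution discussed in the text."""
    return dilution * compost_ppm

def within_standard(total_soil_ppm, low=0.05, high=0.5):
    """compare against both ends of the 0.05-0.5 ppm recommended maxima;
    returns (meets strict limit, meets lenient limit)."""
    return total_soil_ppm <= low, total_soil_ppm <= high

# the worst cited compost (5 ppm) contributes 0.5 ppm on its own:
# above the strict 0.05 ppm limit, at the edge of the lenient 0.5 ppm one
```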
by contrast, the volatile ones (trichloromethane, chloroethylene) may pose a risk, since labeled c14 indicated a non-negligible foliar assimilation [104]. unfortunately, these works concern ss and there is no information on the volatilization of volatile organic compounds (voc) from msw compost during use [43]. it should be lower than during the application of ss, since there are many occasions for loss during the process of composting (heat, turning of the windrow, duration of composting). authors have shown that at the composting site, the highest levels of vocs are found near fresh wastes [43]. there are two routes of exposure to pathogens or toxins: ingestion of a mixture of soil/compost and inhalation of microorganisms dispersed in the air during manipulation of composts. microorganisms present in composts do not seem to compete with those in the soil. therefore, spreading of composts is apparently without biological risk to the environment [119]. risks associated with ingestion of microorganisms. different authors and institutions have proposed standards for the microbial quality of composts using indicators of contamination as an index. this issue still provokes scientific controversy in terms of relevance [61, 111]. the following have been proposed as limiting values: 5 × 10³ faecal streptococci/g, 5 × 10' enterobacteria/g, absence of salmonella in 100 g, and absence of eggs of parasites [61]. salmonella strains are rarely present in msw compost but more often in compost of ss [125], while eggs of ascaris are absent. with regard to faecal streptococci (table 6), seven of the 16 studies screened in the literature showed concentrations higher than the recommended values. however, these results are inconsistent and the observations were made under different processes of composting. the slow methods of composting (such as turning of compost in windrows) are less efficient in the sanitization of composts [61, 62].
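the proposed limit values above amount to a simple compliance check. the sketch below encodes the thresholds quoted in the text (faecal streptococci limit, absence of salmonella and of parasite eggs); the enterobacteria limit is omitted because its exponent is unreadable in this copy, and the dictionary-based representation is my own choice.

```python
# one of several proposed microbial-quality criteria for composts,
# as quoted in the text [61]; key names are mine.

LIMITS = {
    "faecal_streptococci_per_g": 5e3,
    "salmonella_per_100g": 0,        # absence required
    "parasite_eggs_per_100g": 0,     # absence required
}

def compost_passes(measured):
    """true if every measured indicator is at or below its limit."""
    return all(measured.get(key, 0) <= limit for key, limit in LIMITS.items())
```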
during storage or after application of composts, pathogens may disappear more or less rapidly. the survival of viruses depends on humidity, temperature and the type of strain [122, 154]. they do not seem to concentrate in leachates [122]. some authors reported that it is possible for viruses to penetrate into the plant through the roots and to migrate into the stem [122]. the survival of enteric bacteria is also influenced by humidity and temperature. in the soil, they survive longer in the saturated zone. in one experiment, though no bacterial indicator was detected in the soil before amendment with ss, it took 7 months to eliminate the effect of 6 × 10⁸ streptococcus/g, 1.3 × 10⁸ total coliforms/g and 0.37 × 10' faecal coliforms/g [80, 122]. during this time span, the paradoxical phenomenon of recolonization of mixtures of soil and compost may take place, especially after rain. hence, a compost with a normal rate of indicators may undergo a sudden increase in their concentration (and reach values above the proposed norms) following changes in environmental conditions [155]. the survival of parasites is the longest. eggs of ascaris may persist 3 years during storage of sludge and up to 78-107 days in the soil [59, 122]. however, the us-epa has shown that, after 5 years of soil amendment with ss contaminated by parasites, toxocara was isolated in 13% of the samples but no ascaris was found [80]. pathogens may be ingested by children through hand-mouth contact, which is an important route of exposure in cases of geophagy. the composts observed in this review did not contain sufficient levels of salmonella to represent a risk, and parasites were rarely found. the infective dose of e. coli is about 10⁶ [25], which should not be reached after ingesting a mixture of soil and compost. the infective dose of streptococcus (10'), however, may be attained in cases of pica. aflatoxin is produced mainly by aspergillus flavus and a. parasiticus.
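the infective-dose reasoning above is a one-line multiplication: organisms ingested equal the concentration per gram times the grams of soil/compost eaten. the ingestion amounts in the example are my assumptions for illustration; only the e. coli infective dose comes from the text.

```python
# sketch of the infective-dose comparison; example inputs are illustrative.

E_COLI_INFECTIVE_DOSE = 1e6   # cited in the review [25]

def organisms_ingested(conc_per_g, grams_eaten):
    """total organisms swallowed with a given mass of soil/compost mix."""
    return conc_per_g * grams_eaten

# even a pica-level intake of 10 g of a mix at 1e4 organisms/g yields 1e5
# organisms, an order of magnitude below the e. coli infective dose
dose = organisms_ingested(1e4, 10.0)
```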
this carcinogenic toxin is hazardous when ingested [156, 157]. the impact of aflatoxin is difficult to evaluate because there is only qualitative information on its presence in composts. this route of contamination may occur during utilization of composts, which causes air dispersal of the microorganisms responsible for the process of composting. these are different from the faecal microorganisms, which come from a contamination related to the nature of the composted material. apparently, respiratory infection caused by enteric pathogens during air dispersion of composts is negligible. studies on workers at sewage treatment plants (a highly exposed group) have never shown evidence of disease due to faecal pathogens through the inhalation route [108, 158]. among the most frequently studied organisms, aspergillus fumigatus remains a controversial subject. the amounts measured in the air are often high, but serological studies carried out on exposed workers did not indicate the presence of circulating antigens [156, 159]. the infective dose of this fungus has not been assessed, but it was shown to be hazardous only to susceptible persons who are hypersensitive or immunodepressed [57, 112, 113, 117, 157]. by contrast, evidence of higher risks has been shown after exposure to dust containing gram-negative bacteria. epidemiological studies carried out among workers in a msw sorting factory, in wastewater treatment plants, on farms, in mushroom farms or in a composting plant have shown that symptoms of headache, diarrhea or eye problems are more frequent during massive exposure to factory dust [126, 159-161]. gram-negative bacteria are pathogenic because of the endotoxin they produce. the activity of the endotoxin does not depend on the integrity of the bacterial cell because it relates to the lipopolysaccharides present in the wall. a fragment of the wall is as dangerous as the whole bacterial cell [115, 162].
a safety level of 1000/m³ has been proposed for gram-negative bacteria [12], though some authors claimed that this concentration could provoke an allergic reaction [118]. where endotoxins were measured, several air measurements around composts (10-14 ng/m³) were comparable to the limit proposed by several authors as a level without effect [115, 162]. the risk associated with actinomycetes is well known to workers in mushroom farms and is referred to as 'mushroom farmer's lung' [114, 159]. those germs also cause 'farmer's lung' disease [163]. a massive (10⁸/m³) and sudden exposure to these bacteria during utilization of composts may initiate sensitization and an allergic reaction [118, 163], with circulating antibodies being measurable [159]. it is therefore plausible that sensitized individuals develop allergic reactions to actinomycetes following exposure to mature compost. some atmospheric measurements during mature compost handling at a composting plant gave results similar to those where allergic reactions were observed among exposed workers [126]. as an example, a 52-year-old urban planner was reported to have developed a pulmonary problem 12 h after he had handled farmyard waste composts. retrospective reconstitution of his exposure provided the following data in the air: 1.4 × 10⁶-4.7 × 10' cfu/m³ of fungi, 6.3 × 10⁵-7.7 × 10⁸ cfu/m³ of bacteria (with gram-negative bacteria in the majority) and 1.3 × 10'-3.7 × 10' spores/m³ [164]. no hazardous yeast strain was found in this review. one of the most studied yeasts, candida albicans, was never encountered. extrapolation of these results, obtained in the context of occupational exposure, to the general population using compost should be done with caution. measurements taken at composting sites are the only data available. hence, the concentrations of organisms there are not necessarily representative of those obtained under natural conditions of use.
the amounts measured at the plant site represent a mixture of emanations from the different stages of composting. at the beginning of composting, one essentially observes bacterial populations [165], which are subsequently replaced by fungal populations [53]. a study comparing mature compost with composts at the starting point has shown a big decline in gram-negative bacteria between the two stages. by contrast, the populations of actinomycetes remained constant [12]. to our knowledge, air measurements during application of composts have not been made. therefore, other studies should be carried out in order to assess this risk more precisely. the hazard associated with chemical contamination of the food chain during agricultural use of composts seems very low. however, application of compost by individuals or during the amendment of public fields (parks, playgrounds) might pose a risk to health, and these applications are likely to develop in the future. the risks discussed in this review have been assessed using extreme concentrations of contaminants in composts: levels that were found in rare circumstances. the most prominent risks are associated with hand-mouth contact and ingestion by children. a child with geophagy might ingest, under such hypotheses, 730% of the total admissible daily intake (adi) of lead, and 400% and 23% of the adi for chromium and cadmium, respectively. for a normal child, ingestion of compost poses a risk for lead and, to a lesser extent, for chromium. the same route of exposure might incur a significant hazard with pcdd/f and pcb, but the data are too scarce to draw conclusions. repeated application of composts may cause accumulation of contaminants in the soil. this can be prevented by appropriate msw management policies and by the extension of selective collection of msw, which should contribute to a reasonable reduction in the contamination of composts, hence reducing the risks.
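the 730% / 400% / 23% figures above are ratios of ingested metal to the adi. the sketch below reproduces the shape of that calculation; the concentration, ingestion rate and adi used in the example are placeholders, not the review's (unstated) inputs.

```python
# shape of the %adi calculation behind the geophagy figures above;
# all numeric inputs in the example are hypothetical.

def percent_of_adi(conc_mg_per_g, grams_ingested_per_day, adi_mg_per_day):
    """daily metal intake from soil/compost ingestion as a percentage
    of the admissible daily intake."""
    return 100.0 * conc_mg_per_g * grams_ingested_per_day / adi_mg_per_day

# example: 0.001 mg/g of a metal, 10 g/day geophagy, adi 0.005 mg/day
ratio = percent_of_adi(0.001, 10.0, 0.005)   # ~200% of the adi
```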
the microbiological hazard arising from faecal contamination is apparently modest, although direct intake of soil contaminated by the faecal streptococci present in compost might represent a potential danger. exposure to the organisms responsible for composting is difficult to control. some of the fungi and bacteria are direct pathogens or can act through their toxins. the manipulation of composts triggers their aerial dispersion, which can lead to their inhalation, as shown among workers in composting plants or in mushroom farms (fresh compost). it is presently difficult to extrapolate these results to populations that do not produce but use msw composts. future studies aimed at assessing the risk associated with inhalation of compost dust should also take into account the chemical hazard posed by molecules adsorbed onto the dust. such a hazard has never been described to date. the high dilution of composts and of their pollutants throughout the environment (water, air, soil) and the discontinuous exposure of the population create a low risk for use of msw composts by the public. the health risk associated with composting seems to be occupational. this work was supported by the ademe (agence de l'environnement et de la maîtrise de l'énergie), the french environment agency, and has been developed by the gridec (groupe de recherche interdisciplinaire sur les déchets, grenoble, france). eur 14254 en model for humus in soil and sediments chemical properties of soils amended with compost of urban waste physical and chemical properties of soil as affected by municipal solid waste compost application plant quality and soil residual fertility six years after a compost treatment understanding the process municipal solid waste composting: physical and biological processing composting sewage sludge: basic principles and opportunities in the uk windrow composting of agricultural and municipal wastes source separation and collection in germany proton binding to humic substances. 1.
electrostatic effects chemistry of metal retention by soil. environ iarc monographs on the evaluation of the carcinogenic risk of chemicals to humans objectives for the development of composting in france: a strategic approach transfer of inorganic pollution by composts, in composting and compost quality assurance criteria. commission of the european communities publisher, eur 14254 en the impact of separation on heavy metal contaminants in municipal solid waste composts cost consideration of municipal solid waste compost: production versus market price factors influencing the agronomic value of city refuse composts quality of urban waste compost related to the various composting processes criteria of quality of city refuse compost based on the stability of its organic fraction assuring compost quality: suggestions for facility managers, regulators and researchers some environmental problems connected with the use of town refuse compost sources and fates of lead and cadmium in municipal solid waste plastics from household waste as a source of heavy metal pollution. an inventory study using inaa as the analytical technique asbestos in yard or sludge composts from the same community as a function of time-of-waste-collection organic chemicals in compost: how relevant are they for the use of it? in composting and compost quality assurance criteria. commission of the european communities publisher, eur 14254 en fate of organic contaminants during sewage sludge composting clean compost production organische schadstoffe in siedlungsabfällen: herkunft, gehalt und umsetzung in böden und pflanzen.
toxic organic compounds in municipal waste material: origin, contents and turnover in soils and plants level and source of pcdds, pcdfs, cps, and cbzs in compost from a municipal yard waste composting facility potential emissions of synthetic vocs from msw composting recycling at msw composting parameters for sorting/composting of municipal solid wastes, in composting and compost quality assurance criteria. commission of the european communities publisher, eur 14254 en sorting/composting of domestic waste technical brochure on administration of water resources pollution and risk prevention separate collection of compostables diaper industry workshop identifies research needs to minimize environmental impacts the composting process: susceptible feedstock, temperature, microbiology, sanitisation and decomposing technological aspects of composting including modeling and microbiology composting process design criteria. part i: feed conditioning definition of compost quality: a need of environmental protection compost processes in waste management, commission of the european communities publisher composting process design criteria kinetik der inaktivierung von salmonellen bei der thermischen desinfektion von flüssigmist survival of plant pathogens and weed seeds during anaerobic digestion principles of composting leading to maximization of decomposition rate, odor control, and cost effectiveness, in composting of agricultural and other wastes relationship amongst organic matter content, heavy metal concentrations, earthworm activity, and soil microfabric on a sewage sludge disposal site microbiological specification of disinfected compost comparative survival of pathogenic indicators in windrow and static pile phytotoxins during the stabilization of organic matter degradation of diazinon, chlorpyrifos, isofenphos and pendimethalin in grass and compost humification index (hi) as an evaluation of the stabilization degree during composting organic fertilizer and
humification in soil characterization of humified substances in organic fertilizers by means of analytical electrofocusing (ef): a first approach change in organic matter during stabilization of compost from municipal solid waste experimentation of three curing and maturing processes of fine urban fresh compost on open air areas. a study carried out and financed on the initiative of the county council of côtes du nord, france evaluating garbage compost. part ii chemical properties of municipal solid waste composts parasitological study of waste-water sludge compost stability phytotoxicity suppression in urban organic wastes evaluation of heavy metals during stabilization of organic matter in compost produced with municipal solid wastes the influence of composting and maturation processes on the heavy metal extractability from some organic wastes how composting affects heavy metal content hazards from pathogenic microorganisms in land-disposed sewage sludge a methodology for establishing phytotoxicity criteria for chromium, copper, nickel and zinc in agricultural land application of municipal sewage sludge how much soil do young children ingest: an epidemiologic study the development of assessment and remediation guidelines for contaminated soils, a review of the science factors affecting ammonia volatilization from sewage sludge applied to soil in a laboratory study survey of toxicants and nutrients in composted waste materials environmental impact of yard waste compost mobility and extraction of heavy metals from sewage sludge effet de l'utilisation de boues urbaines en essai de longue durée: accumulation des métaux par les végétaux supérieurs incidence de l'épandage des boues urbaines sur l'apport de chrome alimentaire speciation of heavy metals in sewage sludge and sludge-amended soil chemical characteristics of leachate from refuse-sludge compost leaching of heavy metals from composted sewage sludge as a function of ph cadmium and selenium absorption by swiss
chard grown in potted composted materials fate of trace metals in sewage sludge compost cd and zn phytoavailability of a field-stabilized sludge-treated soil study of the organic matter and leaching process from municipal treatment sludge compost: brown gold or toxic trouble? trace elements in municipal solid waste composts: a review of potential detrimental effects on plants, soil biota and water quality chemical fractionation and plant uptake of heavy metals in soils amended with co-composted sewage sludge evaluation of heavy metals bioavailability in compost-treated soils effect of using urban composts as manure on soil contents of some nutrients and heavy metals results of municipal waste compost research over more than fifty years at the institute for soil fertility at haren/groningen, the netherlands guide for identifying cleanup alternatives at hazardous waste sites and spills: biological treatment bioavailability to plants of sludge-borne toxic organics identification of free organic chemicals in composted municipal refuse leaching from land disposed compost municipal compost: 1.
organic matter bacterial and fungal atmospheric contamination at refuse composting plants: a preliminary study health and safety aspects of compost preparation and use occurrence, growth and suppression of salmonellae in composted sewage sludge hygienic quality of sewage sludge compost survival of fecal indicator micro-organisms in refuse/sludge composting using the aerated static pile system the aspergillus fumigatus debate: potential human health concerns levels of aspergillus fumigatus in air and in compost at a sewage sludge composting site mushroom worker's lung: serologic reactions to thermophilic actinomycetes present in the air of compost tunnels airborne endotoxins: an association with occupational lung disease levels of gram-negative bacteria, aspergillus fumigatus, dust and endotoxin at compost plants dispersal of aspergillus fumigatus from sewage sludge compost piles subjected to mechanical agitation in open air airborne microorganisms associated with domestic waste composting microbiological characterization of four composted urban refuses yeast microflora evolution during anaerobic digestion and composting of urban waste quantitative assessment of factors affecting the recovery of indigenous and released thermophilic bacteria from compost survival of pathogenic micro-organisms and parasites in excreta, manure and sewage sludge identification of thermophilic bacteria in solid-waste composting umweltrelevante schadstoffe in müllkomposten determination of pathogen levels in sludge products clinical and immunological findings in workers exposed to sewage dust gestion de la matière organique, f. dubosc cadmium: a complex environmental problem. part ii: cadmium in sludge used as fertilizer effect of cadmium on the biota: influence of environmental factors long-term effects of quality-compost treatment on soil controlling cadmium in the human food chain: a review and rationale based on health effects toxicologie et sécurité des aliments.
technique et documentation occupational and community exposure to toxic metals: lead, cadmium, mercury and arsenic cadmium in the environment and human health: an overview plomb, cadmium et mercure. rapport du conseil supérieur d'hygiène publique de france, section alimentation cadmium uptake and distribution in three cultivars of lactuca sp translocation of lead and cadmium from feed to edible tissues of swine table de composition des aliments biochemistry and measurement of environmental lead intoxication toxicologie et hygiène industrielles, ii: les dérivés minéraux. techniques et documentation mercury and selenium content and chemical form in vegetable crops grown on sludge-amended soil bioaccumulation of hg in the mushroom pleurotus ostreatus selenium in the environment première approche pour l'évaluation de la pollution d'un site d'ancienne usine à gaz: utilisation de valeurs guides de différents pays atmospheric deposition of trace elements around point sources and human health risk assessment ii: uptake of arsenic and chromium by vegetables grown near a wood preservation factory determination of polynuclear aromatic compounds in composted municipal refuse and compost-amended soils by a simple clean-up procedure estimation of the environmental hazard of organochlorines in pulp mill biosludge used as soil fertilizer assessment of health hazards associated with exposure to dioxins.
chemosphere environmental toxicology of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans the influence of sewage sludge applications to agricultural land on human exposure to polychlorinated dibenzo-p-dioxins (pcdds) and -furans (pcdfs) polychlorinated biphenyls in digested uk sewage sludge sorption and degradation of pentachlorophenol in sludge-amended soils plant uptake of pentachlorophenol from sludge-amended soils f-specific coliphages in disposable diapers and landfill leachates survival of indicator organisms in sonoran desert soil amended with sewage sludge biological health risks associated with the composting of wastewater treatment plant sludge health risks of composting: a critique of the article 'biological health risks associated with the composting of wastewater treatment plant sludge' coliforms in aerosols generated by a municipal solid waste recovery system circulating antibodies against thermophilic actinomycetes in farmers and mushroom workers occupational symptoms among compost workers respiratory impairment among workers in a garbage-handling plant biological health risks associated with resource recovery, sorting of recycled waste and composting un risque respiratoire nouveau: les stations d'épuration et les installations de compostage organic dust exposures from compost handling: case presentation and respiratory exposure assessment actinomycetes as agents of biodegradation in the environment: a review produção de fertilizante orgânico por compostagem do lodo gerado por estações de tratamento de esgotos chemical toxicity of metals and metalloids composition of toxicants and other constituents in yard or sludge composts from the same community as a function of time of waste collection leaching from land disposed compost municipal compost: 3 inorganic ions fertilizing value and heavy metal load of some composts from urban refuse heavy metal levels and their toxicity in composts from athens household refuse chemical and
physico-chemical characterization of vermicomposts and their humic acid fractions effect of the application of municipal refuse compost on the physical and chemical properties of a soil the flip side of compost: what's in it, where to use it and why changes in atp content, enzyme activity and inorganic nitrogen species during composting of organic wastes hygienische untersuchungen an einzelbetrieblichen anlagen sowie einer großtechnischen anlage zur entseuchung von flüssigmist durch aerob-thermophile behandlung. forum städte-hygiene anaerobic composting saves waste from landfill sludge composting maintains growth comment on: 'acid digestion for sediments, sludge, soils and solid wastes. a proposed alternative to epa sw 846 method 3050' when is compost 'safe' évaluation de certains additifs alimentaires et contaminants, 41e rapport du comité mixte fao/oms d'experts des additifs alimentaires copper and zinc concentrations in edible vegetables grown in tarragona province key: cord-009417-458rrhcm title: use of blood components in the intensive care unit date: 2009-05-15 journal: critical care medicine doi: 10.1016/b978-032304841-5.50082-0 sha: doc_id: 9417 cord_uid: 458rrhcm most patients admitted to an intensive care unit (icu) require the administration of one or more blood components during their stay. such patients exhibit great diversity in the conditions necessitating care in the icu, age, underlying medical problems, and integrity of physiologic compensatory mechanisms. all these patients, however, share the need for optimized oxygen-carrying capacity and tissue perfusion. ongoing blood loss resulting from injuries, surgical wounds, invasive monitoring equipment, and blood sampling requirements, coupled with inadequate marrow function and, in some, red cell destruction, makes red cell transfusion a necessity for many icu patients.
additionally, many patients are susceptible to the development of hemostatic disorders requiring the administration of such blood components as plasma, cryoprecipitate, or platelet concentrates. blood components should be considered drugs because they exert potent therapeutic responses yet are also capable of causing significant adverse effects. the food and drug administration (fda) regulates blood component preparation, testing, and administration. 1 unlike pharmaceutical agents, however, blood components have fewer objective indications for use and no therapeutic index relating dose to safety. it is not as simple to monitor the efficacy and continuing need for a blood component as it is to determine the blood level of a drug. in addition, the risks associated with transfusion cannot be known in advance and may be lethal; such risks include medical errors, as well as infectious and immunologic hazards. unlike pharmaceutical agents, these prescribed products require documentation of patient consent and indication for use. although the american blood supply is now safer than ever before, zero-risk transfusion is not achievable, even if blood components could be sterilized. the process of donor selection and screening has become increasingly stringent, an evolution that began in response to the well-defined risks of transfusion-transmitted hepatitis and human immunodeficiency virus (hiv) infection. although the value of maximizing recipient safety is unarguable, increasing donor selectivity has its price. as more tests are added and more conditions placed on the donor, the number of usable donations has declined. this trend has led to occasional regional and seasonal blood shortages and, rarely, outright inability to provide certain blood components.
clinicians who prescribe blood components must be aware of these uncertainties in availability and contribute by using blood products appropriately while the national blood banking system seeks strategies to ensure an adequate, safe blood supply. donor screening strategies to ensure recipient safety take several forms. 1, 2 american blood donors are voluntary donors; cash payment was eliminated in the 1970s after studies linked professional donors with transmission of hepatitis. confidential questionnaires were initiated to limit transmission of hiv and hepatitis and to allow voluntary self-exclusion and involuntary exclusion of donors who pose an increased risk of transmitting infectious agents. multiple specific serologic and biochemical tests are performed to detect the potential for transmission of hiv and other retroviruses, hepatitis, and syphilis. any donor who indicates high-risk behavior or who tests repeatedly positive is placed on a permanent deferral list. some patients may insist on blood obtained from relatives or friends. this practice is termed directed or designated donation. these selected donors must undergo the same rigorous questioning and testing as volunteer donors. some studies have found an increased frequency of hepatitis markers in the blood of directed donors when compared with blood drawn from unselected volunteers, but others suggest that designated donors may be no different from new volunteers. 3, 4 there continues to be no consensus about whether directed donors are, as a group, as safe as volunteer donors. 5, 6 institutional policies about the acceptability and processing of directed donations vary widely. in any case, supporting icu patients who require large-volume transfusion with directed donations is unlikely to be advantageous or practical. the basic principle of blood component therapy is prescription of the specific blood product needed to meet the patient's requirement.
a single whole blood (wb) donation can be separated into its composite parts, or components, which can be distributed to several recipients with differing physiologic needs. component therapy thus meets the clinical requirements of increased safety, efficacy, and conservation of limited resources. as the variety of blood product components increases, however, the complexity of transfusion medicine also increases. a wb donation is typically separated into red blood cells (rbcs), a platelet concentrate, and fresh frozen plasma (ffp) within hours of its collection. the plasma may be further processed into cryoprecipitate and supernatant (cryopoor) plasma. one unit of wb measures approximately 500 ml, including 63 ml of citrate anticoagulant/preservative solution. each unit of wb supplies about 200 ml of rbcs and 300 ml of plasma for volume replacement. wb is refrigerated for 21 to 35 days, depending on the preservative used. after less than 24 hours of refrigerated storage in this preservative and bag system, platelet and granulocyte function is lost. with further storage, levels of the "labile" coagulation factors v and viii decrease. 7 some blood centers offer modified wb, which is produced by removal of the platelet or cryoprecipitate fraction and return of the supernatant plasma to the red cells. this permits provision of the more labile components to patients with specific needs, with the remainder forming a product having a composition essentially the same as cold-stored wb. however, the growing need for specialized blood components has resulted in processing the majority of blood donations into components, thus limiting the availability of wb and modified wb. rbcs, or in common usage, "packed" red cells (prbcs), are the blood component most commonly transfused to increase red cell mass. prbcs are derived from the centrifugation or sedimentation of wb and removal of most of the plasma/anticoagulant solution.
if collected into citrate-phosphate-dextrose-adenine solution, the volume is approximately 250 ml, the hematocrit (hct) is 70% to 80%, and the storage life is 35 days. extended additive solutions permit storage up to 42 days but increase the volume to 300 ml and decrease the hct to 60%. these extended storage units are commonly used and easier to transfuse because of lower viscosity, but they may pose a problem because of their larger volume. the transfusion of leukocyte-reduced rbcs may benefit certain patients. transfusion of blood components containing leukocytes may lead to febrile reactions, a greater propensity for alloimmunization, platelet alloimmunization, and transmission of pathogens carried by leukocytes, such as cytomegalovirus (cmv). leukocyte reduction, as defined by the fda, requires filtration of the blood component by a special filter. 1 filtration may be performed either at the time of blood donation and processing or later at the time of transfusion ("bedside filtration"). filtration before storage conveys the benefit of removing white blood cells (wbcs) before they can deteriorate and elaborate cytokines and other unwanted substances during storage. 8 because of proven and theoretical benefits of leukocyte reduction of blood components (discussed later in the section covering the adverse effects of transfusion), many european countries and canada require that all transfusions be leukocyte reduced, a process called universal leukoreduction (ulr). some institutions in the united states have also made that decision, but either method of leukocyte reduction adds significantly to the cost of each transfusion ($25 to $30), and the benefits of this measure when applied globally have yet to be quantified. 9 washing prbcs involves recentrifuging to remove the plasma/preservative solution from the unit. however, washing may take an hour or more, limits subsequent storage time, and causes some loss of rbcs.
washing is also not an effective method of leukoreduction. there are very few indications for the use of washed rbcs, although some recipients with plasma reactions may benefit. prbcs can be frozen in cryoprotective solution and stored for extended periods. frozen rbcs are generally limited to units of special value, such as those with a rare rbc antigen profile or autologous blood donations that need to be stored for future use. a rare-donor registry of frozen prbcs exists to assist in providing blood to patients with complex or multiple alloantibodies to red cell antigens. significant advance planning is necessary to acquire and thaw frozen prbcs for transfusion, thus limiting their use in acute situations. wb and prbcs suffer some cell loss during storage. the current technology of bag and preservative solutions attempts to optimize cell quality and quantity by using strict criteria to determine the length of allowable storage time. nonetheless, as red cell metabolism decreases progressively, a "storage lesion" results, 10 with accumulation of a variety of undesirable substances and loss of cellular function. over time in storage, a slow rise in the concentration of potassium, lactate, aspartate aminotransferase, lactate dehydrogenase, ammonia, phosphate, and free hemoglobin and a slow decrease in ph and bicarbonate concentration occur. cytokines and inflammatory mediators such as interleukin-1, interleukin-6 and tumor necrosis factor also accumulate. the ph of freshly stored blood in citrate solution is 7.16, which declines to approximately 6.73 at the end of the unit's shelf life. as potassium leaks from red cells during storage, levels as high as 25 meq/l may result. however, each unit transfused supplies at most 7 meq of potassium, which is well tolerated under most circumstances. during the storage period there is also a progressive decrease in rbc-associated 2,3-diphosphoglycerate (2,3-dpg) and adenosine triphosphate (atp).
10 a decrease in 2,3-dpg increases the affinity of hemoglobin for oxygen, which shifts the oxygen dissociation curve to the left and decreases oxygen delivery to tissues. there is little evidence, however, that this transient increase in oxygen affinity has clinical importance. after infusion, 2,3-dpg gradually increases as the transfused red cells circulate, with 25% recovery in 8 hours and full replacement by 24 hours. 11 decreased atp during storage diminishes the viability of red cells after transfusion and is one of the chief factors limiting storage time. there is no currently available storage or rejuvenation solution that optimizes these cellular constituents. the majority of blood transfusions are in the form of prbcs, the component indicated for normovolemic patients or those for whom intravascular volume constraints are necessary. the use of wb may be desirable for patients who require both increased oxygen-carrying capacity and volume resuscitation because of a large and ongoing hemorrhage; however, the availability of wb is generally limited. resuscitation is effectively achieved with the use of prbcs and crystalloid solutions. each unit of prbcs or wb is expected to raise the hemoglobin level by 1 g/dl and the hct by 3% in stable, nonbleeding, average-sized adults. although some studies have demonstrated a slight superiority of fresh wb over components when used during cardiac surgery in selected patients, 12 the benefits of fresh blood remain controversial, and current testing and processing requirements limit general availability. despite a long tradition of transfusion of rbcs in critically ill patients, the precise indications for transfusion remain a source of controversy, and specific transfusion practices may vary widely among clinicians.
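the rule of thumb above (one unit of prbcs or wb raises hemoglobin by about 1 g/dl in a stable, nonbleeding, average-sized adult) can be turned into a quick back-of-the-envelope estimate. the python sketch below is illustrative only: the function name and the fixed 1 g/dl-per-unit increment are assumptions, and the estimate does not apply during active bleeding or at extremes of body size.

```python
import math


def units_needed(current_hb_g_dl, target_hb_g_dl, rise_per_unit=1.0):
    """Estimate prbc units to reach a target hemoglobin.

    Uses the chapter's rule of thumb (~1 g/dl rise per unit in a stable,
    nonbleeding, average-sized adult). Illustrative sketch, not dosing advice.
    """
    deficit = target_hb_g_dl - current_hb_g_dl
    if deficit <= 0:
        return 0  # already at or above target
    return math.ceil(deficit / rise_per_unit)


# e.g. hemoglobin 6.8 g/dl, target 9.0 g/dl -> 3 units under the 1 g/dl assumption
print(units_needed(6.8, 9.0))
```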
before the major randomized studies of rbc transfusion policies, a survey of transfusion practice showed that about half of icu patients were receiving red cell transfusions, 13 and another showed that if the icu stay was longer than a week, the rate of transfusion was 85%. 14 the total number of transfusions was high, and icu practice was characterized by high rates of transfusion. 15 the reasons for the controversies are clear: rbcs should be transfused only to enhance tissue oxygen delivery, but the underlying physiology of anemia, the complex adaptations to anemia, and the potential advantages and disadvantages to particular groups of patients are not as well understood. compensatory mechanisms for acute and chronic anemia are diverse and complex. 16, 17 all work in concert to maintain oxygenation within the microcirculation. cardiovascular adjustments leading to increased cardiac output include decreased afterload and increased preload resulting from changes in vascular tone, increased myocardial contractility, and elevated heart rate. lowered blood viscosity permits improved flow of erythrocytes within capillaries. blood flow is redistributed to favor critical organs with higher oxygen extraction. pulmonary mechanisms, though contributing relatively little to short-term oxygenation demands, exert potent effects on related metabolic variables. finally, the hemoglobin molecule can undergo biochemical and conformational changes to enhance unloading of oxygen at the capillary level. all these mechanisms contribute to an "oxygen reserve" capacity that exceeds baseline requirements by approximately fourfold. 16 no experimental model exists that encompasses the diversity of physiologic compensations for hypoxia. experiments carried out in animals and case reports in patients refusing transfusion indicate that an extremely low hct is tolerated if tissue perfusion is adequate.
[18] [19] [20] certain objective, though indirect, measurements of tissue oxygenation exist and are available to clinicians caring for patients monitored invasively in the icu. mixed venous oxygen content (pvo2) and cardiac output can be measured in patients undergoing pulmonary artery catheterization; arterial oxygen content can also be measured directly. the oxygen extraction ratio (er) can be calculated directly, and in the presence of normal or high cardiac output it is a measure of tissue oxygen extraction and, indirectly, the adequacy of tissue oxygen delivery. the total body er at baseline is about 25%. a falling pvo2 and an er increasing to greater than 50% have been proposed as indicators of the need for red cell transfusion. 21 there have been only 10 randomized trials of transfusion policy in the icu, and only 1 of them was large enough to draw specific, statistically significant conclusions. 22 the canadian critical care trials group compared a liberal (target hemoglobin, 10 to 12 g/dl) with a restrictive (target hemoglobin, 7 to 9 g/dl) red cell transfusion policy in patients stratified for disease severity. at 30 days from randomization, the restrictive strategy was at least as good as, if not better than (p = .11), the liberal strategy, and overall hospital mortality was significantly lower in the restrictive strategy group (p = .05). for patients younger than 55 years and for patients with lower (<20) apache (acute physiology, age, and chronic health evaluation) ii scores, the restrictive strategy was clearly superior. in addition, liberal transfusion was not associated with shorter icu stays, less organ failure, or shorter hospital stays; longer mechanical ventilation times and cardiac events were more frequent in the liberal strategy group.
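the extraction-ratio arithmetic above can be sketched numerically. the oxygen-content relation used here (1.34 ml o2 per gram of hemoglobin per unit saturation, plus 0.003 ml/dl per mm hg of dissolved oxygen) is the standard textbook formula rather than one given in this chapter, and the function names and example values are invented for illustration.

```python
def o2_content(hb_g_dl, sat_fraction, po2_mm_hg):
    # standard textbook relation (assumed, not from this chapter):
    # content (ml o2/dl) = 1.34 * hb * saturation + 0.003 * po2
    return 1.34 * hb_g_dl * sat_fraction + 0.003 * po2_mm_hg


def extraction_ratio(cao2, cvo2):
    # er = (arterial content - mixed venous content) / arterial content;
    # baseline is about 25%, and an er rising above ~50% has been
    # proposed as an indicator of the need for red cell transfusion
    return (cao2 - cvo2) / cao2


cao2 = o2_content(8.0, 0.98, 95)   # arterial sample (illustrative values)
cvo2 = o2_content(8.0, 0.55, 35)   # mixed venous sample from the pa catheter
er = extraction_ratio(cao2, cvo2)
print(f"er = {er:.0%}")
```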
a later subgroup analysis of patients with cardiovascular disease, though small enough to have statistical doubt, suggested that a more liberal transfusion strategy was probably appropriate for patients with severe ischemic coronary disease. 23 this observation has some support in experimental studies of the effects of anemia in laboratory animals with coronary occlusion. 24 the canadian study has highlighted the many and complex issues involved in transfusion decision making in the icu. since publication of the canadian study, several large reports have examined the use of red cell transfusions in critical care units. vincent and colleagues 25 surveyed european icus and found that the transfusion rate in 3534 patients was 37% during the icu stay and 12.7% after the stay. the mean pretransfusion hemoglobin level was 8.4 g/dl. corwin and colleagues 26 studied 284 icus in the united states a year later and found great similarity: nearly 50% of patients received transfusions, and the mean threshold hemoglobin level was 8.6 g/dl. a single large scottish teaching hospital reported a more parsimonious practice: the rate of transfusion was still 52% in its icu patients, but the total volume of blood used was slightly smaller and the mean pretransfusion hemoglobin level was only 7.8 g/dl. 27 all these authors have concluded that icu practice has not fully embraced the guidelines of the canadian clinical trial. in contrast, 18 hospitals in australia and new zealand have reported on transfusion in 1808 consecutive icu admissions, and although the authors found a median pretransfusion hemoglobin concentration of 8.2 g/dl, the rate of transfusion was lower, at only 19.7% of patients, 60% of whom were bleeding. 28 the "inappropriate" transfusion rate was 3%. the authors speculate that the practitioners may have been influenced by publication of the canadian study and their own regional survey of transfusion practices.
nonetheless, they agree that full implementation of the canadian guidelines in their clinical setting might be controversial. the literature on rbc transfusion in the setting of surgery, particularly surgery with the use of blood products, is growing. a mounting body of data illustrates the human tolerance of a low hct during and after surgery. a recent randomized trial of rbc transfusion strategy in orthopedic surgery demonstrated no significant differences in outcome between a restrictive (8 g/dl) and a liberal (10 g/dl) transfusion threshold and included monitoring for silent myocardial ischemia preoperatively and postoperatively. 29 provided that adequate perfusion of the microcirculation is maintained, purposeful maintenance of a low hct during surgery, a technique called normovolemic hemodilution, 30 can be a powerful tool in minimizing blood loss and the attendant need for red cell transfusion. table 80-1 summarizes guidelines proposed by the national institutes of health, 31 the american society of anesthesiologists, 32 and the american college of physicians 33 relative to the transfusion of rbcs. these guidelines have been provided with the intent of establishing parameters, not with the intent of substituting for the individual clinician's judgment. the art of medical decision making in transfusion, as in other areas of medicine, lies in determination of the appropriate treatment for the individual patient. a platelet concentrate (random-donor platelets) is obtained by centrifugation from a unit of donated wb. each unit contains a minimum of 5.5 × 10^10 platelets suspended in about 50 ml of plasma. platelets are stored at room temperature to avoid loss of function from refrigeration and are constantly agitated to maximize gas exchange. the length of storage varies with the container used, but most systems permit 5-day storage.
because of this limited storage time and the increasing demand for this component, platelets are often subject to supply shortages. some loss of viability and platelet numbers occurs during storage, but 5-day-old platelets still effect hemostasis. once the bags are entered for pooling before transfusion, the platelets must be administered within 4 hours. each unit of platelets is expected to increase the platelet count by 10 × 10^9/l in a typical 70-kg adult. the usual dose is 6 units, or 1 u/10 kg of body weight. a 1-hour post-transfusion platelet count should be obtained to determine the adequacy of response. the corrected count increment (cci), which relates platelet number and body size to the post-transfusion increment, can be used to assess the effectiveness of the transfusion: cci = (post-transfusion count − pretransfusion count) × body surface area (m^2) ÷ platelets transfused (× 10^11). abo-compatible platelets are desirable but not essential. when abo-mismatched platelets are given, removal of some of the incompatible plasma can be carried out at the time of pooling for transfusion. likewise, volume reduction may be necessary for patients at risk for fluid overload from the 300 to 500 ml of plasma present in 6 to 10 units of platelets. nonetheless, the remaining plasma is a good source of stable coagulation factors and contains diminished but still potentially beneficial amounts of factors v and viii. there is no contraindication to the use of rh-positive platelets in rh-negative patients; if given to women with future childbearing potential, rh immune globulin (rhig) may be used prophylactically against the small risk of rh alloimmunization from red cells that may be contained in the platelet concentrate. plateletpheresis (common terms: single-donor platelets, apheresis platelets) involves separating and removing platelets from one donor by cytapheresis during a 1½- to 2-hour procedure on an automated device and then retransfusing the remainder of the blood back into the donor. each collection contains an equivalent of 6 to 10 units of platelet concentrates.
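the corrected count increment (cci), which the chapter later uses to gauge refractoriness, normalizes the 1-hour increment for body surface area and for the number of platelets transfused. a minimal sketch follows; the function name and the example values (body surface area, ~0.55 × 10^11 platelets per random-donor unit) are assumptions for illustration.

```python
def cci(pre_count_per_ul, post_count_per_ul, bsa_m2, platelets_transfused_e11):
    """Corrected count increment:
    (post - pre, platelets/ul) * body surface area (m^2)
    / platelets transfused (in units of 10^11)."""
    return (post_count_per_ul - pre_count_per_ul) * bsa_m2 / platelets_transfused_e11


# illustrative case: count rises from 5,000 to 30,000/ul after 6 random-donor
# units (assumed ~0.55 x 10^11 platelets each) in a patient with bsa 1.8 m^2
result = cci(5_000, 30_000, bsa_m2=1.8, platelets_transfused_e11=6 * 0.55)
print(round(result))
```

a falling cci across successive transfusions is the signal of developing refractoriness described later in the text.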
single-donor platelets are suspended in about 300 ml of plasma, so the same abo and volume considerations discussed earlier pertain. single-donor platelets offer the clear benefit of reducing the risk of multiple-donor exposure to the recipient. single-donor platelets may also be the only available alternative for recipients who have been alloimmunized by previous platelet transfusions because they may be human leukocyte antigen (hla) or platelet antigen matched to the recipient. the use of apheresis platelets now exceeds the use of pooled random-donor platelets; however, use of this product in emergency situations is limited by the availability of volunteer donors. 35 platelet transfusions are indicated for patients bleeding because of thrombocytopenia or functional platelet defects. 36 guidelines for transfusion continue to evolve, and the current guidelines merely provide a desirable range for platelet counts, assuming normal platelet function (table 80-2). there is ample evidence that bleeding medical or surgical patients with platelet counts of 50 × 10^9/l or above will not benefit from transfusion if thrombocytopenia is the only abnormality. for critical invasive procedures in which even a small amount of bleeding could lead to loss of vital organ function or death, maintaining the platelet count at 50 × 10^9/l or greater is typically preferred. the presence of other factors that diminish platelet function, such as certain drugs, foreign intravascular devices (e.g., intra-aortic balloon pump or membrane oxygenator), infection, or uremia, may alter this requirement upward. patients at risk for small but strategically important hemorrhage, such as neurosurgical patients, may need to be maintained at counts of 80 to 100 × 10^9/l. patients without hemorrhage who have platelet counts of 5 × 10^9/l or lower appear to be at increased risk for significant hemorrhage.
indications for transfusion to patients with counts above 10 × 10^9/l are less well established; thus, the majority of guidelines propose prophylactic platelet transfusion to prevent hemorrhage at a threshold of 10 × 10^9/l. the bleeding time is not a useful procedure in this situation because it is usually prolonged at counts below 80 × 10^9/l, may be insufficiently reproducible, and correlates poorly with the risk for bleeding. 37 patients undergoing cardiac bypass surgery experience a drop in platelet count and often acquire a transient platelet functional defect from damage associated with the bypass apparatus. 38 most patients do not experience platelet-associated bleeding, however, so prophylactic transfusion in the absence of bleeding is not warranted. in a patient who continues to bleed postoperatively, more likely causes are a localized, surgically correctable lesion or failure to reverse heparinization. if these conditions are excluded, empiric transfusion of platelets may be justified. patients thrombocytopenic by virtue of immunologic destructive processes such as idiopathic thrombocytopenic purpura (itp) receive little benefit from platelet transfusions because the transfused platelets are rapidly removed from the circulation. in the event of life-threatening hemorrhage or an extensive surgical procedure, transfusion may prove beneficial for its short-term effect. transfusion may be accomplished effectively by pretreatment with high-dose immunoglobulin or high-dose anti-d antiserum (rhig). 39, 40 platelet transfusion has been reported to be deleterious in thrombotic thrombocytopenic purpura (ttp), 41 in the related hemolytic-uremic syndrome, and in heparin-induced thrombocytopenia. cautious administration, in cases of life-threatening thrombocytopenic bleeding only, is prudent. prophylactic platelet transfusion for thrombocytopenia secondary to underproduction remains controversial.
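the count thresholds discussed above can be summarized as a simple lookup. this is a teaching sketch only: the function and context names are invented, the +10 adjustment for platelet dysfunction is an arbitrary illustrative increment (the chapter says only that such factors "alter this requirement upward"), and nothing here substitutes for clinical judgment.

```python
def platelet_target_e9_per_l(context, platelet_dysfunction=False):
    """Illustrative mapping of the guideline ranges above (x10^9/l).

    Assumes normal platelet function unless flagged; names and the
    dysfunction increment are invented for this sketch.
    """
    targets = {
        "prophylaxis_stable": 10,    # prevent spontaneous hemorrhage
        "critical_procedure": 50,    # bleeding, or invasive procedure
        "neurosurgical_risk": 100,   # small bleeds strategically important
    }
    target = targets[context]
    if platelet_dysfunction and context != "neurosurgical_risk":
        # drugs, intravascular devices, infection, or uremia shift the
        # requirement upward; the size of the shift here is illustrative
        target += 10
    return target


print(platelet_target_e9_per_l("critical_procedure", platelet_dysfunction=True))
```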
the common practice of transfusion to maintain the platelet count above 20 × 10^9/l derives from data published in 1962, which demonstrated an increase in spontaneous bleeding in leukemic patients at that level. 42 however, critical evaluation of the data reveals that serious hemorrhage was not greatly increased until counts fell to 5 × 10^9/l or lower and that these patients received aspirin for fever, which might have compromised platelet function and enhanced the bleeding. a somewhat more recent study quantitating stool blood loss in aplastic anemia patients defined a bleeding threshold at platelet counts of 5 to 10 × 10^9/l. 43 a prospective study of a more conservative transfusion protocol found that major bleeding episodes occurred on 1.9% of days with counts of less than 10 × 10^9/l and on only 0.07% of days with counts of 10 to 20 × 10^9/l. 44 the trigger for prophylactic platelet transfusion in the 5 to 10 × 10^9/l range, however, applies primarily to stable thrombocytopenic patients. factors such as fever, use of anticoagulant or antiplatelet drugs, and invasive procedures must be considered when generating a treatment plan for individual patients. patients experiencing rapid drops in platelet count may be at greater risk than those at steady state and thus may benefit from transfusion at higher counts. benefits to the patient with more judicious use of platelet transfusion include decreased donor exposure, which lessens the risk of transfusion-transmitted disease; fewer febrile and allergic reactions that may complicate the hospital course; and the potential delay or prevention of alloimmunization to hla and platelet antigens. 45 the development of refractoriness to platelet transfusions is a serious event heralded by a falling cci.
poor response to platelet transfusions can be seen in patients with other reasons for platelet consumption, including splenomegaly, fever, trauma and crush injury, burns, disseminated intravascular coagulation (dic), concomitant drugs, or transfusion of platelets of substandard quality. 46 these factors should be sought and corrected if possible. alloimmunization is characterized by the development of anti-hla or platelet-specific antibodies, with resultant immune platelet destruction. as many as 70% of patients receiving multiple red cell or platelet transfusions become immunized. 45 leukocyte depletion of transfused components can prevent or delay this phenomenon, but it is important to use leukoreduced components early in the course of transfusion therapy. 45, 47 when patients fail to achieve expected increments after platelet transfusion, provision of abo-specific platelet concentrates that are less than 48 hours old may improve the response. if no improvement is seen and the aforementioned medical conditions are excluded, the patient should be screened for hla antibodies or be hla typed and provided with hla-compatible single-donor platelets. alternatively, platelet crossmatching with the patient's serum can be carried out. there is no advantage to unmatched single-donor platelets in this situation. standard ffp is prepared by centrifugation of wb and is frozen within 8 hours of blood donation. 1,2 ffp may be stored frozen for 1 year. the usual volume is about 250 ml, depending on the donor's hct. the most common method of thawing before transfusion is soaking in a 37° c water bath, which requires about 30 to 45 minutes. once thawed, ffp can be stored refrigerated for a maximum of 24 hours. when prepared and stored in this manner, ffp supplies all the constituents in the amounts normally present in circulating plasma, including stable and labile coagulation factors, complement, albumin, and globulins.
by convention, the coagulation factors are present in concentrations of 1 u/ml. crossmatching to the recipient is not performed, but ffp must be abo compatible. standard ffp is as likely to transmit hepatitis, hiv, and most other transfusion-related infections as cellular components are. new ffp products have recently been introduced in response to concern about the transmission of infectious diseases. one such product is solvent-detergent-treated ffp. 48 solvent-detergent treatment is a means of viral inactivation that removes the infectivity of lipid-enveloped viruses, such as hepatitis b and c and hiv. because the product is derived from pooled plasma, with as many as 2500 donors in each lot, it has the potential to actually increase recipient exposure to pathogens not inactivated by the solvent-detergent method, such as hepatitis a and parvovirus b19, and be more vulnerable to any newly emerging non-lipid-enveloped agent. a variety of other techniques for reducing pathogen exposure in ffp have been developed, including exposure to low ph or vapor heating and treatment with ultraviolet irradiation, gamma irradiation, or psoralens and light to inactivate pathogens by inducing dna damage. 49 because none of the ffp products is entirely free from the risk of disease transmission or other adverse effects and because infection-reducing modifications add significantly to the cost of the components, ffp should be used judiciously. 50 it should be administered only to provide coagulation factors or plasma proteins that cannot be obtained from safer sources. ffp is commonly used to treat bleeding patients with acquired deficiency of multiple coagulation factors, as in liver disease, dic, or dilutional coagulopathy, or to treat patients with congenital deficiency of a coagulation factor or other protein for which concentrates or safer sources do not exist.
ffp may be indicated for emergency reversal of the coagulopathy induced by warfarin anticoagulants when more concentrated products are not available or for the provision of protein c or s in patients who are deficient and suffering acute thrombosis. ffp should be administered as boluses as rapidly as feasible so that the resulting factor levels allow hemostasis. the use of ffp infusions without adequate bolus administration is not helpful. ffp should not be used for volume expansion or wound healing or as a nutritional source of protein. ffp does not reverse anticoagulation induced by heparin and in theory might exacerbate bleeding by supplying more antithrombin, heparin's cofactor. prophylactic administration of ffp does not improve patient outcome in the setting of massive transfusion or cardiac surgery unless there is bleeding with an associated documented coagulation abnormality. 51, 52 patients do not usually bleed as a result of coagulation factor insufficiency when the international normalized ratio (inr) is less than about 2.0, and even then the results are not always predictable. 53 the partial thromboplastin time (ptt) is not useful in predicting procedural bleeding risk. 54 ffp is often requested prophylactically before an invasive procedure when the patient exhibits mild prolongation in coagulation studies. most of these procedures may be carried out safely without transfusing ffp. 53, 55 ffp is probably the most misused blood component, as illustrated by retrospective surveys. 56 coagulation factors are normally present in the blood far in excess of the minimum levels required for hemostasis. as little as 10% of the normal plasma concentration of several factors will effect hemostasis. conversely, ffp treatment of acquired multiple deficiencies, as in hepatic failure, is often ineffective because many patients cannot tolerate the infusion volumes required to achieve hemostatic levels of coagulation factors, even transiently.
57 the plasma half-life of transfused factor vii is only 2 to 6 hours. it may be impossible to administer sufficient ffp every few hours without encountering intravascular volume overload. finally, in some instances, transfusion of seemingly adequate volumes may still fail to correct the coagulopathy. 58 careful documentation of both the need for ffp and the adequacy and outcomes of therapy is essential. 59 cryoprecipitate is manufactured by thawing and centrifuging ffp below 6° c and resuspending the precipitated proteins in about 15 ml of supernatant plasma. 1,2 each bag is a concentrated source of factor viii (80 to 120 units), von willebrand factor (vwf) (50% of original plasma content), fibrinogen (250 mg), factor xiii (30% of original plasma content), and fibronectin. cryoprecipitate offers the advantage of transfusing more specific protein and less total volume than the equivalent dose of ffp does. it has been used to treat patients with inherited coagulopathies, such as hemophilia a, von willebrand disease, or factor xiii deficiency. in the critical care setting, it is more commonly used to replenish fibrinogen, especially in bleeding patients with hypofibrinogenemia caused by dilutional or consumptive coagulopathy. cryoprecipitate also reportedly improves hemostasis in uremic patients, presumably by reversing the functional platelet defect, 60 but desmopressin acetate (ddavp) 61 or conjugated estrogens exert similar effects and should be used preferentially to avoid potential transfusion-transmitted disease. the usual dose of cryoprecipitate to treat hypofibrinogenemia is 10 bags/units to start, then 6 to 10 bags/units every 8 hours or as necessary to keep the fibrinogen level above 100 mg/dl. each bag/unit of cryoprecipitate carries a risk of disease transmission equivalent to that of 1 unit of blood.
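the fibrinogen dosing above can be sketched numerically. a minimal illustration of the arithmetic, not clinical guidance: it assumes the chapter's figure of roughly 250 mg of fibrinogen per bag, while the 40 ml/kg plasma-volume estimate and the function name are hypothetical additions.

```python
import math

# illustrative sketch of the dosing arithmetic only -- not clinical guidance.
# assumes ~250 mg fibrinogen per bag (from the text); the 40 ml/kg
# plasma-volume estimate and the function name are hypothetical.

def cryo_bags_for_target(weight_kg, current_mg_dl, target_mg_dl=100,
                         mg_per_bag=250.0, plasma_ml_per_kg=40.0):
    """estimate bags of cryoprecipitate to raise fibrinogen to target."""
    if current_mg_dl >= target_mg_dl:
        return 0
    plasma_dl = weight_kg * plasma_ml_per_kg / 100.0  # ml converted to dl
    rise_per_bag = mg_per_bag / plasma_dl             # mg/dl increment per bag
    return math.ceil((target_mg_dl - current_mg_dl) / rise_per_bag)

# a 70-kg patient with fibrinogen 60 mg/dl: ~28 dl plasma,
# ~8.9 mg/dl rise per bag, so 5 bags to reach 100 mg/dl
print(cryo_bags_for_target(70, 60))
```

note that the empiric starting dose of 10 bags in the text is deliberately larger than this idealized increment, since recovery is incomplete in bleeding or consumptive states.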
for this reason, commercial factor viii concentrates, recombinant or treated to inactivate viruses, are preferred over cryoprecipitate for treating hemophilia a patients. immune serum globulin (ig), rhig, and hyperimmune globulins for diseases such as hepatitis b and varicella zoster are obtained by fractionation of pooled plasma, followed by chromatography, delipidation, and other steps to remove aggregates and infectious agents. intravenous ig (ivig) is available in solution or lyophilized form, with protein content varying by mode of preparation. the available products vary slightly in the amounts of iga and igm contained in them, which are mostly present in only trace quantities. ig preparations can be used to provide passive antibody prophylaxis or to supply ig in certain immunodeficiency states. hyperimmune globulins may be used to treat active infections in immunosuppressed hosts. recent applications have exploited ig's immunomodulatory effects in treating a wide variety of disorders with an immune basis. the specific mechanism of action of ivig in such conditions has not yet been identified, but possibilities include interference with macrophage fc receptor function, neutralization of anti-idiotypic antibodies, and interference with the incorporation of activated complement fragments into immune complexes. a recent review more completely discusses the effects of ivig on the immune system and its potential uses. 62 rhig is prepared from pools of plasma obtained from donors sensitized to the red cell antigen d from the rh group. the standard-dose vial contains primarily igg anti-d, with a protein content of 300 µg in 1 ml. this dose will protect against 15 ml of d+ red cells or 30 ml of wb. 63 rhig carries no risk of virus transmission. although rhig is used primarily in obstetrics, it may also be indicated to prevent alloimmunization in rh-negative patients receiving small amounts of rh-positive red cells, as in platelet concentrates.
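the coverage rule above (one 300-µg vial per 15 ml of d-positive red cells, or 30 ml of wb) reduces to simple arithmetic. a hedged sketch; the function name and the round-up convention are illustrative assumptions, not a dosing protocol.

```python
import math

# sketch of the coverage rule stated above: one 300-ug vial of rhig covers
# about 15 ml of d-positive red cells, or 30 ml of whole blood. illustration
# only; the function name and round-up convention are assumptions.

def rhig_vials(exposure_ml, red_cells=True):
    """number of standard 300-ug rhig vials for a d-positive exposure."""
    ml_covered_per_vial = 15.0 if red_cells else 30.0
    return math.ceil(exposure_ml / ml_covered_per_vial)

print(rhig_vials(45))                   # 45 ml d+ red cells -> 3 vials
print(rhig_vials(45, red_cells=False))  # 45 ml whole blood  -> 2 vials
```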
routine prophylaxis against large volumes of red cells, as when a unit of rh-positive wb or prbcs is given by accident to an rh-negative recipient, is not reliable and requires administration of large amounts of rhig, but instances of its effective use in these circumstances have been reported. higher doses of intravenous rhig have been used in the treatment of itp. plasma-derived colloids include human serum albumin (hsa), available in 5% and 25% solutions, and plasma protein fraction (ppf), available in a 5% solution. both are derived from pooled donor plasma but are essentially pathogen-free. hsa is composed of at least 96% albumin, whereas ppf is subjected to fewer purification steps and contains at least 83% albumin, with correspondingly more globulins. the 5% solutions are iso-oncotic, whereas the 25% solution of hsa is hyperoncotic and requires infusion with crystalloid solutions. potential clinical indications for colloid solutions include hypovolemic shock, hypotension associated with hypoproteinemia in patients with liver failure or protein-losing conditions, use as a replacement solution in plasma exchange or exchange transfusion, and facilitation of diuresis in fluid-overloaded hypoproteinemic patients. albumin solutions are not indicated as a nutritional source to raise serum albumin. their use in some indications, particularly for resuscitation, has become controversial, and pulmonary edema has been reported in association with their infusion. 64 although albumin solutions are reasonably safe products to administer, expense and limited availability restrict their use. anaphylactic reactions have been reported in less than 0.1% of recipients. the use of ppf has been associated with severe hypotensive episodes, with hageman factor fragments or prekallikrein activator being demonstrated, 65 thus making ppf a less desirable resuscitation fluid and contraindicated in cardiac surgery.
granulocyte concentrates for transfusion are obtained from a single donor by cytapheresis methods, which generally involve the administration of hydroxyethyl starch and corticosteroids to the donor to improve granulocyte yield. granulocyte colony-stimulating factor (g-csf) has been added to some collection regimens and increases both cell counts and granulocyte survival substantially. each collection should contain at least 10^10 granulocytes 1,2 and is suspended in approximately 200 ml of plasma. a significant number of red cells are present, so crossmatching for the recipient is required. because of the potential risk for graft-versus-host disease (gvhd), granulocytes are usually collected from hla-matched donors. granulocytes are stored at room temperature and must be transfused within 24 hours of collection, although sooner is better because of rapid deterioration of the cells. patients who may benefit from granulocyte transfusions include those who are neutropenic (absolute neutrophil count of less than 0.5 × 10^9/l) and those who are unresponsive to appropriate antibiotic treatment but in whom bone marrow recovery is expected to occur. a course of therapy generally involves daily infusion for 4 to 7 days. granulocytes have been used for progressive fungal infections in immunosuppressed granulocytopenic patients, in patients with defective leukocytes (e.g., chronic granulomatous disease), and in the neonatal icu for neonatal sepsis. randomized trials had suggested that granulocyte transfusions under these circumstances can reduce mortality, but such trials have not been conducted for more than 2 decades. 66 effective antibiotic regimens and the significant adverse effects associated with the use of granulocyte concentrates, including pulmonary insufficiency related to alloimmunization and cmv infection, have limited their use in recent years.
the decision to transfuse blood components, like any therapeutic maneuver, must be made with full awareness of the potential risk to the recipient, as well as the expected benefits. public expectations of a zero-risk blood supply help raise the acuity of physicians' decisions. for some patients, the benefit from transfusion is so obvious that the associated risks pale in comparison to the consequences of withholding transfusion. however, the clinician's knowledge of the incidence and management of adverse reactions to transfusion is vital, not only to ensure the best patient care but also to provide appropriate patient education and true informed consent. almost every patient who receives an allogeneic blood transfusion will experience some adverse reaction if such universal effects as immunomodulation and bone marrow suppression are considered. measurable reactions to transfusion occur in about 20% of patients; more serious adverse responses may be expected in only 1% to 2% of transfusions. 67 the nature of these adverse reactions ranges from those that are common but clinically unimportant to those that may cause significant morbidity or death (table 80-3). transfusion in the icu is a common and often lightly regarded event. however, because the signs and symptoms of severe, life-threatening reactions are frequently indistinguishable from those of troublesome but less significant reactions, every transfused patient who experiences a significant change in condition, such as an elevation in temperature, change in pulse or blood pressure, dyspnea, or pain, must be promptly and fully evaluated to identify the cause of the reaction and to institute treatment when necessary.
the basic approach to all acute reactions should be to maintain a high index of suspicion for acute hemolytic reactions by stopping the transfusion immediately, maintaining venous access with intravenous fluids, and informing the blood bank laboratory immediately so that the appropriate transfusion reaction protocol can be instituted and post-transfusion specimens obtained. early recognition of severe transfusion reactions may be lifesaving. the most feared reaction to blood transfusion is intravascular hemolysis, caused by the recipient's complement-fixing antibodies attaching to donor rbcs with resultant rbc lysis. abo incompatibility is most often implicated in these incidents. intravascular hemolysis is still the single most common acute cause of fatalities associated with the transfusion episode. 68 in addition to hemolysis, complement activation stimulates the release of inflammatory mediators and cytokines and thereby leads to hypotension and vascular collapse. activation of the coagulation system may result in dic. acute renal failure may also occur, presumably on the basis of immune complex interactions. morbidity and mortality are directly related to the quantity of incompatible blood transfused, which is why prompt recognition and cessation of transfusion cannot be overemphasized. misidentification of the patient, or "clerical error," at any time from specimen acquisition through release of the unit and initiation of infusion is the major cause of acute intravascular hemolysis. 69, 70 this reaction is more likely to occur in critical care settings, such as the icu, operating room, and emergency department, than anywhere else in the hospital. it is far preferable to transfuse uncrossmatched group o red cells than to chance abo incompatibility caused by improper patient and specimen identification procedures. the most common clinical sign of hemolysis is fever, with or without chills.
71 other common signs and symptoms include back or flank pain, anxiety, nausea, lightheadedness, dyspnea, and hemodynamic instability. in a comatose or anesthetized patient, many of these symptoms will not be evident; therefore, signs such as hypotension, hemoglobinuria, and diffuse oozing from puncture sites or incisions may be the only notable features. immediate management of hemolytic transfusion reactions must include cessation of the transfusion; the remainder of care is supportive. rapid verification of patient and unit identification must be made, not only to confirm the suspected reaction but also to prevent a second patient from receiving a reciprocally incompatible unit if a clerical error has been made. desired end points of supportive care include maintenance of blood pressure and high urine output and management of coagulopathy and further blood loss. steroids, heparin, and other specific pharmacologic interventions have no role in treatment. anaphylactic reactions to blood transfusions are fortunately rare but may be life-threatening. the usual cause is recipient antibody to a component of plasma that the patient lacks, most commonly antibody to iga in iga-deficient individuals. 72 signs and symptoms include severe malaise and anxiety, flushing, dizziness, dyspnea, bronchospasm, abdominal pain, vomiting, diarrhea, hypotension, and eventually shock. fever and hemolysis do not occur. management includes immediate cessation of transfusion and standard therapy for anaphylaxis. if anti-iga antibodies are determined to be the cause of this reaction, the patient must receive blood components donated by iga-deficient individuals or, if unavailable, specially prepared washed rbcs and platelet concentrates. plasma-derived preparations, such as albumin, and ig contain varying amounts of iga and pose a substantial risk in these patients. febrile nonhemolytic reactions (fnhrs) are the most commonly occurring immediate transfusion reaction.
these reactions are annoying to the clinician, patient, and transfusion service alike in that they can cause significant discomfort and, because they share certain manifestations with acute hemolytic reactions, must be investigated in every instance. fnhrs occur in approximately 0.5% to 1.0% of transfusion episodes. 73 the etiologic factors are probably complex and multiple, but many reactions are caused by the release of cytokines and pyrogens, either within the transfused unit of blood or as a result of recipient antibodies to donor leukocytes. clinical signs include fever, with or without chills, usually beginning 1 to 2 hours after the start of the transfusion but occasionally delayed up to 4 to 6 hours. multiparous women and patients who are multiply transfused are particularly prone to fnhrs. the transfusion must be stopped and the appropriate transfusion reaction evaluation instituted. antipyretics such as acetaminophen may be administered. though commonly used, antihistamines such as diphenhydramine are neither preventive nor therapeutic. once acute hemolysis is excluded, transfusion of a new unit may be instituted. most patients will not experience a second such reaction. 73 if repeated reactions become problematic, leukocyte-depleted blood components may be supplied. implementation of universal leukocyte reduction (ulr) reduces the overall frequency of post-transfusion fevers by only about 12%. 74 hives and pruritus are relatively common adverse effects of transfusion. 68 they are a hypersensitivity reaction localized to the skin; their cause is unknown but may involve both donor and recipient characteristics. these reactions consist of localized or generalized urticaria beginning shortly after the start of transfusion without other signs or symptoms of anaphylaxis or hemolysis. the transfusion should be temporarily stopped, and antihistamines may be administered. if the hives resolve in a short time, the same unit of blood may be cautiously restarted.
if repeated urticarial reactions occur, premedication with antihistamines may be effective, or blood components washed to remove plasma may be required. intravascular volume overexpansion is particularly likely to occur in critical care patients with limited cardiac reserve. aside from the inherent volume of the blood components, the intravenous normal saline concurrently administered adds to the volume load. unfortunately, normal saline solution is the only intravenous fluid that may be administered with blood components. with careful attention to transfusion requirements and the use of volume reduction maneuvers available to the transfusion service, volume overload can be minimized in most instances. the frequency of this complication of transfusion is not reported. delayed hemolysis is an uncommon but probably underrecognized reaction to transfusion that results from the stimulation of a primary or secondary (anamnestic) recipient antibody response to foreign rbc antigens. these antibodies are undetected at the time of transfusion but increase after transfusion in a manner analogous to the vaccination "booster" effect. these reactions typically occur 3 to 14 days after transfusion but are unrecognized because of the lack of a clear temporal association with transfusion. fever, chills, and an unexplained decline in hct are the usual signs. 75 transient elevation in bilirubin and lactate dehydrogenase may also occur. the diagnosis is established by a positive direct antiglobulin (coombs) test resulting from recipient antibody coating donor rbcs. the antibody may be identified by eluting it from the rbcs or by demonstrating it within the recipient's serum. the specificity of the antibody is often against such rbc antigens as the rh family, kidd, duffy, or kell systems. hemolysis may not occur, but if it does, it is likely to be extravascular and only rarely causes renal failure or dic. prevention of these reactions is difficult.
alloimmunization to foreign rbc antigens occurs in approximately 1% of transfusions. 67 detection of delayed antibodies is the purpose for requiring a new blood bank specimen every 72 hours if the patient has recently been transfused. permanent transfusion records should record the occurrence of delayed antibodies, even though they may not be apparent at a later crossmatch. access to transfusion databases is critical for the care of patients with a past history of transfusion. transfusion-related acute lung injury (trali) is an uncommon (0.02%) 76 but serious adverse effect of transfusion that has only recently been gaining recognition. similar reactions have been called pulmonary leukoagglutinin reaction or noncardiogenic pulmonary edema. these reactions consist of acute respiratory distress syndrome (ards), which develops 1 to 6 hours after transfusion. signs and symptoms include bilateral pulmonary infiltrates, hypoxemia, fever, and occasionally hypotension. monitored patients are found to have normal or low pulmonary wedge pressure and central venous pressure, as contrasted with patients experiencing volume overload. if adequate respiratory support and oxygenation are established promptly, spontaneous resolution generally occurs within 1 to 4 days. deaths have nonetheless occurred, particularly with a delay in diagnosis. 77, 78 episodes of trali appear to have several possible causative mechanisms. some cases may be caused by donor antibodies reacting with recipient neutrophil or hla antigens. 79 plasma factors related to blood storage have also been implicated, such as lipid substances from deterioration of donor cell membranes that prime recipient neutrophils, which then damage the pulmonary vasculature and lead to increased capillary permeability and an ards-like syndrome. 80 other clinical factors may contribute to increased risk, such as cardiac bypass surgery or other procedures.
in the antibody model at least, the implicated antibody is unique to the donor, and the afflicted recipient will probably not experience another such reaction, provided that the recipient is not exposed to the same donor. trali is undoubtedly under-recognized in the critical care setting and may frequently be confused with fluid overload or cardiogenic pulmonary edema. transfusion-associated gvhd (ta-gvhd) is a well-documented, but probably under-recognized, highly lethal immunologic complication of blood transfusion. 81 immunocompromised patients infused with blood components containing viable donor lymphocytes are at risk for engraftment of the allogeneic lymphocytes and ensuing rejection of recipient (host) tissues. transfusion recipients who are at highest risk include neonates, especially the very premature, bone marrow and organ transplant recipients, and leukemia and lymphoma patients. ta-gvhd has also been reported in patients after cardiac surgery who received designated donor blood from relatives; presumably, the hla antigenic differences between donor and recipient were insufficient to stimulate a recipient immune response but sufficient to elicit a donor immune response. 82 the onset of ta-gvhd is usually within 8 to 30 days after transfusion, and it is manifested as fever and rash, followed by diarrhea and evidence of liver and bone marrow injury. ta-gvhd differs from that seen in bone marrow transplantation (bmt) by its involvement of the marrow and by far greater mortality. treatment is largely ineffective, and mortality exceeds 90%. irradiation of blood components at 25 gy prevents ta-gvhd by eliminating the donor lymphocyte mitogenic response. all cellular blood components should be irradiated before transfusion to high-risk patients. the functions of the cellular components of blood are unaffected, although damage to rbc membranes limits postirradiation storage of prbcs.
83 blood donated by a relative for any patient should be irradiated, as should hla-matched or crossmatched platelet products. allogeneic blood transfusion has been shown to modulate and suppress the recipient's immune response, an effect first noted with kidney transplantation. 84 immunosuppression in a critical care setting is generally undesirable, but whether transfusion has a significant impact is debated. ongoing clinical issues center around two areas of controversy: the putative association between blood transfusion and increased numbers of postoperative infections, and increased and more rapid rates of tumor recurrence in surgical oncology patients with certain malignancies. there has been no resolution of either issue despite a few prospective trials having been performed. the largest prospective trial of colorectal cancer resection, for example, is negative, 85 but a meta-analysis of the extant data suggests that an adverse effect on recurrence does exist. 86 similarly, most of the randomized trials of postoperative or critical care unit infections are too small to indicate an effect of transfusion, but all point in the direction of an adverse effect. 87, 88 controversy will continue until larger randomized trials are conducted. the precise mechanism of the immunosuppression induced by allogeneic transfusion has not yet been delineated, and several mechanisms may be involved. 89 alterations identified in laboratory and clinical transfusion recipients have included depression of the t-helper/t-suppressor lymphocyte ratio, decreased natural killer cell activity, diminished interleukin-2 generation, formation of anti-idiotype antibodies, impairment of phagocytic cell function, and chronic persistence of donor lymphocytes (microchimerism), suggestive of low-level gvhd. difficulties in analysis of human data arise because patients requiring blood transfusions have conditions that themselves induce immune changes.
there is some evidence, bolstered by the results of two large clinical trials, to suggest that leukocyte reduction of blood components reduces or eliminates this immunosuppressive effect. 90 proponents of this viewpoint argue that for this reason, ulr would benefit most patients receiving blood transfusions and lead to fewer infections, tumor recurrences, and other related putative risks of transfusion, all potentially resulting in saved lives and cost. prospective trials will be extremely important. 91 public awareness of transfusion-associated acquired immunodeficiency syndrome (aids) has done more to revolutionize transfusion practice than any other transfusion risk by resulting in more conservative blood use, more stringent donor selection criteria, and improved screening tests. the result is that viral transmission rates are now difficult to measure, and the risk of transfusion-related infectious diseases is lower than ever. 92 the current best estimate is that 3 to 4 units per 10,000 will transmit some kind of infection 93 if agents such as cmv or epstein-barr virus are included. bacterial infection has become the most common infectious risk because increasingly sensitive donor screening tests, including nucleic acid testing (nat) to detect viral dna or rna, have shortened the infectious window period and reduced the risk for post-transfusion hepatitis (pth) and other viral infections. several fatalities are reported yearly from the transfusion of blood components contaminated with viable, proliferating bacteria, with or without the accumulation of endotoxin. 94 platelet concentrates, because they must be stored at room temperature, are particularly prone to bacterial growth, with a reported incidence of 6 in 10,000 transfusions. 95 organisms isolated from platelets and implicated in fatal transfusion reactions include staphylococcus and streptococcus species and gram-negative bacilli.
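the per-unit figure of 3 to 4 per 10,000 compounds over a multi-unit transfusion episode. a hedged sketch of the standard arithmetic, assuming independence between units (an assumption of this illustration, not a claim from the text): if each unit carries probability p of transmitting some infection, the chance that at least one of n units does is 1 - (1 - p)^n.

```python
# sketch of cumulative infectious risk across a transfusion episode.
# the per-unit risk figure comes from the text; independence of units
# is an assumption of this illustration.

def cumulative_risk(per_unit_risk, n_units):
    """probability that at least one of n independent units transmits."""
    return 1.0 - (1.0 - per_unit_risk) ** n_units

# 10 units at 3.5 per 10,000 each -> roughly 35 per 10,000 overall
print(cumulative_risk(3.5e-4, 10))
```

for small per-unit risks this is close to n × p, which is why multi-unit recipients face roughly proportionally higher exposure.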
fatalities resulting from bacterial contamination of refrigerated rbcs have occurred as well and more often involve cryophilic bacteria. rbc transfusions contaminated by yersinia enterocolitica have been consistently reported for a decade. 96 transfusion reactions caused by bacterial or endotoxin contamination are fortunately quite rare, but mortality exceeds 60%. signs and symptoms of reactions caused by microbial contamination overlap those of hemolytic transfusion reactions and consist primarily of fever and hypotension, along with other signs of endotoxic shock. if recognized promptly, a gram stain of the implicated unit can be prepared immediately and, if positive, appropriate antibiotic and supportive therapy instituted. autologous blood components may also be contaminated at the time of collection; therefore, reactions occurring in patients who are receiving their own blood should not be dismissed but instead should be evaluated as fully as though the patients had received allogeneic blood. the success of viral screening measures is most clearly illustrated by the fall in the risk for pth over the past 2 decades. although pth continues to be a significant cause of morbidity and mortality, the nature of pth has changed through the years with the stepwise institution of various donor screening measures. the elimination of paid donors in 1972 and the successive introduction of immunologic tests for hepatitis b have resulted in a steady reduction in the rates of pth caused by hepatitis b virus (hbv) to approximately 17 per million units of transfused blood products. although about 30% to 40% of hbv transmissions will result in acute hepatitis, chronic hbv infection develops in less than 10% of such patients. in contrast, the risk for chronic hepatitis c virus (hcv) infection after transfusion is higher, nearly 50%, and the long-term risk for cirrhosis- or hepatocellular carcinoma-related mortality is about 15% over more than 20 years after pth secondary to hcv.
97, 98 the clinical course of hepatitis a is generally milder, and the lack of a chronic carrier state means that with donor screening for symptoms of the acute illness, the risk of transmission is much lower, estimated at less than one in a million units. 99 the prevalence of hepatitis b surface antigenemia among first-time blood donors is 0.7%, and the prevalence of hepatitis c antibodies in donors is approximately 0.1% to 0.5%. at this time, given the sensitivity of current screening assays, including the latest generation of enzyme immunoassays (eias) and nat, the current risk of pth resulting from hcv is believed to be about 1 in 150,000 or less. 100 although hbv is still implicated in pth (attributable to the seronegative "window" period in newly infected donors), the risk of transfusion-associated hepatitis b is about 1 in 200,000 units. 100 retroviruses, rna-based viruses characterized by their reverse transcriptase and integration into the host genome, and lentiviruses, a subset of retroviruses, are ubiquitous in animals and were initially identified in humans in the early 1980s. those known to be capable of transmission by transfusion are hiv-1, hiv-2, and human t-cell leukemia/lymphoma virus (htlv) i and ii. transfusion-associated aids was initially reported in late 1982. 101 the first report of an associated viral agent did not appear until late in 1983, and in march 1985 the screening enzyme-linked immunosorbent assay (elisa) to detect antibody to hiv-1 was licensed and immediately incorporated into the blood-screening process. improved confidential donor screening appeared to decrease the risk of infectious units appearing in the donor pool. 102, 103 the discovery that heat treatment reduced transmission resulted in a reduction in transmission by plasma products, especially to persons with hemophilia. clinical aids developed in more than 90% of recipients of infected blood products, and the vast majority succumbed to the disease.
removal of donor units with seropositivity by elisa was insufficient to prevent transmission of hiv-1; several hundred cases were reported annually after introduction of the elisa test. subsequent development of an assay for the p24 antigen and then nat has lowered the risk of transfusion-associated hiv-1 infection to less than one in a million (see table 80-3). despite donor screening and sensitive assays, including eia, nat, and p24 antigen, an extremely small but finite risk of hiv-1 transmission by screened blood transfusions remains. this risk is largely due to the seronegative "window" period experienced by newly infected donors, which is estimated to be an average of 16 days. 100 a second retrovirus, hiv-2, first described in residents of countries in west africa and subsequently detected in migrants to western europe, causes an immunodeficiency syndrome similar to that caused by hiv-1. although very few cases of hiv-2 have been reported in the united states 104, 105 and there have been no reported transfusion-transmitted cases, experience with other retroviruses suggests that screening may prevent the majority of potential transmission. therefore, donated blood is now screened by an assay for the presence of antibody to hiv-2. the retrovirus htlv-i is the causative agent of adult t-cell leukemia (atl) and is strongly implicated in the chronic, progressive neurologic disorder termed tropical spastic paraparesis or htlv-i-associated myelopathy (tsp/ham). htlv-ii has been linked to hairy cell leukemia, but no transfusion-transmitted cases have been reported. the virus exhibits strong serologic cross-reactivity with htlv-i such that screening assays fail to distinguish between antibodies to either virus. transfusion-transmitted htlv-i has been demonstrated. 106 tsp/ham has developed in a small percentage of infected transfusion recipients, but no transfusion-associated cases of atl have been seen.
approximately 0.025% of donors in the united states are seropositive for htlv-i and htlv-ii 107 ; further testing reveals the majority of them to be htlv-ii. donated blood is currently screened for antibodies to htlv-i and htlv-ii. the estimated risk of htlv transmission by screened negative blood is believed to be 1 in 250,000 to 2 million. cmv is a human herpesvirus that establishes latent infection in the host's tissues, particularly leukocytes, and is transmitted by all cellular blood components. 108 seropositivity, or the presence of antibody, denotes previous exposure to the virus but does not confer protective immunity. secondary reinfection or reactivation of latent infection can occur. antibodies to cmv persist for life and serve as a marker indicating the potential for transmission of live virus. immunocompetent recipients of transfused cmv-positive blood experience minimal morbidity and mortality. the majority are asymptomatic, whereas a heterophile-negative mononucleosis syndrome may develop in a few. immunocompromised patients, however, may suffer life-threatening manifestations such as severe interstitial pneumonitis, gastroenteritis, hepatitis, or disseminated disease. several groups of patients are at particular risk (box 80-1), 109 and these patients should receive blood incapable of transmitting the virus. other patients may benefit from cmv-negative blood as well, such as seronegative solid organ transplant recipients or autologous bmt patients. screening of donated blood for cmv is not routinely done but can be performed quickly if necessary. because the prevalence of donor seropositivity is quite high in some regions (50% to 70%), cmv-seronegative blood may not be readily available.
blood that is leukocyte depleted ("cmv safe") may be as effective as seronegative blood in the prevention of cmv transmission, although a recent meta-analysis of clinical trials comparing the two methods suggests that cmv-negative blood products might have a slight advantage over leukocyte-depleted products. 110

[box 80-1. patients at particular risk from transfusion-transmitted cmv: seronegative pregnant women; seronegative premature infants weighing less than 1200 g; seronegative allogeneic or autologous bone marrow transplant recipients; seronegative transplant recipients of seronegative organs.]

many blood-borne parasites may be transmitted by transfusion, although this is a rare occurrence in the united states because of donor screening questions and the low endemicity of implicated agents. 111 changing immigration patterns and worldwide travel, however, make transfusion-transmitted parasites an increasing concern. on a worldwide basis, malaria is the most important transfusion-transmitted infective organism, although only about three cases occur in the united states each year. such infections are manifested by delayed fever, chills, diaphoresis, and hemolysis, often masked by underlying medical conditions. fatalities have occurred. babesiosis, a tick-borne disease, is endemic in regions of the united states, especially the northeast, with a seroprevalence of about 4%. transfusion-transmitted cases have been reported, with asplenic or immunocompromised patients being particularly susceptible. with increases in the number of latin american immigrants to the united states, american trypanosomiasis (chagas' disease), which is endemic in latin american countries, has emerged as a potential pathogen. other parasitic diseases that have been transmitted by transfusion include toxoplasmosis, leishmaniasis, and lyme disease. parvovirus b19 has now been recognized as a pathogen capable of transmission by transfusion, with typical clinical findings and the potential for severe hematologic complications.
cases of epstein-barr virus infection with a typical mononucleosis-like illness have been reported after transfusion. west nile virus has also been transmitted by transfusion. h1n1 influenza, severe acute respiratory syndrome (sars), and other new viral infections should be capable of transmission by transfusion, although cases have not been reported and the prevalence of asymptomatic disease is unknown. a rising area of concern is the transmission of prion disease, either creutzfeldt-jakob disease or bovine spongiform encephalopathy (bse). donor deferral criteria were implemented in 1987 for these diseases, and transfusion transmission of variant creutzfeldt-jakob disease, the human disease linked to bse, has been reported in the united kingdom. massive transfusion is defined as the administration of blood components in excess of one blood volume within a 24-hour period. in an average adult (70 kg), this represents approximately 10 units of wb or equivalent prbcs, crystalloid solution, and other components. massive transfusion, especially in the range of 20 or more units of blood products, causes complications not generally seen in usual transfusion practice: accumulation of undesirable substances present within banked blood and dilutional depletion of normal blood constituents that are lacking in stored units. trauma victims, surgical patients undergoing extensive procedures, and patients with vascular or coagulation disorders may be massively transfused in the critical care setting. survival of the massive transfusion episode is determined more by the nature and degree of the patient's injuries or medical conditions than by the transfusions themselves, but the presence of adverse effects of massive transfusion can complicate patients' courses in the icu. transfusion of large quantities of stored blood deficient in functional platelets often results in hemostatic defects or outright thrombocytopenia.
circulating platelets consistently decrease in inverse proportion to the amount of blood administered, with the hemostatically significant level of 50 × 10^9/l reached after 20 units. 112, 113 functional defects have also been noted, and the bleeding time is prolonged. 114 despite these laboratory changes, severe diffuse bleeding develops in less than 20% of massively transfused patients, and no laboratory studies predict those in whom it will. prophylactic platelet transfusion has not been shown to be of benefit. 115 platelet counts may return to hemostatically effective levels quickly in patients with normal marrow function. currently, resuscitation of massively bleeding patients is most often accomplished with prbcs in combination with crystalloid solution. this should result in hemodilution to about 60% of normal plasma factor levels after the transfusion of about 10 units; this factor level can effect normal hemostasis. in reality, however, crystalloids may be given in excess of prbcs, so after 10 units are transfused, less plasma protein may remain. bleeding is unlikely until prothrombin time (pt)/inr and ptt prolongations exceed 1.5 to 1.8 times the midpoint of the normal range, the equivalent of an inr approaching 2.0. 113 as with platelets, prophylactic administration of ffp has not proved effective in preventing diffuse bleeding. 116 thus, the decision to transfuse should be made on an individual basis, as determined by the presence of bleeding or unacceptable risk in patients with documented abnormalities in coagulation. one new area of controversy in the treatment of patients with massive hemorrhage is the use of recombinant activated factor vii. this new agent was created for the treatment of hemophiliac patients with high titers of antibodies to factor viii, which make them unable to benefit from transfusion of recombinant factor viii.
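the dilutional arithmetic quoted above (plasma factor levels falling to roughly 60% of normal after about 10 units of prbcs plus crystalloid) can be sketched with a simple stepwise dilution model. this is an illustrative sketch only, not a clinical tool: the per-unit dilution fraction of 5% is an assumed parameter chosen so the model reproduces the ~60% figure, and real factor levels depend on residual plasma in prbcs, crystalloid volumes, and extravascular replenishment.

```python
def residual_factor_fraction(units: int, dilution_per_unit: float = 0.05) -> float:
    """stepwise dilution model: each transfused unit of prbc + crystalloid
    is treated as replacing dilution_per_unit of the circulating plasma with
    factor-free fluid. dilution_per_unit = 0.05 is an assumed value, picked
    only to match the ~60% residual level quoted after 10 units."""
    return (1.0 - dilution_per_unit) ** units

# in this model, 10 units leaves ~60% of baseline factor activity
print(f"after 10 units: {residual_factor_fraction(10):.0%}")
```

the same model also illustrates why bleeding from factor depletion alone is uncommon until transfusion volumes are much larger: residual activity declines geometrically, not abruptly.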
activated factor vii bypasses that problem by binding to tissue factor and promoting thrombin generation and hence fibrin formation. 117 it is extremely expensive, has a short half-life, and carries a risk of inducing pathologic thrombosis, with potentially grave consequences. nevertheless, in numerous case reports, this new agent appears to be potentially beneficial if used early in the resuscitation of massively injured patients. unfortunately, its unsupervised use has also resulted in thrombotic complications and a relative lack of success, both of which suggest that carefully controlled clinical trials are appropriate. 118 blood preservative solutions contain excess citrate, which anticoagulates stored blood by binding ionized calcium. wb contains approximately 1.8 g of citrate/citric acid per unit in the plasma fraction. patients with normal liver function can metabolize the citrate load in 1 unit of wb in 5 minutes, but hepatic impairment may extend removal to 15 minutes or longer. toxicity may result when citrate is administered in excess of the metabolic rate, thereby causing a decrease in ionized calcium levels. 119 although paresthesias, cramps, and myoclonus accompany citrate excess, the chief danger of hypocalcemia is depression of myocardial contractility and potential prolongation of the qt interval. because the effects of citrate are transient and the use of prbcs containing little residual citrated plasma is far more common than massive transfusion with wb, routine administration of calcium is not indicated; clinically significant rebound hypercalcemia may result. calcium infusion should be limited to hypoperfused patients with hepatic or cardiac failure who manifest citrate toxicity, and careful monitoring is essential. as potassium leaks from rbcs during storage, up to 7 meq of extracellular potassium may accumulate in each unit.
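the citrate figures in this passage (about 1.8 g of citrate per unit of wb, cleared by a normal liver in roughly 5 minutes, or 15 minutes or longer with hepatic impairment) imply a rough transfusion rate above which citrate, and therefore hypocalcemia, can accumulate. a back-of-envelope sketch under those stated clearance times, not clinical guidance:

```python
CITRATE_G_PER_UNIT_WB = 1.8  # approximate citrate/citric acid per unit of whole blood

def steady_state_rate_units_per_hr(minutes_to_clear_one_unit: float) -> float:
    """transfusion rate (units of wb per hour) at which citrate delivery
    just matches hepatic clearance in a simple zero-order clearance model;
    infusing faster than this lets citrate accumulate."""
    return 60.0 / minutes_to_clear_one_unit

print(steady_state_rate_units_per_hr(5))   # normal liver (5 min/unit)
print(steady_state_rate_units_per_hr(15))  # hepatic impairment (15 min/unit)
```

the threefold lower tolerable rate with hepatic impairment is why citrate toxicity is chiefly a concern in hypoperfused patients with liver or cardiac failure, as the text notes.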
however, dangerous levels of potassium rarely develop in adults from stored blood; the potassium level is more likely to be determined by the patient's acid-base status. 117 studies of massively transfused patients have demonstrated a wide range of potassium levels, with hypokalemia seen as frequently as hyperkalemia. because of the many physiologic mechanisms altered during resuscitation, including those of the respiratory, renal, cardiac, and hepatic systems, it is impossible to predict the net effect of massive transfusion on serum potassium levels. the ph of banked blood drops during storage, from 7.16 at the time of collection to as low as 6.73 after several weeks of storage. administration of large quantities of acidic blood, together with the metabolic acidosis common in these patients before resuscitation, would lead one to expect worsening acidosis as the outcome of massive transfusion. however, patients are more likely to exhibit metabolic alkalosis at the end of the transfusion episode, 120, 121 partly because of improved tissue perfusion and the metabolism of citrate and lactate to bicarbonate. patients in renal failure may be unable to handle the bicarbonate load and require dialysis. acidosis persisting after transfusion suggests inadequate tissue perfusion. 119 empiric administration of bicarbonate to counter the acid load is not warranted and may contribute to the deleterious effects of hypercapnia in patients with impaired ventilation. as discussed previously, the level of rbc-associated 2,3-dpg in banked blood declines during storage, which increases the affinity of hemoglobin for oxygen and thereby results in decreased oxygen off-loading to tissues. even in massively transfused patients, it has been difficult to document a clinical impact of this shift, and no reliable method for restoring red cell 2,3-dpg has been developed. wb and prbcs are stored at approximately 4°c and require 30 to 45 minutes to warm to room temperature.
elective transfusions at standard flow rates are tolerated without the need to warm the blood; however, core body temperature, measured by esophageal probe, can fall to 30°c or lower with the administration of large volumes of cold blood over a period of 1 to 2 hours. 122 adverse effects of hypothermia include a decreased heart rate and myocardial contractility, cardiac arrhythmias, increased affinity of hemoglobin for oxygen resulting in decreased tissue oxygen delivery, dic, and impaired ability to metabolize the citrate load of stored blood. both blood warmers and patient warming may be instituted during massive transfusion, and patient core temperature should be monitored during such resuscitative efforts. whether massive transfusion in and of itself is a cause of ards is another source of controversy. there are certainly theoretical reasons why massive transfusion might precipitate ards: all cellular transfusions contain damaged or activated wbcs, cell membranes, aggregated platelets, and microthrombi, all of which are capable of lodging in and damaging pulmonary capillaries. despite this possibility, neither microfiltration of transfusions nor routine leukocyte depletion has shown a significant impact on the incidence of ards in massively transfused patients. 123 certainly, other causes of ards exist in patients who undergo massive transfusion, and the possibility of volume overload and trali should be considered in the evaluation of patients with hypoxia and diffuse pulmonary infiltrates after massive transfusion. management of such patients is supportive, consistent with the overall management of massive transfusion. 124, 125 patients with autoimmune hemolytic anemia (aiha) have an autoantibody, usually of broad specificity, that fixes itself to their rbcs and triggers extravascular immune-mediated destruction.
patients with aiha have a positive direct antiglobulin test 126 (dat, commonly known as the coombs test) and varying degrees of hemolysis, and their autoantibodies cause agglutination of rbcs from all donors during crossmatching. if the hemolysis is brisk, patients may require red cell transfusion to support oxygen needs before medical management of the aiha is effective. hence, transfusion is difficult because agglutination during crossmatching interferes with proper definition of compatible units of rbcs and because the transfused rbcs are themselves subject to the same immune hemolysis as the host rbcs. many blood banks have methods for depletion of autoantibodies from the recipient's plasma and elution of antibodies from rbcs to arrive at a proper crossmatch. 127 although such crossmatches are time consuming and not generally available on an emergency basis, they can be lifesaving. criteria for transfusion should remain the same as for other recipients. rbcs are crossmatched for red cell antigens in the abo and rh0(d) groups and for other red cell antigens when antibodies are present. however, there are several hundred other red cell antigens in the human family, and with repeated transfusion recipients may become alloimmunized to other antigens. generally, alloimmunization occurs in approximately 1% of transfusions, 68 but the prevalence of alloantibodies is higher in chronically transfused, relatively immunocompetent patients, especially african americans, whose distribution of red cell antigens varies significantly from that of the white population. alloimmunization rates of 30% or higher may be found in chronically transfused patients with hemoglobinopathies who have not received rbcs matched for potent minor antigens such as kell, duffy, and lewis. alloimmunization may present difficulties in crossmatching of blood, to the point that compatible blood must be obtained from rare-donor registries, if at all.
other patients present unresolved serologic problems in that the alloantibody is never precisely identified, yet the majority of blood available for transfusion is incompatible. the delay engendered by working with multiple or unidentified antibodies may be unacceptable in some critical care situations in which the need for oxygen-carrying capacity leaves no choice but to transfuse incompatible blood. the behavior of these antibodies in the laboratory may assist in predicting the clinical outcome of the incompatible transfusion. 128 special procedures such as clearance studies, 129 flow cytometry, 130 and in vivo crossmatching (cautious administration of a small aliquot of blood, with subsequent observation of serum and urine for evidence of hemolysis) are useful if time permits. emergency transfusion of type o, rh-negative uncrossmatched blood is generally reserved for the resuscitation of trauma patients, for whom the delay in crossmatching may be life-threatening. the risks of alloimmunization are generally accepted as low. even rh-positive type o rbcs may be used because rates of alloimmunization to rh0(d) are low under the circumstances of emergency transfusion. 128, 131 dic can present the clinician with difficult therapeutic choices. this common disorder in critically ill patients may be manifested as severe hemorrhage or thrombosis. therapy is primarily directed at alleviating the cause and supporting the patient. supportive therapy includes the transfusion of components needed to correct the bleeding diathesis caused by the consumption of platelets and fibrinogen, in addition to prbcs to restore oxygen-carrying capacity. platelets and fibrinogen (as cryoprecipitate) are the most useful components needed to repair the coagulopathy, but their use risks merely "fueling the fire" and increasing the microthrombosis of dic.
heparin anticoagulation is controversial 132, 133 and may increase the risk of bleeding, especially if depleted factors are not replenished. no definitive clinical trials have endorsed the routine use of heparin, and randomized trials of other components and coagulation inhibitors have uniformly been negative. in general, the use of heparin and antifibrinolytic agents has been confined to the most severe and protracted cases of dic. 134 cirrhotic patients or those with fulminant hepatic failure have a variety of hemostatic disorders that complicate transfusion management of a bleeding patient. 135 hepatic synthesis of coagulation factors may be markedly diminished, thereby necessitating replacement by ffp or cryoprecipitate. patterns of factor diminution may vary between acute hepatic necrosis and chronic cirrhosis. 136 associated hemodynamic alterations may make it impossible to administer the volumes required for effective hemostasis, however, and any effect is transient. the use of factor concentrates or antifibrinolytic agents may precipitate thrombosis. activation of fibrinolysis and decreased clearance of activated factors may produce or mimic chronic dic, thus further exacerbating the factor deficiencies and impairing coagulation. abnormal platelet function and thrombocytopenia may contribute to the coagulopathy of liver disease, with concomitant splenomegaly reducing the effectiveness of platelet transfusions. bleeding in uremic patients is exacerbated by an acquired platelet defect, in part secondary to dialyzable circulating molecules soluble in platelet membranes. platelet-associated vwf and plasma high-molecular-weight vwf multimers have also been shown to be decreased, 137 which may explain the benefit shown by ddavp 138 and cryoprecipitate in shortening the bleeding time and improving hemostasis in some uremic patients.
raising the hct by red cell transfusion in anemic patients has also been shown to shorten the bleeding time, presumably as a result of blood vessel wall-laminar blood flow interaction. transfusion of platelets in the absence of thrombocytopenia is unlikely to be of benefit because the transfused platelets rapidly become dysfunctional. more aggressive hemodialysis is the most widely accepted method of reducing platelet dysfunction. bmt patients are vulnerable to the severe infectious and toxic side effects of ablative treatment and hence may be cared for in critical care units. these patients may have intensive red cell and platelet transfusion requirements and need specialized products such as cmv-negative and irradiated blood components. a blood bank problem uniquely encountered in bmt is the need to switch the patient's abo group because of an abo-mismatched transplant, thus necessitating an exchange transfusion of red cells and plasma-containing products (i.e., platelet concentrates) of differing abo type to avoid hemolysis of donor and recipient cells. bmt patients may also manifest an increased rate of delayed hemolytic reactions 139 as donor "passenger" lymphocytes recognize recipient or transfused red cell antigens. patients should be monitored particularly closely between days 10 and 20 after a minor-mismatched allogeneic transplant, and aggressive transfusion should be undertaken if the hemoglobin level falls and the dat result becomes positive. the safest transfusion is one that is not given. therefore, alternatives to blood component therapy continue to be sought and are valuable adjuncts in some instances. it is possible to limit homologous blood exposure by the appropriate use of pharmacologic agents that promote hemostasis and the administration of recombinant hematopoietic growth factors or biologic growth modifiers to stimulate marrow hematopoiesis.
only one substitute for rbc transfusions has been approved in the united states, a perfluorocarbon oxygen carrier with significant limitations as a blood substitute. 140 other preparations that have been explored in clinical trials are cell-free hemoglobin solutions cross-linked or polymerized by chemical manipulation to prevent rapid clearance from the circulation. they are intended to provide short-term oxygen-carrying capacity for acutely ill patients and have the advantage of not requiring crossmatching or infection control. although these proposed products may have a longer shelf-life and are easier to transport, their drawbacks are many. most have a circulatory half-life of only about 24 hours. the oxygen dissociation curve for these substitutes is also frequently not favorable: either a high fio2 is required to "load" these molecules or they are less likely to deliver oxygen efficiently at lower po2 levels. 141 because the hemoglobin source is reclaimed bovine or human red cells, it is unlikely that patients who do not accept blood components because of their religious beliefs (jehovah's witnesses) will accept these types of hemoglobin solutions. one product in development uses recombinant technology to generate hemoglobin, and it is hoped that this solution may be acceptable to these patients. the licensed perfluorocarbon solutions have failed to demonstrate any utility as intravascular oxygen carriers because of their unfavorable p-50 (oxygen half-saturation pressure) and oxygen off-loading characteristics. they are finding limited application in regional oxygenation during angioplasty or stent placement procedures and a more novel use in "liquid ventilation," which involves ventilating intubated patients experiencing severe pulmonary compromise with superoxygenated perfluorocarbon solutions in place of oxygen-enriched air.
142 the synthetic vasopressin analogue ddavp increases plasma factor viii:c and promotes the release of vwf from endothelial stores. 143 ddavp has provided effective hemostasis in bleeding patients with mild hemophilia a and type i von willebrand's disease and has been used as prophylaxis for patients undergoing surgery. ddavp reportedly improves platelet function in some patients with qualitative platelet disorders associated with uremia, 136 cirrhosis, and aspirin ingestion. studies of its efficacy in cardiopulmonary bypass procedures are conflicting, but a subset of these patients may benefit. the chief drawback to its use is tachyphylaxis, which develops in essentially all cases after short-term repeated administration. the lysine analogues ε-aminocaproic acid and tranexamic acid inhibit fibrinolysis by blocking the binding of plasminogen and plasmin to fibrin. these antifibrinolytic agents may decrease bleeding and thus the need for homologous blood components in patients with hemophilia, thrombocytopenia, and systemic fibrinolysis. a novel and effective use of tranexamic acid involves administration as a mouthwash in preparation for oral surgery in patients with hemophilia or those receiving oral anticoagulant therapy. 144 the most serious side effect of these agents when systemically administered is thrombosis; thus, it is important to use them appropriately and monitor the patient carefully during their use. aprotinin is a naturally occurring bovine serine protease inhibitor that acts on plasma serine proteases such as plasmin, kallikrein, trypsin, and some coagulation proteins. aprotinin has been shown to reduce blood loss in patients undergoing cardiopulmonary bypass surgery 145 by inhibiting fibrinolysis and preventing platelet damage. however, more recent reports of renal injury and long-term mortality may mean an end to its use. 146 aprotinin has been used extensively in liver transplantation, which involves high blood loss.
repeated administration poses the risk of anaphylaxis and renal dysfunction. when time permits, vitamin k is the preferred agent to reverse the coagulopathy induced by oral anticoagulants. normalization of the pt can be seen in as few as 6 to 12 hours. additionally, selected cirrhotic patients may exhibit improvement in the pt when treated with therapeutic doses of vitamin k. many patients in critical care units exhibit a prolonged pt, especially if dietary supplements are limited and broad-spectrum antibiotic therapy is given. vitamin k is a safe and effective agent for reversing this effect. recombinant erythropoietin (epo) has dramatically reduced the red cell transfusion requirements of patients in chronic renal failure. epo also has applications in the adjunctive treatment of the anemia of premature infants and the anemia of chronic disease, especially rheumatoid arthritis, cancer, and aids. studies of its efficacy in reducing perioperative red cell transfusion requirements by increasing the yield of predeposited autologous blood or stimulating bone marrow synthesis after surgery have shown benefit in reducing blood transfusion, although preoperative planning and autologous deposits are required. 147 in contrast, and probably because the impact of epo is not immediate, the efficacy of epo in the icu is unproven and awaits the results of large clinical trials. recombinant growth factors such as granulocyte-macrophage colony-stimulating factor (gm-csf) and g-csf stimulate marrow production of leukocytes by enhancing several different granulocyte and macrophage functions. these agents are finding application in reducing the neutropenic period in bmt and cancer chemotherapy by increasing the leukocyte count in hypoproliferative marrow conditions. these myeloid growth factors are replacing granulocyte transfusions for their few remaining indications.
cell salvage equipment has been in clinical use for several decades, and although cell salvage is clearly capable of rescuing otherwise "lost" red cells, its full impact on transfusions has been poorly documented. cell salvage generally consists of collection of shed blood from a clean, uncontaminated operating field, followed by removal of the cellular elements and retransfusion into the patient. cell salvage has been used both intraoperatively and postoperatively, especially in cardiac surgery. although the clinical studies of cell salvage have many flaws, the overall success of this therapy in reducing transfusion has resulted in its wide application. 148 risks include bacterial contamination, febrile reactions, triggering of dic, and coagulopathy as a result of dilution. when combined with acute intraoperative hemodilution, this technology is also potentially cost saving. 149 the word apheresis is derived from the greek aphairein, "to take away"; thus, therapeutic hemapheresis is performed to remove unwanted plasma constituents (plasmapheresis) or blood cells (cytapheresis). automated cell separators use centrifugation or membrane filtration to remove and concentrate the selected blood element. many of the same devices used to prepare apheresis blood components for transfusion are used to perform patient procedures, so therapeutic apheresis is often administered under the auspices of the transfusion medicine service. rapid removal of plasma or cells may find several applications in intensive care practice (box 80-2). the goal of plasmapheresis, or plasma exchange (pe), is to remove or reduce the levels of an undesirable plasma constituent or, alternatively, by means of plasma replacement, to supply a missing substance. the agent to be removed by pe is thought to be an autoantibody in some of the neurologic, renal, or hematologic conditions treated in this manner.
150 immunomodulation by pe is another explanation for its effect, a theory indirectly supported by the equivalent efficacy of ivig therapy for several of these disorders. 151 pe for the amelioration of hyperviscosity from either excess igm in waldenström's macroglobulinemia or excess ig in multiple myeloma is an effective temporizing measure in the treatment of these conditions. 152 plasmapheresis with pe is the standard therapy for ttp. 153 unfortunately, few controlled trials of pe exist, although anecdotal reports abound. pe is seldom the definitive treatment of most of these conditions and is used most appropriately as a short-term adjunct to other medical modalities. the kinetics of pe predict that a one-volume exchange removes 65% of a given plasma constituent if the blood volume does not change and no additional synthesis or mobilization of the substance occurs. two- and three-volume exchanges remove 87% and 95%, respectively. highly protein-bound, intravascularly concentrated substances are most efficiently removed, whereas substances with a large volume of distribution (such as igg), active synthesis, or large extravascular stores are removed at less than predicted rates. the usual short-term intense course of pe schedules five one-volume exchanges (approximately 3 l each in normal-sized adults) over a 7-day period. the appropriate replacement fluid in most conditions is an albumin-saline mixture, which provides oncotic support without the risk of disease transmission borne by ffp. pe in patients with ttp uses replacement with ffp to supply the plasma protease that is consumed during the disease. side effects of pe are relatively common (10% to 30% of procedures) but generally minor and are related to vascular access, temporary discomfort, or vasomotor symptoms. 154 patient death is rarely due to the procedure itself but is largely of cardiopulmonary causes.
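the exchange kinetics quoted above behave like first-order washout: with a fixed blood volume, continuous mixing, and no ongoing synthesis or extravascular mobilization, the fraction of a constituent remaining after n plasma volumes is e^(-n). a minimal sketch under those assumptions, which yields ~63%, ~86%, and ~95% removal for one, two, and three volumes, close to the 65%/87%/95% figures in the text:

```python
import math

def fraction_removed(n_volumes: float) -> float:
    """fraction of a plasma constituent removed after exchanging n_volumes
    of plasma, assuming a fixed blood volume, continuous mixing, and no
    ongoing synthesis or mobilization (first-order washout)."""
    return 1.0 - math.exp(-n_volumes)

for n in (1, 2, 3):
    print(f"{n}-volume exchange removes ~{fraction_removed(n):.0%}")
```

the diminishing return per added volume is why five one-volume exchanges spread over a week are preferred to a single prolonged exchange, and why substances with large extravascular stores (such as igg) fall more slowly than the model predicts.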
plasma proteins such as coagulation factors, immunoglobulins, and complement will be removed by pe, and laboratory test results for coagulation and electrolytes may be deranged in the hours after pe. clinical bleeding is rarely observed. most coagulation factors do not fall below hemostatic levels and recover within hours, with the exception of fibrinogen, which may require several days for complete replenishment. leukapheresis may be required to urgently reduce the wbc count in patients with acute myeloid or lymphoblastic leukemia or chronic myelogenous leukemia with peripheral counts of 100 × 10^9/l or greater. each procedure is expected to drop the count by a third, but the effect is short lived. leukapheresis should be reserved for use only as an adjunct to chemotherapy in patients with pulmonary or cerebral leukostasis or for cytoreduction before chemotherapy in patients at risk for severe tumor lysis syndrome. plateletpheresis may be beneficial as short-term therapy in patients with symptomatic thrombocythemia manifested as cerebral or myocardial ischemia, pulmonary emboli, or gastrointestinal bleeding. each procedure should effect a 50% reduction in the platelet count. cytotoxic therapy should be started concomitantly as the definitive treatment.

[box 80-2. applications of therapeutic apheresis in intensive care: symptomatic hyperviscosity; thrombotic thrombocytopenic purpura; neurologic diseases (myasthenia gravis, guillain-barré syndrome); uncontrolled systemic vasculitis with critical end-organ injury; symptomatic leukocytosis; symptomatic thrombocythemia; sickle cell anemia crisis with pulmonary or central nervous system manifestations.]

litigation related to blood transfusion has become prominent, particularly after the epidemic of transfusion-associated aids. 155 most states regulate blood banking and medical practice, but blood products are regarded as a service, not as a commodity, so standard product liability does not pertain to blood components.
156 however, negligence in the course of preparing, testing, transferring, crossmatching, or administering blood products is still a potential cause for legal action. every clinician who orders transfusions must be aware that blood components, like drugs, are approved for specific uses and that the indications should be clearly documented in the medical record. the informed consent of the patient is an important area of potential liability. the joint commission on accreditation of healthcare organizations (jcaho) has required written patient consent for blood transfusions since 1996. what constitutes adequate informed consent and who is responsible for advising the patient are still in contention. elements of informed consent include an understanding of the need for transfusion, its risks and benefits, and the alternatives, including the risk of not undergoing transfusion, as well as the opportunity to ask questions. whether the clinician documents informed consent with an individual progress note in the patient record or with a standardized form is generally established as institutional policy. similarly, institutions vary with respect to policies for consenting adults who are temporarily incompetent, such as sedated patients in the icu. a competent adult patient may refuse blood transfusion, and jehovah's witnesses commonly do so for religious reasons. case law is clear in upholding this right of the patient, 157 which extends to care given at such time as the patient becomes incompetent (i.e., comatose), provided the refusal was expressed while competent. courts will usually order a lifesaving transfusion for minors. exceptions have been made in the case of some "emancipated minors" who are at the age of reason. most states have evoked a "special interest" in the welfare of a fetus in ordering transfusions for pregnant women.
the advent of sentinel event reviews and other quality management procedures for patient safety has had an impact on transfusion practice as well. procedures for patient identification before surgical procedures, including devices such as bar code readers, have also been applied to transfusion practice. however, annual sentinel event reviews reporting transfusion errors have remained constant according to jcaho records. 158

• blood components should be prescribed like drugs. appropriate blood component therapy requires that the specific blood product needed for a clear indication be prescribed, with avoidance of a formulaic approach.
• red blood cells should be transfused only to increase oxygen-carrying capacity. transfusion decisions should be based on individual patient physiology. the majority of patients with hemoglobin levels greater than 60 or 70 g/l will not require transfusion unless they have limited cardiopulmonary reserve or active bleeding.
• platelet transfusions are indicated for patients who are bleeding because of thrombocytopenia or functional platelet defects. guidelines for platelet transfusion are also conservative. prophylactic platelet transfusion remains controversial and is not warranted in many situations.
• fresh frozen plasma is indicated for the repletion of coagulation factors in bleeding patients deficient in those factors or to provide specific plasma proteins that cannot be obtained from safer sources.
• cryoprecipitate is a concentrated source of fibrinogen and selected coagulation factors. cryoprecipitate may be more helpful in correcting the hypofibrinogenemia of dilutional or consumptive coagulopathy than fresh frozen plasma.
• adverse reactions to blood components occur in 1% to 2% of transfusion episodes. adherence to routine protocols for the evaluation of transfusion reactions may save lives.
• acute hemolytic reactions are the leading cause of immediate transfusion fatalities.
prevention of these reactions requires strict adherence to transfusion and patient identification procedures.
• transmission of infectious agents by transfusion has been markedly reduced, and bacterial infection is now the most common infectious complication of transfusion.
• adverse effects unique to massive transfusion are likely to occur in the icu and complicate the management of critically ill or severely injured patients. component therapy for such patients should remain conservative. the emerging role of activated factor vii in the treatment of these patients requires further evaluation.
• informed consent for blood transfusion is a standard of practice. a competent adult has the legal right to refuse blood transfusion. consent in critically ill patients remains subject to individual institution policies.

references:
department of health and human services, food and drug administration: the code of federal regulations, 21 cfr parts 600, 606, 640 standards for blood banks and transfusion services markers for transfusion-transmitted disease in different groups of blood donors comparative safety of units donated by autologous, designated and allogeneic (homologous) donors directed blood donations: con goldfinger d: directed blood donations: pro shelf-life of bank blood and stored plasma with special reference to coagulation factors generation of cytokines in red cell concentrates during storage is prevented by prestorage white cell reduction universal wbc reduction: the case for and against chemical and hematological changes in stored cpda-1 blood restoration in vitro of erythrocyte adenosine triphosphate, 2,3-diphosphoglycerate, potassium ion, and sodium ion concentrations following the transfusion of acid-citrate-dextrose stored human red blood cells comparison of the hemostatic effects of fresh whole blood, stored whole blood, and components after open heart surgery in children a practice guideline and decision aid for blood transfusion rbc transfusion in the
icu: is there a reason? descriptive analysis of critical care units in the united states: patient characteristics and intensive care unit utilization oxygen transport in man physiologic aspects of anemia oxygen extraction ratio: a valid indicator of myocardial metabolism in anemia human cardiovascular and metabolic response to acute, severe isovolemic anemia transfusion guidelines for cardiovascular surgery: lessons learned from operations in jehovah's witnesses physiologic effects of acute anemia: implications for a reduced transfusion trigger a multicenter, randomized, controlled clinical trial of transfusion requirements in critical care is a low transfusion threshold safe in critically ill patients with cardiovascular diseases? oxygen extraction ratio: a valid indicator of transfusion need in limited coronary vascular reserve? for the abc investigators: anemia and blood transfusion in critically ill patients the crit study: anemia and blood transfusion in the critically ill-current clinical practice in the united states red cell transfusion practice following the transfusion requirements in critical care (tricc) study: prospective observational cohort study in a large uk intensive care unit appropriateness of red blood cell transfusion in australasian intensive care practice silent myocardial ischaemia and haemoglobin concentration: a randomized controlled trial of transfusion strategy in lower limb arthroplasty mathematical analysis of isovolemic hemodilution indicates that it can decrease the need for allogeneic blood transfusion guidelines for perioperative red blood cell transfusions american society of anesthesiologists task force: practice guidelines for blood component therapy prudent strategies for elective red blood cell transfusion platelet transfusion therapy. 
one-hour posttransfusion increments are valuable in predicting the need for hla-matched preparations volunteer donor apheresis national institutes of health consensus conference: platelet transfusion therapy the bleeding time as a screening test for evaluation of platelet function changes in blood coagulation during and following cardiopulmonary bypass: lack of correlation with clinical bleeding gamma globulin for idiopathic thrombocytopenic purpura intravenous anti-d treatment of immune thrombocytopenic purpura: experience in 272 patients hazard of platelet transfusion in thrombotic thrombocytopenic purpura mantel n: the quantitative relation between platelet count and hemorrhage in patients with acute leukemia controversies in platelet transfusion therapy safety of stringent prophylactic platelet transfusion policy for patients with acute leukemia the natural history of alloimmunization to platelets clinical factors influencing the efficacy of pooled platelet transfusions optimizing platelet transfusion therapy current status of solvent/detergent-treated frozen plasma update on pathogen reduction technology for therapeutic plasma: an overview national institutes of health consensus conference: fresh frozen plasma: indications and risks the role of prophylactic fresh frozen plasma in decreasing blood loss and correcting coagulopathy in cardiac surgery: a systematic review hemostasis testing during massive blood replacement: a study of 172 cases should plasma be transfused prophylactically before invasive procedures? screening for the risk for bleeding or thrombosis lack of increased bleeding after liver biopsy in patients with mild hemostatic abnormalities why is fresh-frozen plasma transfused?
effect of plasma transfusions on the prothrombin time and clotting factors in liver disease clotting factor levels and the risk of diffuse microvascular bleeding in the massively transfused patient fresh frozen plasma and platelet transfusion for nonbleeding patients in the intensive care unit: benefit or harm? treatment of the bleeding tendency in uremia with cryoprecipitate desmopressin: a nontransfusional form of treatment for congenital and acquired bleeding disorders clinical uses of intravenous immunoglobulin american college of obstetricians and gynecologists: prevention of d isoimmunization, technical bulletin no 147 human albumin solution for resuscitation and volume expansion in critically ill patients hypotension associated with prekallikrein activator (hageman-factor fragments) in plasma protein fraction granulocyte transfusions for treating infections in patients with neutropenia or neutrophil dysfunction special report: transfusion risks transfusion errors: scope of the problem, consequences, and solutions transfusion errors in new york state: an analysis of 10 years' experience jcaho: blood transfusion errors: preventing future occurrences: available at hemolytic transfusion reaction transfusion reactions associated with anti-iga antibodies: report of four cases and review of the literature febrile transfusion reaction: what blood component should be given next?
clinical outcomes following institution of the canadian universal leukoreduction program for red blood cell transfusions delayed hemolytic transfusion reaction: an immunologic hazard of blood transfusion transfusion-related acute lung injury and pulmonary edema in critically ill patients: a retrospective study for the nhlbi working group on trali: transfusion-related acute lung injury: definition and review transfusion-associated acute lung injury (trali): clinical presentation, treatment and prognosis transfusion-related acute lung injury caused by two donors with antihuman leucocyte antigen class ii antibodies: a look-back investigation for the trali consensus panel: proceedings of a consensus conference: towards an understanding of trali graft-versus-host disease: new directions for a persistent problem survey of transfusion-associated graft-versus-host disease in immunocompetent recipients the effect of prestorage irradiation on post-transfusion red cell survival improvement of kidney-graft survival with increased numbers of blood transfusion blood transfusion-modulated tumor recurrence: first results of a randomized study of autologous versus allogeneic blood transfusion in colorectal cancer surgery transfusion-associated cancer recurrence and postoperative infection: meta-analysis of randomized, controlled clinical trials transfusion practice and nosocomial infection: assessing the evidence transfusion increases the risk of postoperative infection after cardiovascular surgery immunosuppressive effects of blood transfusion transfusion of leukoreduced red blood cells may decrease postoperative infections: two meta-analyses of randomized controlled trials transfusion immunomodulation or trim: what does it mean clinically?
risks of blood transfusion transfusion-transmitted cytomegalovirus and epstein-barr virus diseases current status of microbial contamination of blood components: summary of a conference septic reactions to platelet transfusions: a persistent problem red blood cell transfusions contaminated with yersinia enterocolitica-united states, 1991-1997, and initiation of a national study to detect bacteria-associated transfusion reactions routes of infection, viremia, and liver disease in blood donors found to have hepatitis c infection clinical outcomes after transfusion-associated hepatitis c adverse consequences of blood transfusion: quantitative risk estimates stramer sl: current prevalence and incidence of infectious disease markers and estimated window-period risk in the american red cross blood donor population possible transfusion-associated acquired immune deficiency syndrome (aids): california impact of explicit questions about high-risk activities on donor attitudes and donor referral patterns. results in two community blood centers the effectiveness of the confidential unit exclusion option human immunodeficiency virus type 2 infection in the united states: epidemiology, diagnosis, and public health implications update: hiv-2 infection among blood and plasma donors-united states transmission of human t-lymphotropic virus types i and ii by blood transfusion a prospective study of transmission by transfusion of htlv-i and risk factors associated with seroconversion post-transfusion cytomegalovirus infections reducing the risk for transfusion-transmitted cytomegalovirus infection is white blood cell reduction equivalent to antibody screening in preventing transmission of cytomegalovirus by transfusion?
a review of the literature and meta-analysis transmission of parasitic infections by blood transfusion hemostasis in massively transfused trauma patients laboratory hemostatic abnormalities in massively transfused patients given red blood cells and crystalloid serial changes in primary hemostasis after massive transfusion prophylactic platelet administration during massive transfusion clotting factor levels and the risk of diffuse microvascular bleeding in the massively transfused patient potential role of recombinant factor viia as a hemostatic agent recombinant factor viia: unregulated continuous use in patients with bleeding and coagulopathy does not alter mortality and outcome massive blood replacement: correlation of ionized calcium, citrate, and hydrogen ion concentration potassium levels, acid-base balance and massive blood replacement acid-base status of seriously wounded combat casualties: resuscitation with stored blood blood temperature: a critical factor in massive transfusion an in vivo evaluation of microaggregate blood filtration during total hip replacement massive transfusion as a risk factor for acute lung injury: association or causation? guidelines on the management of massive blood loss autoimmune hemolytic anemia approaches to selecting blood for transfusion to patients with autoimmune hemolytic anemia the clinical implications of platelet transfusions associated with abo or rh(d) incompatibility survival curves of incompatible red cells: an analytical review isotype-specific detection of abo blood group antibodies using a novel flow cytometric method use of rh positive blood in emergency situations pharmacologic agents in the management of bleeding disorders disseminated intravascular coagulation.
approach to treatment the pathogenesis and management of disseminated intravascular coagulation coagulation disorders in liver disease new insights into haemostasis in liver failure plasma and platelet von willebrand factor defects in uremia deamino-8-d-arginine vasopressin shortens the bleeding time in uremia donor-derived red blood cell antibodies and immune hemolysis after allogeneic bone marrow transplantation fluosol-da as a red-cell substitute in acute anemia the prospect for red cell substitutes low-dose perfluorocarbon: a revival for partial liquid ventilation? response of factor viii/von willebrand factor to ddavp in healthy subjects and patients with haemophilia a and von willebrand's disease management of oral bleeding in haemophiliac patients amelioration of the bleeding tendency of preoperative aspirin after aortocoronary bypass grafting for investigators of the multicenter study of perioperative ischemia research group: mortality associated with aprotinin during 5 years following coronary artery bypass graft surgery does the use of erythropoietin reduce the risk of exposure to allogeneic blood transfusion in cardiac surgery?
a systematic review and meta-analysis cell salvage for minimizing perioperative allogeneic blood transfusion cost-effectiveness of cell salvage and alternative methods of minimizing perioperative allogeneic blood transfusion: a systematic review and economic model plasmapheresis in nephrology: an update national institutes of health consensus conference: the utility of therapeutic plasmapheresis for neurological disorders correction of hyperviscosity by apheresis improved survival in thrombotic thrombocytopenic purpura-hemolytic uremic syndrome therapeutic plasma exchange as a nephrological procedure: a single-center experience a review of transfusion-associated aids litigation: 1984 through 1993 legal, financial, and public health consequences of hiv contamination of blood and blood products in the 1980s and 1990s legal aspects of transfusion of jehovah's witnesses joint commission on accreditation of hospitals and healthcare organizations: sentinel event statistics: available at

key: cord-033328-ny011lj3 title: managing the pandemic: the italian strategy for fighting covid-19 and the challenge of sharing administrative powers date: 2020-09-03 journal: nan doi: 10.1017/err.2020.82 sha: doc_id: 33328 cord_uid: ny011lj3

this article analyses the administrative measures and, more specifically, the administrative strategy implemented in the immediacy of the emergency by the italian government in order to determine whether it was effective in managing the covid-19 pandemic throughout the country. in analysing the administrative strategy, the article emphasises the role that the current system of constitutional separation of powers plays in emergency management and how this system can impact health risk assessment.
an explanation of the risk management system in italian and european union (eu) law is provided and the following key legal issues are addressed: (1) the notion and features of emergency risk regulation from a pandemic perspective, distinguishing between risk and emergency; (2) the potential and limits of the precautionary principle in eu law; and (3) the italian constitutional scenario with respect to the main provisions regulating central government, regional and local powers. specifically, this article argues that the administrative strategy for effectively implementing emergency risk regulation based on an adequate and correct risk assessment requires "power sharing" across the different levels of government with the participation of all of the institutional actors involved in the decision-making process: government, regions and local authorities.

"and the flames of the tripods expired. and darkness and decay and the red death held illimitable dominion over all". edgar allan poe, the masque of the red death, complete tales and poems (new york, vintage books 1975) p 273

on 30 january 2020, the world health organization (who) declared the outbreak of novel coronavirus a "public health emergency of international concern" (pheic). 2 in the light of its later levels of spread and severity worldwide, the who then assessed covid-19 as a "pandemic". 3 the pandemic has spread rapidly in several european union (eu) member states. italy, however, is a special case: here, the covid-19 outbreak spiralled upwards earlier and more severely than elsewhere in europe, reaching a high mortality rate and creating the conditions for the public healthcare system's collapse. in this scenario, the italian government (from now on the government) declared a nationwide state of emergency, 4 followed by increasingly restrictive measures aimed at slowing and containing the spread of the virus and mitigating the pandemic's effects under the by now well-known "flatten the curve" imperative.
the last of these measures 5 established the national lockdown, extending the emergency rules to the entire country for six months 6 and, more generally, providing what has been called the "italian model to fight covid-19", namely "diminish viral contagions through quarantine; increase the capacity of medical facilities; and adopt social and financial recovery packages to address the pandemic-induced economic crisis". 7 in this article, starting from the main regulatory acts and considering recent scientific knowledge and epidemiological data on covid-19, we will examine the administrative measures the government has taken and the strategy it has implemented to deal with the pandemic in the immediacy of the emergency. after this initial analysis, we might legitimately wonder whether those measures and that strategy have proven effective in containing the pandemic. more generally, by analysing the administrative strategy, the article emphasises the role that the current system of constitutional separation of powers plays in emergency management and how this system can impact health risk assessment. an explanation of the risk-management system in italian and eu law will be provided and the following key legal issues will be analysed: (1) the notion and features of emergency risk regulation from a pandemic perspective, distinguishing between risk and emergency; (2) the potential and limits of the precautionary principle in eu law;

2 who, "statement on the second meeting of the international health regulations (2005) emergency committee regarding the outbreak of novel coronavirus (2019-ncov)", geneva, switzerland, 30 january 2020. pheic has been defined in the international health regulations (ihr) of 2005 as an extraordinary event which can: (1) constitute a public health risk to other states through the international spread of disease; and (2) potentially require a coordinated international response.
furthermore, this definition implies a situation that is: (1) serious, unusual or unexpected; (2) carries implications for public health beyond the affected state's national borders; and (3) may require immediate international action.

3 who, "director-general's opening remarks at the media briefing on covid-19", 11 march 2020.
4 resolution of the council of ministers of 31 january 2020, adopted pursuant to legislative decree 1/2018 (civil protection code). on the declaration of emergency rule, see european commission for democracy through law (venice commission).
5 dpcm of 9 march 2020.
6 for the general framework of all measures adopted by the italian state during the covid-19 emergency, see.
7 fg nicola, "exporting the italian model to fight covid-19" (the regulatory review, 23 april 2020).

and (3) the italian constitutional scenario with respect to the main provisions regulating central government, regional and local powers.
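the "flatten the curve" imperative invoked above has a simple quantitative reading: lowering the transmission rate spreads roughly the same epidemic over a longer period with a lower peak, keeping demand within healthcare capacity. a minimal discrete-time sir-type sketch illustrates this; the parameter values are purely illustrative and are not fitted to italian epidemiological data.

```python
def epidemic_peak(beta: float, gamma: float = 0.1, i0: float = 1e-4, days: int = 400) -> float:
    """Peak infectious fraction of the population in a discrete-time SIR model.
    beta: daily transmission rate; gamma: daily recovery rate.
    Illustrative parameters only, not COVID-19 estimates."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i          # new cases generated this day
        s, i = s - new_infections, i + new_infections - gamma * i
        peak = max(peak, i)
    return peak

# measures such as lockdowns and social distancing lower beta:
# the epidemic peak drops and the curve "flattens"
unmitigated = epidemic_peak(beta=0.4)
mitigated = epidemic_peak(beta=0.2)
print(unmitigated > mitigated)  # True
```

the comparison is the whole point: the same model with a halved transmission rate produces a markedly lower peak, which is the quantitative rationale behind the restrictive measures the article goes on to analyse.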
9 in a matter of days, the government approved three important regulatory acts based on the implementation of decree-law 6/2020: 10 first with the decree of the president of the council of ministers (dpcm) of 8 march 2020, 11 second with the dpcm of 9 march 2020 12 and third with the dpcm of 11 march, 13 the government established stringent emergency administrative measures to curb the pandemic's spread throughout the country. 14 in the first instance, these measures were gradual and concerned specific municipalities, provinces or regionsespecially in northern italythat were hardest hit by the virus and therefore classified as "red zones" subject to government-imposed local lockdowns. later on, the government established the national lockdown, and emergency measures were extended to the entire country for six months. in particular, pursuant to article 1(1) of the dpcm of 8 march 2020, the government imposed a lockdown in lombardy and another fourteen provinces of northern italy. in doing so, the government introduced several legal prohibitions, such as the ban on people travelling to and from places in the red zones. with the subsequent national lockdown, the government imposed a travel ban in the entire country according to article 1(1), dpcm of 9 march 2020, and prevented all forms of social gathering in public places or places open to the public across the country, according to article 1(2), dpcm of 9 march 2020. furthermore, pursuant to articles 1(1), 1(2) and 1(3), dpcm of 11 march 2020, retail businesses and personal services were suspended. 15 as a consequence of the national lockdown, the ministry of health's order of 20 march 2020 provided several stringent measures that prohibited many activities, such as the ban on accessing all public places, on exercising in public places and on going to holiday homes. 
16 in addition, with its order of 28 march 2020, the ministry of health, in agreement with the ministry of transport, established that people entering italy by plane, boat, rail or road must declare their reason for travel, the address where they plan to self-isolate, how they intend to travel there and their phone number so that authorities can contact them throughout an obligatory fourteen-day quarantine. 17 moreover, several administrative sanctions were gradually established in the various regulatory acts. the last of these acts introduced rigorous sanctions for people who leave home without valid reasons and for undertakings that do not comply with the order to close. 18 in the meantime, the regions and local authorities also adopted several ordinances establishing emergency administrative measures for the pandemic in their area. 19 lastly, the government issued decree-law no. 19 of 25 march 2020, with the aim of rationalising and coordinating emergency powers among the different levels of government. 20

15 20 in particular, art 2(3) of decree-law 19/2020 did not affect the effects produced and acts adopted on the basis of decrees and ordinances issued pursuant to decree-law 6/2020 or art 32 of law 833/1978, and established that the measures previously adopted by the dpcms of 8 march 2020, 9 march 2020, 11 march 2020 and 22 march 2020 as still in force on the date of entry into force of the said decree-law shall continue to apply within the original terms.

***

in the following pages, emphasising the role that the current structure of constitutional separation of powers plays in risk assessment, i will argue that the main problems of the italian administrative strategy for the covid-19 pandemic are due to the lack of effective "sharing of powers", and more specifically to the failure to share administrative
regulatory powers among the different levels of government with the participation and cooperation of all institutional actors involved in the emergency decision-making process: the government, regions and local authorities. 21 from this point of view, as i will attempt to explain, the failure to share administrative regulatory powers can have a decisive impact on risk assessment at the national level in terms of the effectiveness or ineffectiveness of the strategies adopted by the various institutional actors called upon to manage the emergency in their own areas. here, by "sharing powers", i mean the idea that the institutional actors involved in the decision-making process cooperate in the exercise of their powers by adopting consistent measures in the public interest; that is to say, with the aim of maximising the rights of individuals as required by the italian constitution. 22 power sharing does not mean homogenisation. indeed, adopting different administrative strategies at different levels of government might increase the effectiveness of the response to a pandemic, but these measures must be shared among all of the actors involved in emergency management. sharing powers, measures and local strategies will be useful for an effective policy for containing the virus's nationwide spread based on an overall risk assessment. hence, the idea of shared powers emphasises the role of cooperation in specific institutional contexts, such as italy's, where competences are allocated across the different levels of government. more generally, the sense is that sharing powers in multi-level systems enables states to perform better in terms of democracy, as powers are balanced between state and local levels. 23 as we will see, however, the absence of effective power sharing at all levels of government in a pandemic can produce serious problems in correctly assessing risk and consequently in the emergency management strategy.
in particular, i will discuss the problem of the lack of effective power sharing in italian policies from two key points of view: the government's administrative strategy in addressing the virus's spread by means of an "incremental approach" (section iv.1.a); and the government's administrative strategy in implementing a national pandemic health plan (section iv.1.b). before doing so, i will outline some key legal issues for the topics examined in this article. in particular, to put the administrative strategy devised by the government in the covid-19 emergency into context, i will analyse: (1) the notion and features of emergency risk regulation from a pandemic perspective, distinguishing between risk and emergency; (2) the potential and limits of the precautionary principle in eu law; and (3) the italian constitutional scenario with respect to the main provisions governing government's, regions' and local authorities' powers. 21 this preliminary analysis of key legal issues is useful for understanding why the administrative strategy has proven ineffective in managing the pandemic (sections iv.1.a and iv.1.b). placing the notion and its main features in the context of a pandemic, we could define emergency risk regulation as the action undertaken in the immediacy of a pandemic in order to mitigate its impact. 24 from this perspective, we should bear in mind the distinction between risk and emergency. generally speaking, the traditional approach of administrative law refers to the notion of emergency rather than to the notion of risk, which legal doctrine touches on only marginally. 25 with regard to the emergency, as a safeguard clause to deal flexibly with pandemic risks, 26 governments and other public authorities may invoke the use of extraordinary powers to restore the normal course of legal relations.
27 what is more, regulators have used emergency tools to act in the expectation of a risk for many years, although there is no denying that a risk is a potential danger, whereas an emergency is an actual danger. indeed, it should be sufficiently clear that emergency power is ineffective when applied in a situation that is only potentially dangerous. in this connection, it has been argued 28 that the methods of exercising administrative powers can be better regulated by putting the administrative regulation in the category of risk rather than that of emergency. we might observe that if the notion of "risk" characterises a peculiar, intermediate state between security and destruction, 29 in "emergency risk" the balance between these two clearly tilts towards the latter. 30 in fact, as it is triggered by a pandemic, emergency risk regulation presupposes the existence, or the mere threat, of a pandemic.

24 a alemanno (ed.), governing disasters: the challenges of emergency risk regulation (cheltenham, edward elgar 2011) p xix.
25 however, the notion of risk in italian administrative law is analysed by m simoncini, la regolazione del rischio e il sistema degli standard. elementi per una teoria dell'azione amministrativa attraverso i casi del terrorismo e dell'ambiente [risk regulation and the standards system. elements for a theory of administrative action through the cases of terrorism and the environment] (napoli, editoriale scientifica 2010) chs 2 and 4, where the author, postulating the notion of risk, argues for and suggests, in an innovative approach, the transition from the "emergency" perspective to the "risk regulation" perspective.
26 . beck is responsible for analysing the sociopolitical dimension of risk management and in particular the problem of the relationship between science and society through the criticism of the monopoly that scientific rationality currently holds.
30 alemanno, supra, note 24, xxii.

the pandemic, as
a possible cause of disaster for humans, is an event of substantial extent causing significant physical damage or destruction, loss of life or drastic change to the natural environment. 31 typically, one speaks of a pandemic when a threat to people's health is perceived that calls for urgent remedial action under conditions of uncertainty. 32 fundamentally, emergency risk regulation in a pandemic event, as in other disasters, finds its natural regulatory space in two stages: mitigation and emergency response. 33 in principle, mitigation efforts attempt to reduce the potential impact of a pandemic before it strikes, while a pandemic response tends to do so after the event. however, the distinction between emergency mitigation and emergency response is not always very sharp. when called upon to act under the menace of a pandemic, governments must both mitigate and respond to the threat in a situation characterised by suddenness (emergency) and significance. 34 in a pandemic, emergency risk regulation is clearly called on to operate in the initial phase of the disease's spread, when the mere threat overshadows the regulatory context by virtue of its status as an emergency. accordingly, the most cost-effective strategies for increasing pandemic preparedness with administrative regulation, especially in resource-constrained settings, may consist of: (1) investing to reinforce the main public health infrastructure; (2) increasing situational awareness; and (3) quickly containing further outbreaks that could extend the pandemic. in addition, especially once the pandemic has begun, a coordinated response should be implemented where the public regulator focuses on: (4) maintaining situational awareness; (5) public health messaging; (6) reducing disease transmission; and (7) care and treatment of the ill. 
successful contingency planning and an administrative strategy using the emergency risk regulation approach call for surge capacity, or in other words the ability to scale up the delivery of health interventions in proportion to the severity of the event, the pathogen and the population at risk. 35 the pandemic may produce a significant impact on the regulatory context by justifying the partial or total suspension of the ordinary decision-making process. 36 departures from the rule of law, or simply from established procedures, are generally perceived as necessary if the event has met the significance threshold. however, the use of emergency administrative measures, such as temporary and exceptional measures, should be considered legitimate only for the period in which the pandemic lasts. 37 by contrast, prolonging the exceptional order beyond the time of the pandemic means that any powers and measures designed to be temporary will be made permanent, intensifying the controlling authority's capacity, even though this might limit the enjoyment of individual rights. 38 in addition, while the general need to prevent a pandemic cannot be ignored, the pandemic should also be seen as an opportunity for risk regulation to prevent not only the sudden impact of a pandemic situation, but also any distorting effects or mishandling of the necessary recourse to emergency powers. consequently, it might now be inferred that emergency risk regulation in the context of a pandemic is a relevant regulatory methodology that combines the risk approach with the possibility of resorting to extraordinary measures in case a pandemic occurs. this methodology is essential for an effective administrative strategy for dealing with a pandemic because it permits constant monitoring and management of risks that can have serious consequences for society. 31 ibid, xxii-xxiii. see also dd caron, "addressing catastrophes: conflicting images of solidarity and self interest" in dd caron and ch leben (eds), 
by assessing the risks and taking proportionate measures, the negative effects of the emergency can be reduced and the use of emergency powers can be limited. indeed, it should be pointed out that the principle of reasonableness, which is generally invoked in the exercise of emergency powers against immediate danger, does not operate in emergency risk regulation. instead, as i will claim later, it will be the precautionary principle that matters (section iii.2). furthermore, it must be said that emergency risk regulation entails an accurate assessment of the factual situation based on scientific evidence. 39 to apply this methodology correctly, a variety of factors must be considered (including the real level of the threat as well as how people perceive it) in a step-by-step analysis based on the available scientific knowledge. in particular, as i will claim in analysing the italian policies (sections iv.1.a and iv.1.b), the administrative strategy for effectively implementing emergency risk regulation in a pandemic requires power sharing across the different levels of government, with the participation of all of the institutional actors involved in the decision-making process, in order to adopt consistent measures based on the constant monitoring and updating of the nationwide epidemiological risk assessment. hence, effective sharing of administrative powers (and more specifically of the administrative regulatory powers for emergencies) between the government, the regions and the local authorities would optimise the adoption of proportionate measures for controlling and containing the virus throughout the country, avoiding or at least delaying the application of stringent measures such as the lockdown of municipalities, provinces, regions or entire states. 37 g martinico and m simoncini, "emergency and risk in comparative public law" (verfassungsblog, 9 may 2020). according to the authors, it is the facts and not the law that indicate the conclusion of an emergency. 
thus, the risks posed by the use of extraordinary administrative measures should be considered, especially at the end of the emergency, when the government's powers should be subject to legal control in order to avoid departures from the original objectives. in the same sense, see also simoncini, supra, note 27, 39. 38 on the state of exception, see c schmitt, die diktatur: von den anfängen des modernen souveränitätsgedankens bis zum proletarischen klassenkampf (berlin, duncker & humblot 1989). schmitt's jurisprudential thinking placed the state of exception at the very centre of analysis, beginning with his work on the roman dictatorship. 39 martinico and simoncini, supra, note 37. in managing the pandemic, the government's administrative strategy should take into account the emergency risk regulation methodology we have just outlined. in the eu legal system, the precautionary principle 40 is described in article 191(2) tfeu on environmental policy. 41 the jurisprudence of the european court of justice (ecj) played a prominent role in elevating the precautionary principle to the status of a general principle of eu law. some ecj judgments in health matters are seminal in this regard. 42 according to the ecj's jurisprudence, the precautionary principle requires that competent authorities adopt appropriate administrative measures to prevent specific potential health risks. the ecj's approach maintains that an appropriate application of the precautionary principle presupposes the identification of hypothetically harmful effects for health flowing from the contested administrative measure, combined with a comprehensive assessment of the risks to health based on the most reliable scientific data available. 43 in like manner, the european commission (ec) has contributed significantly to outlining the features of the precautionary principle in the eu legal system. 
in its communication of 2000, the ec sought to establish a common understanding of the factors leading to recourse to the precautionary principle and of its place in decision-making. 44 according to the ec communication, the principle covers those circumstances where scientific evidence is insufficient, inconclusive or uncertain, but where preliminary scientific evaluation provides reasonable grounds for concern that the potentially dangerous effects on human health might be inconsistent with the chosen level of protection. 45 various factors can trigger the adoption of precautionary measures. these factors inform the decision on whether to act or not, this being an eminently political decision, a function of the risk level that is "acceptable" to the society on which the risk is imposed. 46 the ec has also established guidelines for those situations where action based on the precautionary principle is deemed necessary in order to manage risk. in these situations, a cost-benefit analysis comparing the likely positive and negative effects of the envisaged action and of inaction is recommended, and it should also include non-economic considerations. 47 however, risk management in accordance with the precautionary principle should be proportionate, meaning that administrative measures should be proportional to the desired level of protection. in some cases, an administrative response that imposes a total ban may not be proportional to a potential risk; in others, it may be the only possible response. in any case, such measures should be reassessed in the light of recent scientific data and changed if necessary. in eu law, therefore, the precautionary principle has been widely recognised as a defining principle of risk regulation alongside the regulatory aim of a high level of protection. nevertheless, this principle might prove ineffective or even harmful if applied in a "strong" form. 
the strong form of the principle has been authoritatively criticised 48 on the grounds that it suggests that regulation is required whenever there is a potential risk to health, even if the supporting evidence is conjectural and the economic costs of administrative regulation are high. in particular, if governments adopt the strong form of the principle, it would always require regulating activities (consequently imposing a burden of proof each time), even if it cannot be demonstrated that those activities are likely to cause harm. 49 in addition, as the need for selectivity of precautions is not simply an empirical fact but a conceptual inevitability, no society can be highly precautionary with respect to all risks. 50 hence, in this strong form, the precautionary principle proves ineffective and even harmful by requiring stringent administrative measures that can be paralysing, in that they forbid all courses of action, regulation and inaction alike. thus conceived, this principle fails to point in any direction or to provide precise guidance for governments and regulators. recently, the limits of the precautionary principle have been analysed in the field of administrative and constitutional law. an interesting recent work proposes a dichotomy between precautionary and optimising constitutionalism. 51 in summary, the theory advances two distinct propositions. the first is that constitutions should be viewed as devices for regulating political risks. those political risks are referred to as "second-order risks", as opposed to "first-order risks" such as wars, diseases and other social ills. 52 many of these risks are described as "fat-tail risks" that are exceedingly unlikely to materialise, but more likely than in a normal distribution, and are exceedingly damaging if they do materialise, as in the case of a pandemic. 
53 under "maximin constitutional" approaches, it is suggested that precautionary rules can overcompensate for these low-likelihood risks and even cause the very dangers that they seek to prevent. 54 hence, precautionary constitutionalism is myopic in focusing on certain risks, and the notion of unappreciated or unaccommodated risks is central. on the basis of this hypothesis, the best way to regulate risk is thus to avoid obsessive views on risk avoidance or precautions and instead to allow greater flexibility in addressing the full array of risks inherent in government. 55 what vermeule calls "optimising constitutionalism" is an answer to those who frame their understanding of the constitution along more rigid precautionary principles. 56 vermeule's approach has been criticised. 57 following these criticisms, i believe that this approach also reveals some critical points about the notion of risk. unless one adopts a more fungible notion of risk, i do not believe that "precautionary constitutionalism" is suboptimal for risk. it depends on how one weighs the risks involved in governing, even if one accepts risk analysis as the best measure of the success of a constitutional system. i claim, more generally, that correctly applying the precautionary principle, although it works better in a context of risk than in one of emergency, is nonetheless important in managing a pandemic because it makes it possible to delay the implementation of stringent emergency measures. we have emphasised that administrative precautionary measures, unlike emergency ones, do not suspend the rule of law, since they activate soft government regulation that does not jeopardise the fundamental rights that compete with those threatened by the imminent danger. 
hence, in my opinion, precautionary measures, where they are effectively shared across the different levels of government through appropriate risk assessment, would serve to avoid or at least delay governments' activation of a state of emergency. activating a state of emergency, by contrast, would trigger hard government regulation through emergency measures that suspend the rule of law and therefore jeopardise fundamental rights. in a particular context such as the covid-19 pandemic, the precautionary principle could also be invoked (and the implementation of precautionary administrative measures would be useful) in the presence of an emergency declaration issued by governments. in this sense, i argue that the declaration of a state of emergency for a pandemic is based on a technical risk assessment (ie technical discretion 58 ) by the administration (eg the government). in a pandemic, then, the emergency relates essentially to the capacity of administrations (eg governments, health authorities) to manage cases requiring healthcare (eg intensive care for respiratory support, hospitalisations for advanced pharmacological treatments and so on). thus, the subject of the technical assessment of the fact (the pandemic) is provided by the evaluation of the administration's capacity to fulfil the tasks established by the legal system to protect the right to health enshrined in article 32 of the italian constitution (section iv.2). furthermore, to be effective in emergencies such as a pandemic, the notion of the principle to which i refer should not entail the activation of the precautionary measures typical of its strong version (which is exemplified in the well-known phrase "better safe than sorry"). 
in its strong version, in fact, the precautionary principle would be both paralysing and uneconomical, since it requires that any and all risks be prevented, even those that are least likely to occur or that have been created artificially for political reasons (i am thinking here of george w. bush's preventative war doctrine) in order to justify stringent administrative measures issued by governments for purposes not necessarily related to the alleged risk. 58 italian legal doctrine distinguishes between "administrative discretion" and "technical discretion" under the influence of ms giannini, il potere discrezionale della pubblica amministrazione. by contrast, balancing costs against benefits might provide the basis of a principled approach for making decisions in complex contexts, such as the italian legal system, where the current constitutional separation of powers can lead to an inadequate and incorrect assessment of risks and therefore to ineffective emergency management by the different levels of government. in any case, scientific evidence is an essential prerequisite for better regulation when acting on the precautionary principle. to be cost effective, governments should take precautionary administrative measures based on scientific knowledge and thus carefully assess the risks they intend to manage. taking into account the potential and limits of the precautionary principle from the perspective we have outlined above might have an impact on governments' ability to deal effectively with pandemic emergencies. this matters in the case of italy, where the current structure of the constitutional separation of powers between the government, the regions and the autonomous local authorities plays a crucial role in effectively managing the pandemic emergency. 
analysing the italian constitutional scenario can provide substantial guidance for understanding the legal structure of powers and competences of government, regions and local authorities and explain why assessing pandemic risk can be impacted by a given separation of powers. such an analysis can shed light on the administrative strategy implemented by the government in the pandemic and enable us to evaluate its effectiveness in managing covid-19 across the country. first of all, we should bear in mind that the italian constitution (from now on the constitution) does not explicitly refer to emergency power, except for a state of war (article 78). however, this power has traditionally been included in the typical powers that the constitution assigns to the government. in the constitutional system, the main rules governing the government's powers are established by articles 76 and 77. indeed, parliament does not have a monopoly on legislative power, and the government may also issue laws by two legal instruments that should be understood as extraordinary: legislative decree and decree-law. in particular, article 76 allows parliament to delegate its legislative power to the government, which in turn is given the power to issue legislative decrees. hence, the legislative decree is a form of delegated law-making power, where parliament may pass an enabling act entrusting the government to adopt one or more acts that have legal force. generally, the legislative decree is a legislative tool that is often deployed in all matters where a strong technical content is present. the second extraordinary instrument, the decree-law, is provided for by article 77. this is a form of law-making through emergency powers that the government may exercise in "exceptional cases of necessity and urgency" and under "its own responsibility". 
59 the government can thus issue, without the enabling act from parliament required by the provisions of article 76, administrative measures that have the force of ordinary laws. however, such administrative measures will lose their effects as of the date of issue if parliament does not transpose them into an ordinary law within sixty days of their publication. 60 with the major reform on "administrative federalism" enacted by constitutional law no. 3 of 18 october 2001, which amended title v of the constitution, italy rapidly devolved legislative and regulatory powers to the regions. 61 fundamentally, the constitutional amendment provided a new framework for the distribution of powers and competences between the national and local levels. 62 it established a new institutional structure by dividing legislative and administrative competences and powers across the different levels of government. 63 the amended articles of the constitution are the basis for the fundamental reform of administrative federalism. article 114 recognises the local authorities (municipalities, provinces, metropolitan cities) and the regions as autonomous entities of the state with their own statutes, powers and functions in accordance with the principles laid down in the constitution. article 117 establishes the role and legislative powers of the state and the regions, indicating those matters for which the state has exclusive legislative power and those for which concurrent legislation of both the state and the regions is possible. the regions have exclusive power in all matters not expressly covered by state law. municipalities, provinces and metropolitan cities also have regulatory powers for the organisation and implementation of the functions attributed to them. specifically, article 117(3) establishes that the state and the regions have concurrent power, and the regions have regulatory powers, in matters of public health. 
64 in this connection, at the national level, parliament and the government are called upon to: (1) adopt fundamental health principles by means of framework laws and guidelines; and (2) establish the essential levels of healthcare. at the regional level, the regions implement: (1) general legislative and administrative activity; (2) the organisation of health facilities and services; and (3) the provision of healthcare based on specific local needs. article 118 provides for the subsidiarity principle, according to which all functions are exercised by municipalities, while the possibility remains to confer them on higher levels of government in order to guarantee the uniform implementation of spending functions across the country. article 120 guarantees national unity and the unitary nature of the constitutional system by providing for the government's substitution power. 65 according to article 120(2), the government can act for the regions and other local authorities if: (1) the latter fail to comply with international rules and treaties or eu legislation; (2) there is grave danger for public safety and security; or (3) such action is necessary to preserve legal or economic unity, and in particular to guarantee the basic level of benefits relating to civil and social entitlements, regardless of the geographical borders of local authorities. to this end, the law shall lay down the procedures to ensure that such subsidiary powers (ie the government's substitution power) are exercised in compliance with the principles of "subsidiarity" and "loyal cooperation". lastly, with regard to powers and competences in emergencies, it should be noted that in the italian legal system several authorities can introduce specific regulatory acts establishing the administrative measures needed to deal with emergencies in accordance with the constitution. 
the power of ordinance has a particular role in managing emergencies, as it can be exercised in situations of necessity and urgency. in particular, the legal system provides for: (1) 66 as we will see, the structure of power just described highlights the problem of risk assessment among the institutional actors involved in the administrative decision-making process. though the current system of allocation of powers and competences to the regions and other local authorities might be an advantage in terms of correctly assessing and managing risk in their areas, at the national level this system requires an effective sharing of powers and strategies between the centre and the periphery, where the measures of the regions and local authorities must be adopted in accordance with the measures advanced by the government, and vice versa. since correct risk assessment by an authority must take the characteristics of its area into account (data on the epidemiological situation, for example, or on the average age of the population, and the capacity of the health system with regard especially to the availability of intensive care beds), it might be assumed that in the italian legal system effective risk assessment could be facilitated by the specific competences established by the constitution for the regions and other local authorities in health matters. 65 the legal nature of the "state's substitution power" in italian legal doctrine has been extensively discussed. in particular, some scholars argue that art 120 provides a form of "administrative" substitution of the state over the regions, and that art 117(5) concerns "legislative" substitution. other scholars agree on the idea that art 120 provides the genus of substitution powers, whereas art 117(5) refers to one species of the genus, being a mere specification of art 120. however, the constitution seems clear on this point. as we have seen, the provisions of art 120 speak of the "government", while the provisions of art 117(5) speak of the "state". however, as i will argue, this is a theoretical advantage that works only if power is effectively shared between the different levels of government. in fact, in order to provide an adequate and correct risk assessment at the national level and take effective measures to contain and manage the pandemic, the current system needs powers and strategies to be shared between the local authorities, the regions and the government. sharing administrative powers at all levels of government is an important part of the task of states. 67 indeed, enhancing multi-level regulatory governance has become a priority in many eu states. for this reason, the eu supports the sharing of administrative regulatory powers by encouraging better regulation at all levels of government, calling on the member states to improve coordination and avoid overlapping responsibilities among regulatory authorities. 68 in italy, until the adoption of constitutional law 3/2001, regulatory reform had been promoted, designed and implemented mainly at the national level. with the reform, as we have seen (section iii.3), such a centralised approach lost legal and political ground. at the same time, responsibilities for developing and implementing administrative regulation policies have not been explicitly allocated to the state, the regions or the local authorities. hence, the responsibility for administrative regulation and regulatory reform lies with each of the levels of government in the matters where they exert legislative powers. in like manner, there is no overall competence at the central level to monitor and control regulatory reform programmes at the local level. 
accordingly, the new constitutional structure calls for effective sharing of administrative powers across the different levels of government. on the basis of the analysis carried out so far, i will now argue that the main problems of the italian administrative strategy for the covid-19 pandemic are due to the lack of effective sharing of administrative powers and, more specifically, to the failure to share regulatory powers across the different levels of government with the participation and cooperation of all of the institutional actors involved in the emergency decision-making process: the government, the regions and the local authorities. in particular, this problem has impacted the risk assessment of the various authorities called upon to manage the health emergency. as a result, the problem has impacted nationwide risk assessment and, consequently, the management of the emergency at the national level, leading to the adoption of inconsistent measures by the various institutional actors involved in the administrative decision-making process. in particular, i discuss this problem in italian policies from two key points of view: the government's administrative strategy for managing the virus's spread by means of the "incremental approach" (section iv.1.a) and the government's administrative strategy for implementing the nationwide pandemic health plan (section iv.1.b). in doing so, i shall take into account the considerations presented above concerning emergency risk regulation (section iii.1), the precautionary principle (section iii.2) and the rules governing powers in the constitutional scenario (section iii.3). 67 oecd, "the territorial impact of covid-19: managing the crisis across levels of government" (last updated 16 june 2020). 68 the european committee of the regions (cor), "division of powers between the european union, the member states and regional and local authorities" (december 2012). see also oecd-puma, "managing across levels of government" (1997). 
one of italy's main problems in relation to the ineffective sharing of administrative powers for managing the pandemic is clearly displayed in what i will call the "incremental approach". 69 this approach is essentially based on the "progressive" application of emergency measures by the government in order to manage the "exponential" spread of the virus. the italian administrative strategy for the pandemic is fundamentally founded on such an approach. in fact, as we have seen (section ii), the government addressed the pandemic by enacting several decrees (dpcms) that "progressively increased" restrictions in lockdown areas (red zones), which were then extended from time to time until they finally applied to the entire country in the national lockdown. in my opinion, although the incremental approach may be a correct application of the principle of proportionality, given the government's proportionate use of emergency powers in dealing with the pandemic, it is the result of an ineffective sharing of administrative regulatory powers between the government, regions and local authorities. indeed, the progressive enforcement of lockdown areas, which from time to time increased the extent and severity of the emergency measures, demonstrates the difficulty of governing the spread of the virus in the red zones rather than the effective implementation of a proportionate administrative strategy. and this is mainly due to the lack of effective cooperation between the government and the regions in exercising their respective emergency powers. from a general point of view, the incremental approach reveals the limited effectiveness of the national and local measures and strategies for managing and containing the pandemic when those measures and strategies are not shared. 
i argue that even the stringent national lockdown 70 is essentially the result of the ineffective sharing and planning of administrative measures and strategies for managing the pandemic across the different levels of government and especially, in this case, between the government and the regions. 69 on this approach, see g pisano, r sadum and m zanini, "lessons from italy's response to coronavirus" (harvard business law review, 27 march 2020). 70 dpcm of 9 march 2020. one can legitimately wonder whether the government can adopt an effective administrative strategy for managing the emergency without sharing and planning its measures with those of the regions. from this perspective, we can say that the government's incremental approach has proven ineffective in coping with the pandemic. i will now explain why in the following points. (1) regarding risk assessment for pandemics, the science 71 shows that the spread of covid-19 is rapid and exponential. consequently, the incremental approach does not work if it is not properly implemented with the effective participation of all of the institutional actors involved in managing the pandemic. scientific data 72 and statistics 73 on the spread of the virus were not predictive of what the situation would be in the short and medium term. hence, a correct risk assessment of the virus's nationwide spread would have suggested that the administrative measures and, more generally, the strategies should have been shared among all of the players involved in the main strategy. very often, however, the government's strategy has not been in line with those of the regions, revealing an inadequate assessment of the risk that the virus would spread throughout the country, and thus the ineffective sharing of emergency powers. in fact, some important emergency measures implemented by the regions clearly contradict the government's main strategy. to take a few examples, 74 marche region ordinance no. 
1 of 25 february 2020, issued pursuant to decree-law no. 6 of 2020, established measures that were more stringent than the government's, disregarding the latter's strategy. for this reason, the government contested the order before the court. 75 although a judgment in favour of the government was handed down and the challenged ordinance was suspended, the marche region legitimately adopted a new ordinance establishing emergency measures based on the same decree-law no. 6/2020, once again disregarding the government's strategy. another paradigmatic case is provided by a series of ordinances by the campania region aimed at imposing a more stringent lockdown at the local level than the lockdown established by the government at the national level. unlike the marche case, the ordinances of the campania region, although contested before the administrative judge, were not suspended, thus making the government's strategy ineffective. 76 consequently, in the absence of effective sharing and planning of the main strategy with the regions, the government had to 'increase' the emergency measures from time to time until finally imposing the stringent national lockdown. (2) in the absence of power sharing and strategies based on correct risk assessment at the national level, the government's incremental approach seems to have played a considerable role in people's behaviour, inducing them to make "bad choices". as the data show, 77 the government's incremental lockdown of municipalities, provinces and regions in northern italy induced masses of people to move towards the southern regions, spreading the virus to parts of italy that had not yet been affected. an emblematic case of this kind took place immediately after the dpcm of 8 march 2020 (see section ii) locked down lombardy and another fourteen provinces in northern italy, spurring thousands of people to flee to the south. 
such potential negative externalities, as well as other negative spill-overs or distortions, should have suggested that the government share its regulatory acts with those of the "target" regions (ie the northern regions), as well as with the other regions that could be indirectly jeopardised by the lockdown measures (ie the southern regions). alternatively, the government should have undertaken to coordinate the strategies of the regions and local authorities in order to enhance the adoption of effective control measures for people exiting the red zones and entering less affected regions. 78 more generally, in applying lockdown measures, the government should have shared and planned its strategy with the regions on the basis of a common risk assessment that took into account not only the regional territories, but the entire country. accordingly, the government should have established effective countermeasures together with all of the regions potentially involved in lockdown decisions to prevent the virus from spreading from high-risk to low-risk areas. an effective emergency response must be coordinated as a consistent system of actions taken simultaneously by the different actors involved in the decision-making process. (3) the government's incremental approach also revealed the problem of effectively sharing and planning precautionary measures (see section iii.2) across the different levels of government. the critical situation that arose because of the epidemic's severity called for effective testing of symptomatic and asymptomatic cases, as well as proactive tracing of potential positives across the country. on this point, these precautionary measures were supported by scientific data on the transmission of covid-19 by asymptomatic people. 79 the absence of a shared strategy for the adoption and implementation of precautionary measures proved particularly harmful in regions where the epidemic risk is higher. 
indeed, it is no coincidence that the outbreak spread so quickly in northern italy and especially in lombardy. in this region, the efficient public rail transport network 77 connecting urban areas, large numbers of commuters 80 and high levels of air pollution 81 are thought to have increased the incidence of infection. from this point of view, it is clear that risk assessment has been inadequate, and strategies have thus been ineffectively shared between lombardy and the government. the government should have promoted an effective precautionary strategy for health checks by sharing it with the strategies of the regions and ensuring efficient nationwide implementation on the basis of a global risk assessment. conversely, data on infections and deaths reveal that strategies were not shared effectively with the hardest-hit regions. (4) the incremental approach shows that most of the problems of administrative strategy are also motivated by political issues between the parties governing the regions and those belonging to the coalition now governing the country. from the time when the virus began to spread, the multi-level management of the emergency has triggered competition and institutional division between the government and regions 82 due to policymakers' political differences. the management of the pandemic, in fact, has thrown light on the deep political division between the government, led by the coalition of left-wing parties such as the democratic party and the five star movement, and the hardest-hit regions (lombardy and veneto), led by traditionally right-wing populist parties such as the league and brothers of italy. in particular, many of the administrative measures taken by the regions were in contrast with the government's strategy, largely for political reasons.
from this standpoint, it can be seen that there has been an "institutional clash" between the regional governments and the national government over the political and administrative actions to be taken to effectively manage the emergency. it is no coincidence that the government's minister of health is a member of one of the opposition parties in lombardy and veneto, and that the governors of lombardy and veneto belong to the coalition opposing the government. to give a few specific examples, a bitter dispute occurred between prime minister giuseppe conte and attilio fontana, governor of lombardy and member of the right-wing populist party league, with regard to the ineffective management of the emergency in the region most affected by the virus. similarly, as we have seen, luca ceriscioli, governor of the marche region and member of the centre-left party in the majority coalition, opposed the government's decision to declare a state of emergency only in the northern regions. 83 in essence, these strong political divisions have undermined effective power sharing among the different levels of government, causing problems for the government's incremental administrative strategy. (5) the incremental approach also shows the important role that scientific competence plays in emergency management. 84 in this regard, one of the main goals of scientific expertise is to inform and legitimise governments' decisions, especially in high-uncertainty situations relating to public health. during the covid-19 outbreak, scientific and technical experts have assisted central and regional governments by contributing to the content of decisions and, more generally, of administrative emergency management strategies. as scientific evidence is the basis for sound political choices, scientific and technical experts have become part of the rationale of governments' decisions and have been useful in reassuring the public with concrete solutions.
85 indeed, in the immediacy of a pandemic, as is logical to assume, the demand for scientific expertise increases as governments search for certainty in understanding problems and choosing effective measures for managing the emergency. especially in the most delicate phases of an emergency, scientific expertise is useful in informing, legitimising and justifying government evaluations and responses to problems, even as political and administrative considerations continue to govern such choices. the result is an increased reliance on scientific expertise and politicisation of scientific and technical information. 86 by invoking scientific expertise, policymakers create the need for what is perceived as evidence-based policymaking, which suggests to the public that political and administrative decisions are based on reasoned and informed judgments 87 aimed at ensuring the public interest and guaranteeing individual rights. however, a major problem is that scientific expertise might obscure the accountability of decisions. as scientific and technical experts serve to inform and legitimise political and administrative decisions, they may also obscure responsibility for policy responses and outcomes. 88 scientific expertise helps to establish the severity of a pandemic in a population, to understand the epidemiological trend over time and to evaluate the effects of political and administrative measures, from mitigation to suppression. nonetheless, undertaking policy actions is the responsibility of government leaders. as scientific expertise becomes more prominent in the policy process, who is accountable for policymaking becomes more obscure. 89 to work better in emergencies, scientific expertise also requires effective sharing of administrative powers based on accurate risk assessment, as i will now explain. 
in italy, since the beginning of the virus's spread, the various institutional actors, especially the government and the regions, have established their own scientific task forces to support administrative measures and strategies in managing the pandemic. the main problem is that, by doing so, risk assessment at the national level is fragmented. conflicts can also arise between institutional actors involved in the decision-making process. in this scenario, indeed, the government and the regions have adopted administrative decisions and strategies based on the risk assessments provided by their own central and regional task forces. it should be noted that this situation, like others discussed here, derives from the current constitutional architecture of separation of powers, where the decision-making process is assigned to the different levels of government. however, managing a pandemic requires a comprehensive risk assessment. the italian policies matter, as they show how, at the beginning of the pandemic, some regions' task forces underestimated covid-19, while others gave it due importance. this behaviour on the part of policymakers was not led by the government, which, on the contrary, criticised the regional governments' solutions. the outcome, as i claimed for the incremental approach, is that the government's measures and strategies are not shared with those of the regions and vice versa, and policymakers' accountability is obscured by invoking scientific expertise for pandemic management decisions.

b. implementing the national pandemic health plan

there is no doubt that a pandemic affects the whole of society. no single organisation can effectively prepare for a pandemic in isolation, and uncoordinated preparedness of interdependent public organisations will reduce the ability of the health sector to respond. 90 a comprehensive, shared, coordinated, whole-of-government approach to pandemic preparedness is required.
91 the government's strategy, as we have seen in the incremental approach to dealing with the emergency, proved particularly ineffective due to the failure to share administrative powers with the other institutional actors involved in the pandemic decision-making process, particularly the regions. but this, as we shall see now, was not the only weak point. i will argue here that another of the major problems was the lack of effective implementation of the national pandemic health plan. in particular, we will see how and why the ineffective implementation of the plan by the government, regions and local authorities posed serious problems for containing the spread of the virus and, more specifically, for avoiding the collapse of the public healthcare system. on this point, one of the main problems for public health posed by the novel coronavirus is its ability to spread with exceptional ease and speed, 92 threatening to overwhelm the healthcare system. in particular, what should be especially clear from the data is the critical situation of the intensive care system in italy, 93 which has been severely weakened by the pandemic. 94 the government should have managed the intensive care system at the national level, cooperating with the regions and local authorities to ensure that critical care bed availability is efficiently managed. in this case, effective actions shared among all institutional actors and based on an adequate and accurate risk assessment at the national level would avoid saturating the intensive care system in the medium and long term, while the government should be able to increase capacity in the short term. yet, the data on the intensive care system show that the situation was inefficiently managed in the regions hardest hit by covid-19, especially in lombardy, which paid a high price at the local level for the ineffective implementation of the pandemic health plan at the national level.
more generally, it should be emphasised that this point also demonstrates the importance of sharing administrative powers between government, regions and local authorities to implement the pandemic management plan effectively throughout the country. in this connection, many elements based on scientific and epidemiological data demonstrate that the covid-19 pandemic called for effective cooperation and coordination across all levels of government. in addition, it must be borne in mind that fighting a pandemic hinges on many factors, most of which are time consuming or in any case cannot be accomplished quickly. preparing a candidate vaccine, for example, takes a long time in terms of both preclinical and clinical development. likewise, developing and testing an effective drug involves complex multi-stage clinical trials. such considerations might be sufficient on their own to justify taking effective actions to mitigate the pandemic emergency's impact on the public healthcare system. in this phase, as we have seen, emergency risk regulation requires that regulatory action be taken in the immediacy of an emergency in order to mitigate its impact (section iii.1). to avoid the collapse of the public health system, the government should thus have contained the spread of the virus by effectively implementing the nationwide pandemic management plan with the participation of all institutional actors. the who has recognised the importance of sharing administrative powers through the participation and cooperation of the various institutional actors involved in the strategy against pandemics. in this regard, the who has drawn up specific guidelines 95 for implementing a pandemic influenza preparedness plan 96 that states should apply in order to manage the spread of the virus throughout their territories. 
in particular, the who's guidelines encourage states to develop efficient plans, based on national risk assessments, with the effective participation of institutional actors at all levels of government. in italy, the most serious problem is that the government, although it had already developed its own national plan, 97 did not foster its effective adoption by the regions and local authorities, disregarding a crucial point of the who's guidelines. consequently, the failure to implement the national pandemic plan, as we have seen, created the conditions for the collapse of the public health system, with the overcrowding of intensive care units and the consequent loss of life.

***

in conclusion, the italian policies regarding the covid-19 outbreak demonstrate the importance of: (1) rethinking the incremental approach; and (2) implementing a national health plan for pandemics by sharing powers, and more specifically the administrative regulatory powers for emergencies based on an adequate and accurate risk assessment at the national level, among the different levels of government with the participation, cooperation and coordination of all institutional actors involved in the pandemic decision-making process. as we have seen, sharing administrative powers at the different levels of government plays a particularly important role in managing emergencies in the constitutional scenario, where competences are distributed between government, regions and local authorities, and several institutional actors are allowed to adopt regulatory acts (see section iii.3). the major changes that the constitutional amendments have brought to policymaking in the italian legal system require that constant support be provided to the regions and local authorities, especially in emergencies.
despite significant decentralisation, the government still has a fundamental role to play in sharing and coordinating administrative powers at the different levels of government and in ensuring loyal cooperation among all of the institutional actors involved in emergency decision-making processes. indeed, the government is tasked with promoting and coordinating "action with the regions" (article 5 of law 400/1988), as well as with advancing cooperation "between the state, regions and local authorities" (article 4 of legislative decree 303/1999). 99 similarly, the government must promote "the necessary actions for the development of relations between the state, regions and local authorities" and ensure the "consistent and coordinated exercise of the powers and remedies provided for cases of inaction and negligence" (article 4 of legislative decree 303/1999). looking at the constitutional perspective, some possible solutions might be proposed. (1) in the italian constitutional scenario, although concurrent power to legislate on matters of public health is vested in the state (ie the government) and the regions pursuant to article 117(3), the state (ie the government and the regions together), on the basis of the principle established by article 32(1), "safeguards health as a fundamental right of the individual and as a collective interest". i argue, more specifically, that safeguarding health is a task of the state based on the fundamental principle of the constitution referred to in article 3(2), where the duty of the state is to "remove those obstacles of an economic or social nature" that, by constraining the "freedom and equality of citizens", impede the "full development of the human person and the effective participation of all workers in the political, economic, and social organisation of the country".
thus, i believe that under the joint interpretation of article 3 (2) and article 32 of the constitution, as well as the principle of loyal cooperation, the government and the regions must act by sharing administrative powers (and strategies) among them in order to protect the fundamental right to health. in so doing, the government can play an essential role in promoting institutional balance and cooperation between the national and local levels, maximising loyal cooperation and implementing vertical and horizontal subsidiarity. (2) sharing administrative powers for emergencies can also be encouraged and enhanced through the effective implementation of constitutional tools, such as the system of conferences based on the principle of loyal cooperation. (a) the conference on the relationships between government, the regions and the self-governing provinces is the key legal tool for multi-level political negotiation and collaboration. it serves in an advisory, normative and planning capacity and acts as a platform facilitating power sharing. (b) the conference on the relationships between government and the municipalities coordinates relations between the government and local authorities through studies, information and discussion of issues affecting local authorities. (c) the permanent conference on the relationships between government, the regions and the municipalities deals with areas of shared competence. 100 (3) in order to "safeguard health as a fundamental right of the individual and as a collective interest", article 120(2) of the constitution could be applied whenever it is necessary to guarantee "the national unity and the unitary nature of the constitutional system". i claim that this provision, which establishes the government's administrative substitution power, provides for the centralisation of administrative powers in specific cases contemplated by the constitution. 
in this sense, article 120(2) lays down that the government can act for the regions and/or local authorities in cases of "grave danger for public safety and security". in the light of this definition, the government's substitution for the regions and/or local authorities might be invoked as a result of the "grave danger for public safety", as well as in order to preserve "economic unity" and guarantee the "basic level of benefits relating to civil and social entitlements". in my view, however, the government should exercise its power of substitution as an extrema ratio whenever effective sharing among all of the institutional actors has not been implemented. article 120(2) is clear in this regard, requiring that the substitution power be exercised in compliance with the principles of "subsidiarity" and "loyal cooperation" (notably, italy's national pandemic plan was itself adopted through the permanent conference on the relationships between central government, the regions, municipalities and other local authorities). in this article, i have underlined the importance of sharing "administrative powers", and more specifically the administrative regulatory powers for emergencies based on an adequate and accurate risk assessment, across the different levels of government with the participation, cooperation and coordination of all institutional actors involved in the emergency decision-making process: the government, regions and local authorities. fundamentally, i emphasised that the italian case reveals the importance of sharing administrative powers from two main points of view. first, i argued that the "incremental approach" to dealing with the emergency, although based on the proportionate use of powers, is largely ineffective or even harmful in the absence of cooperation among all actors (the regions and local authorities) involved in the main strategy implemented by the government (section iv.1.a).
second, i discussed the importance of cooperation between the government, regions and local authorities for the effective and efficient implementation of a nationwide pandemic health plan (section iv.1.b). i suggested that these points be viewed from a constitutional perspective in order to propose some possible solutions. from this perspective, the problems of effective sharing of administrative powers across the different levels of government could be resolved by systematically interpreting the constitution and implementing specific constitutional tools provided by the legal system (section iv.2). in conclusion, more generally, i argue that, and this is the main thrust of the article, administrative powers should be shared across the different levels of government based on an adequate and accurate risk assessment, with the participation and cooperation of all of the institutional actors involved in the emergency decision-making process, in order to safeguard the fundamental rights enshrined in the constitution as well as in eu and international law. in pandemics, this aim must be achieved not only to guarantee the right to health, but also to safeguard all of the rights that might be jeopardised by the exercise of administrative powers and, more specifically, the exercise of emergency powers in dealing with the pandemic. the strong measure of "lockdown", for example, should be the extrema ratio of administrative powers because it suspends the rule of law and jeopardises rights. indeed, as i have claimed in analysing the italian policies, sharing powers with effective cooperation between government, regions and local authorities in managing the pandemic would optimise the adoption of nationwide virus containment measures, avoiding or at least delaying the application of stringent emergency measures such as the lockdown of municipalities, provinces, regions or even the entire country.
taking into consideration the correct application of emergency risk regulation (section iii.1) and the precautionary principle (section iii.2), although lockdowns aim to contain the specific areas that are most affected by the virus, they must be proportional to the risk that they intend to curtail. when such measures are adopted to protect the right to health, as is the case in a pandemic, this right must be balanced with other rights. yet, if administrative powers are not shared effectively across the different levels of government, the balancing principle might be disregarded by jeopardising one or more rights without legitimate justification (eg the right to freedom of movement enshrined in article 16 of the constitution). this is the problem that the italian policies bring to light: a problem that i believe the government must take into account in the near future as it strives to manage covid-19 and other similar pandemics.