key: cord-126419-u61qc8ey authors: qi, chong; karlsson, daniel; sallmen, karl; wyss, ramon title: model studies on the covid-19 pandemic in sweden date: 2020-04-03 journal: nan doi: nan sha: doc_id: 126419 cord_uid: u61qc8ey we study the increases of infections and deaths in sweden caused by covid-19 with several different models: firstly an analytical susceptible-infected (si) model and the standard susceptible-infected-recovered (sir) model. then, within the sir framework, we study the susceptible-infected-deceased (sid) correlations. all models reproduce well the number of infected cases and give similar predictions. what causes us deep concern is the large number of deaths projected by the si and sid models. our analysis shows that, irrespective of the possible uncertainty of our model prediction, the next few days can be critical for determining the future evolution of the death cases (updated april 02). the fast spread of covid-19 (nearly 1m infected cases till april 2nd, 2020) has caused wide concern. within the basic research community, quite a few mathematical and physical models have been proposed [1-4] to study the evolution of the infected cases, aiming to make reliable predictions and to help governments make proper strategic preparedness and response plans. we deem it of special importance to study the covid-19 spread in sweden where, unlike other countries, the government is taking a rather relaxed strategy, with no massive testing of suspected individuals and no strict lockdown in its most affected regions. we start by introducing the sir (susceptible-infected-recovered) model, which is widely used for virus spreading predictions [5, 6]. it consists of a system of three time-dependent variables: • infected cases (number of total infected individuals at a given time), i(t). • susceptible cases (number of individuals susceptible of contracting the infection), s(t). • recovered cases (cumulative number of recovered individuals), r(t). one has in total n = s(t) + i(t) + r(t). the above quantities satisfy the non-linear differential equations ds/dt = -λ s(t) i(t)/n, di/dt = λ s(t) i(t)/n - β i(t), dr/dt = β i(t), where λ is the transmission rate and β the recovery rate. they can be parametrized using known infection data. it should be straightforward to extend the model to include deceased cases in the form dd/dt = ν i(t) (2) and the exposed cases (individuals who are already infected but asymptomatic). if the recovery rate is very low during the pandemic time interval (as is indeed the case for covid-19 up to now), we can well approximate the infected cases by di/dt = λ i(t)[n - i(t)]/n, with initial value i(0) = 1. for such an si model the solution can be derived analytically, i(t) = n/[1 + (n - 1) e^(-λt)]. this function is known as the woods-saxon form in nuclear physics and is widely used to describe the nuclear potential and matter distribution. we can write it in a more general form by introducing one more parameter as i(t) = n/[1 + e^((t0 - t)/d)], where d describes the diffuseness. we apply the above formula to study the reported cases in sweden as a function of time in fig. 1. we firstly assume that the death cases follow the same woods-saxon form, where the three parameters can be fitted separately to reported data by taking into account the time interval between infection and death dates. the result is given in fig. 2. we should emphasize that the uncertainty in the model is very large due to the limited data available.
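a minimal python sketch may make the two models just described concrete: it integrates the sir system numerically and evaluates the woods-saxon (logistic) closed form of the si special case (β = 0). the population size and the rates are illustrative assumptions rather than values fitted to the swedish data, and the 1/n normalisation of the transmission term is likewise an assumption, since the text does not spell it out.

```python
# sketch of the sir system described above and the woods-saxon (logistic)
# closed form of the si special case (beta = 0).  all numbers are illustrative
# assumptions, not fits from this study.
import numpy as np
from scipy.integrate import solve_ivp

N = 100_000                 # assumed effective population size
lam, beta = 0.30, 0.01      # assumed transmission and recovery rates (per day)

def sir_rhs(t, y):
    """right-hand side: s' = -lam*s*i/n, i' = lam*s*i/n - beta*i, r' = beta*i."""
    s, i, r = y
    new_inf = lam * s * i / N
    return [-new_inf, new_inf - beta * i, beta * i]

t_eval = np.arange(0.0, 121.0)
sol = solve_ivp(sir_rhs, (0.0, 120.0), [N - 1.0, 1.0, 0.0], t_eval=t_eval)

def woods_saxon(t, n, t0, d):
    """general woods-saxon form i(t) = n / (1 + exp((t0 - t) / d))."""
    return n / (1.0 + np.exp((t0 - t) / d))

# with beta = 0 the si model di/dt = lam*i*(n - i)/n, i(0) = 1 has the closed
# form i(t) = n / (1 + (n - 1)*exp(-lam*t)), i.e. a woods-saxon curve with
# diffuseness d = 1/lam and midpoint t0 = log(n - 1)/lam.
i_si = woods_saxon(t_eval, N, np.log(N - 1.0) / lam, 1.0 / lam)

print("sir i(60) =", int(sol.y[1][60]))
print("si  i(60) =", int(i_si[60]))   # the si curve is the beta -> 0 limit of the sir system
```

fitting the general three-parameter woods-saxon form to reported counts (for example with scipy.optimize.curve_fit) is then a direct way to reproduce the kind of fit to the swedish cases described in the text.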
we were very optimistic when we first derived the curve from data two days earlier, which was very different and showed only modest increases, as can be seen in fig. 3. the above woods-saxon function seems to agree rather well with the data on reported covid-19 death cases from china, where the pandemic period may be expected to be over. it may, however, not be expected to work well within the late stage of the pandemic period, when the total infected cases show a saturation behavior. therefore we insert eq. (6) into eq. (3) to see if we can get a better estimate of d. the integral of i(t) is proportional to ln[1 + e^((t - t0)/d)]; therefore, we can propose a second form for the evolution of the deceased cases, d(t) = a ln[1 + e^((t - t0)/d)]. the result is given in fig. 4, where a modest increase is predicted. however, again, the data from the last two days show significant deviation from the predicted curve. new data from the next few days can be critical in pinning down the uncertainty in the predicted behavior. we now include the recovery cases in the above si model. there is no analytical solution, but the evolution of the sir quantities can be computed numerically. the result as of april 01 is given in fig. 5. what we can see from the above simulation is that the recovery rate will remain very low during the expected pandemic time interval. instead, we now include the reported death cases in the above sir model by replacing the quantity r with d. the result as of april 01 is given in fig. 6. the projected death cases are again very large, more than 3000. our simulations show that all the si, sir and sid models describe the reported infected cases well and predict a rather modest increase in the near future, which is very promising. however, the projected deceased cases in both the si and sid models are extremely high, even though the uncertainty is astonishingly large. we deem it urgent to explore the uncertainty of our model. the new data from the next few days can be critical for constraining the predicted curve for deceased cases. if the model is correct, one should worry that the infected cases in sweden may be much higher than is reported today, and massive testing of exposed and suspected cases may be urgent. estimating the number of infections and the impact of non-pharmaceutical interventions on covid-19 in 11 european countries; a contribution to the mathematical theory of epidemics; the mathematical theory of infectious diseases and its applications key: cord-029410-m19od0wj authors: scatti-regàs, aina; aguilar-ferrer, marta carmen; antón-pagarolas, andrés; martínez-gómez, xavier; gonzález-peris, sebastià title: clinical features and origin of cases of parotiditis in an emergency department date: 2020-07-19 journal: an pediatr (engl ed) doi: 10.1016/j.anpede.2019.11.007 sha: doc_id: 29410 cord_uid: m19od0wj nan to the editor: the mumps virus (muv), or myxovirus parotiditis, continues to cause sporadic cases and outbreaks of disease. this is associated with the progressive waning of immunity against the mumps component of the measles, mumps, rubella (mmr) vaccine in the absence of a natural booster (especially from 10 years after administration of the second dose), the use in the 1993-1999 period of a vaccine that had the rubini strain, which proved to be less effective, and the presence of pockets of unvaccinated people in the population. 1 in spain, 10 260 cases were notified in 2017 and 8996 in 2018, a significant increase compared to previous seasons.
2 some of the infectious agents other than muv that may be involved in parotitis as a general clinical presentation include influenza a virus, parainfluenza virus, epstein-barr virus (ebv), adenovirus, coxsackievirus, cytomegalovirus (cmv), parvovirus b19, herpesvirus and lymphocytic choriomeningitis virus, as well as gram-positive bacteria, atypical mycobacteria and bartonella species. 3---5 in the paediatric population, these pathogens are probably more frequent causative agents compared to muv. this, combined with the benign course of most presentations, leads many paediatric health care facilities to make the diagnosis without an aetiological investigation. the aim of our study was to establish the viruses involved in cases of parotitis in our area. we carried out a retrospective study through the collection of data corresponding to 2 full years (2016 and 2017), including all patients given a diagnosis of parotitis (with swelling of the parotid glands being a requirement for inclusion) in the paediatric emergency department of a tertiary ଝ please cite this article as: scatti-regàs a., aguilar-ferrer m.c., antón-pagarolas a., martínez-gómez x., gonzález-peris s. caracterización clínica y etiológica de los casos de parotiditis en un servicio de urgencias. an pediatr (barc). 2019. https://doi.org/10. 1016/j.anpedi.2019.11.004 care hospital in barcelona that manages patients up to age 16 years and based on diagnostic judgment of the paediatrician in charge of the patient. per hospital protocol, polymerase chain reaction (pcr) tests for detection of muv in saliva and urine samples were performed in patients with parotitis. serologic tests were added if blood tests were requested by the paediatrician in charge based on his or her clinical judgment. when it came to serologic testing, in case of negative results of the test for detection of muv in saliva, molecular methods were used for detection of influenza a and b virus, respiratory syncytial virus a/b, adenovirus, metapneumovirus, coronavirus nl63/oc43/229e, enterovirus, rhinovirus, parainfluenza virus, ebv and cmv. mump viruses were characterised by partial sequencing of the small hydrophobic (sh) gene. we identified 169 cases of symptomatic acute parotitis (0.21% or paediatric emergency visits). the median age of the patients was 7.7 years (range, 11 months-16.8 years). the rate of adherence to the protocol for the ordering of tests for aetiological diagnosis was 79.3%, so we were able to obtain data on testing of saliva samples from 134 patients. fig. 1 summarises the results of pcr testing of these samples. another 5 patients received an aetiological diagnosis of parotitis due to muv by serologic testing (positive igm test), adding up to a total of 18 cases caused by muv. the median age of patients with muv infection (in all cases muv genotype g) was 14.3 years (range, 18 months-16.8 years), with a predominance of the male sex (72.2%). in 3 cases (16.7%) there was no known history of contact with a case of parotitis. all patients were correctly vaccinated save for 2 children that had not received any dose of mmr by parental choice and 1 adolescent that had only received 1 dose of vaccine. there were no documented complications, except for 1 patient that developed guillain-barré syndrome with onset the week after the initial visit, who had a favourable outcome. the management of 19.1% of the patients included empiric antibiotherapy despite there being no evidence confirming bacterial infection. 
table 1 presents the demographic and clinical characteristics of cases of parotitis in which testing was performed for investigation of the aetiology. patients with muv infection were significantly older compared to children with a different aetiological agent (median age, 14.3 vs 6.5 years; p = .005). the findings in our study, despite the limitations intrinsic to its retrospective design, were consistent with those of other authors, and showed that a significant proportion of cases of parotitis in the paediatric age group may be caused by viruses other than muv (such as ebv, cmv and common respiratory viruses). 3---5 the high frequency of cases with negative results in all tests can be explained by the involvement of other viruses that were not included in the testing (such as human herpesvirus 6), technical factors affecting the yield of microbiological diagnosis and the potential presence of non-infectious parotitis cases, among others. viral coinfection was also frequent. lastly, we ought to underscore that muv continues to be a frequent cause of parotitis in our area (especially in older children), even in correctly vaccinated patients, and our findings confirmed that the causative virus continues to circulate in the community with a well-known pattern characterised by incidence peaks every 3---5 years. the aetiological diagnosis and notification of cases can alert the health care authorities of potential outbreaks at an early stage, allowing implementation of containment measures such as administration of a third dose of vaccine in selected patients. 6 waning immunity against mumps in vaccinated young adults non-mumps viral parotitis during the 2014-2015 influenza season in the united states letter to the editor: there is a need to consider all respiratory viruses in suspected mumps cases viral etiology of mumps-like illnesses in suspected mumps cases reported in catalonia effectiveness of a third dose of mmr vaccine for mumps outbreak control hospital infantil vall d'hebron, barcelona, spain b unidad de virus respiratorios, servicio de microbiología, hospital vall d'hebron key: cord-006328-0tpj38vb authors: dass hazarika, rashna; deka, nayan mani; khyriem, a. b.; lyngdoh, w. v.; barman, himesh; duwarah, sourabh gohain; jain, pankaj; borthakur, dibakar title: invasive meningococcal infection: analysis of 110 cases from a tertiary care centre in north east india date: 2012-07-22 journal: indian j pediatr doi: 10.1007/s12098-012-0855-0 sha: doc_id: 6328 cord_uid: 0tpj38vb objectives: to report an outbreak of invasive meningococcal disease from meghalaya, in the north east india, from january 2008 through june 2009. methods: retrospective review of case sheets was done. one hundred ten patients with invasive meningococcal disease were included for the study. results: of the total patients, 61.8 % were boys and 38.2 % were girls (boy to girl ratio = 1.62:1). the average age of presentation was 8.48 ± 5.09 y. meningococcal meningitis was seen in 61.8 % of cases, meningococcemia in 20 % and 18.2 % had both. fever was the most common manifestation (100 %) followed by meningeal signs (78.2 %), headache (56.4 %), vomiting (53.6 %), shock (38.2 %), low glasgow coma scale (gcs) (25.5 %), purpura and rashes (23.6 %), seizures (9.1 %), abdominal symptoms (4.5 %), irritability and excessive crying (4.5 %) and bulging anterior fontanalle (23 %) in those below 18 mo of age. 
raised intracranial pressure (icp) was the most common complication (28.2 %) followed by coagulopathy (16.4 %), hepatopathy (10 %), herpes labialis (9.1 %), syndrome of inappropriate adh secretion (siadh) (8 %), pneumonia (7 %), arthritis (6 %), purpura fulminans, respiratory failure, sixth nerve palsy and diabetes insipidus in 4.5 % each, subdural empyema, optic neuritis, ards and arf in 1.8 % each, cerebral salt wasting syndrome, third nerve palsy, cerebritis and hearing impairment in 0.9 % each. culture was positive in 35.5 %. patients were treated initially with ceftriaxone and dexamethasone but later on with chloramphenicol due to clinical drug resistance. mortality was 6.4 %. conclusions: this is the first epidemic report of invasive meningococcal disease from the north east india. chloramphenicol acts well in areas with penicillin or cephalosporin resistance. mortality reduces significantly with early diagnosis and prompt intervention. meningococcal disease is a global problem. it has a rapid onset with varied presentations and wide regional variation in disease pattern. the endemic disease is rare but epidemic form occurs commonly in many regions of the world especially described in the 'meningitis belt' in sub-saharan africa, parts of asia and also in india. meningococcal disease mostly affects children in the school going age and adults working in close contact such as in military barracks. the disease requires early and prompt antibiotic treatment and supportive therapy. the outcome of the disease depends on the time required to seek medical help i.e., the 'house to hospital time' and also on the rapidity of administration of the first antibiotic dose i.e., the 'door to needle time'. meghalaya situated at an altitude of 1,961 m above sea level has a predominantly rural tribal population. an epidemic of meningococcal disease occurred in this region during [2008] [2009] . the present study documents the occurrence of the disease in this part of the world, and also highlights the various clinical manifestations, laboratory findings and management outcome. this descriptive retrospective study over the period of the epidemic, january 2008 through june 2009 is being reported from the department of pediatric disciplines, nei-grihms, shillong. one hundred ten children diagnosed as either 'meningococcemia' or 'meningococcal meningitis' or 'meningococcemia with meningitis' during the study period were identified from discharge summaries and inpatient records. their charts were retrieved and reviewed thoroughly. at admission, blood and cerebrospinal fluid (csf) were sent to the laboratory immediately for culture and sensitivity testing, cytology and gram staining. complete blood count (cbc) and peripheral blood smear for malarial parasite, random blood sugar (rbs), liver function test (lft), coagulation profile, renal function test (rft), serum electrolytes and chest x-ray (cxr) were done on the day of admission in all patients and repeated periodically if necessary. mri brain was done when clinically indicated. the cases of meningococcal meningitis and meningococcemia in the present case series were labelled as probable meningococcal meningitis, confirmed meningococcal meningitis, probable meningococcaemia and confirmed meningococcaemia as per standard guidelines [1] . patients were treated for the first 6 mo with injection ceftriaxone but later with parenteral chloramphenicol due to observation of clinical drug resistance in the form of delayed or no response to ceftriaxone in 48-72 h. 
antibiotics were administered for a minimum of 7 d in all patients along with supportive care and monitoring. injection dexamethasone was used in all cases with meningitis for 2 d. shock was treated with normal saline and inotropes (dopamine and dobutamine), whenever indicated and hydrocortisone. a total of 110 children were diagnosed as having either 'meningococcemia' or 'meningococcal meningitis' or 'meningococcemia with meningitis'. the demographic profile and clinical presentations are outlined in table 1 . among these cases, 61.8 % were boys and 38.2 % were girls (boys:girls01.62:1). the mean age of presentation was 8.48 ± 5.09 y (4 mo-18 y). fever was the most common symptom (100 %) followed by headache (56.4 %), vomiting (53.6 %), altered sensorium (25.5 %), purpura and rashes (23.6 %), seizures (9.1 %), abdominal symptoms (4.5 %), irritability and excessive crying (4.5 %). meningeal signs were present in 86 cases (78.2 %) and bulging anterior fontanalle in 3 out of 13 cases (23 %) below the age of 18 mo. shock was seen in 42 cases (38.2 %) (29 compensated and 13 decompensated). the average number of isotonic saline boluses required was 40 ml/kg (range: 20 ml/kg to 100 ml/kg). fifteen cases (13.6 %) required inotropic support and hydrocortisone singly or in combination. the average duration of inotropic support was 24-72 h. the laboratory investigations of all the cases are summarized in table1. culture (either blood or csf) was positive in 39 cases (35.5 %) (csf: 29, blood: 13). in three cases (2.7 %), growth was seen both in the blood and csf. gram negative diplococci in csf was seen in 27 cases; of which 8 cases were culture negative. all cases were identified as serogroup a and were susceptible to ceftriaxone and chloramphenicol by in-vitro antimicrobial testing. the mean blood leukocyte count was 16,071±14525/cumm. the csf cell count ranged from 5 to 60,000/cu mm and hypoglycorrhacia were seen in 67.3 % of the cases. ten percent of the cases had deranged lft and 16.4 % had coagulopathy. majority of the cases were seen in the months of december 2008 and january to march 2009 (fig. 1) . sixty nine percent of the cases were seen in children above 5 y of age (fig. 2) . meningococcal meningitis and meningococcemia were diagnosed in 68 cases (61.8 %) and 22 cases (20 %) respectively with a corresponding mortality of 2.9 % (2/68) and 18.2 % (4/22). twenty children (18.2 %) presented with both meningococcemia and meningitis with 1 death. there was no difference in mortality or morbidity between the culture positive or culture negative cases. of the 68 children with meningococcal meningitis, 44 had probable meningitis while 24 were confirmed. of the 22 children with meningococcemia, 15 had probable meningococcemia while 7 were confirmed. of the 20 children with meningococcemia and meningitis, 12 were probable while 8 were confirmed (fig. 3) . the important complications have been summarized in table 2 . raised icp was the most common (28.2 %) and was diagnosed clinically by the presence of bulging anterior fontanelle, bradycardia/tachycardia, papilledema and hypertension. herpes labialis was observed in 9.1 % of cases. three important metabolic complications of meningococcal infection observed in the present case series were siadh (8 cases, 7.3 %), diabetes insipidus (5 cases, 4.5 %) and cerebral salt wasting syndrome (1 case, 0.9 %). all the cases with diabetes insipidus and cerebral salt wasting syndrome expired. 
meningococcal purpura fulminans were seen in 5 cases (4.5 %) whereas 6 cases (5.5 %) developed arthritis, and 2 cases each had subdural empyema and optic neuritis. mortality was 6.4 %. epidemic meningococcal disease was first described by vieusseaux in 1805 from switzerland [2] . meningococcal infections are commonly found in developing countries such as in the african meningitis belt and occasionally in developed countries like the united states. serogroup a is more prevalent in developing countries whereas, in the developed countries the disease is mostly caused by serogroup b and c [3] . in india, meningococcal disease is endemic in delhi with sporadic cases reported in the past [1] . isolated cases of meningococcal meningitis were also reported from several states of india involving haryana, uttar pradesh, rajasthan, sikkim, gujarat, jammu & kashmir, west bengal, chandigarh, kerala and orissa in 1985 [4] . most of these outbreaks have been caused by serogroup a [5] . n. meningitidis was the dominant pathogen isolated in surat between 1985 and 87 [6] . in early 2005, spurt of cases of neiserria meningococcemia and meningitis due to serogroup a have been reported from delhi and adjoining areas [7] . no previous reports exist from north east india. approximately 69 % were above 5 y of age. maximum cases reported were below 1-2 y of age from usa for endemic disease [8] . in epidemic outbreaks a shift to higher age occurs [9] . in sudan, 58 % were above 5 y in a group a -n. meningitidis outbreak [10] . in ghana however the peak incidence was found in 10-14 y old children [11] . neonatal meningococcal meningitis is rare and there was no case of neonatal meningitis in the present study. meningococcal infection is characteristically fulminant presenting with fever, severe headache, vomiting, neck stiffness, positive meningeal signs, photophobia, drowsiness and confusion. deterioration and death can occur in hours. the disease spectrum usually ranges from meningococcal meningitis to meningococcemia. meningitis may or may not be present with rash. seizures occur in 40 % of cases. meningococcemia is more abrupt presenting with chills, nausea, vomiting, myalgias and the classical purpuric or petechial rash with or without bullae formation. absence of meningitis is a poor prognostic factor. septicaemia was found in 20 % cases. urmila et al. from delhi reported that 67 % children had meningococcal meningitis, 20 % had meningococcemia and 13 % had both with mortality of 4.5 %, 25 % and 69 %, respectively [12] . this is similar to findings in the present study. shock was the presenting symptom in 38 % of the index cases. of these, 69 % had compensated and 31 % had decompensated shock compared to 26 % in other reports [12] . shock is endotoxin mediated and due to factors such as widespread capillary leak, loss of vasomotor tone and maldistribution of intravascular volume, impaired myocardial function and impaired cellular function. early recognition of shock is crucial for early intervention and improved outcome [13] . tachycardia may be the only sign present in the early phase of the disease and is enough to mandate fluid resuscitation. circulatory management aims to maintain tissue perfusion and oxygenation. repeated fluid boluses with 20 ml/kg of isotonic saline are to be given initially till shock resolves. in case shock persists after 60 ml/kg of fluid, central venous pressure (cvp) line is inserted and fluid resuscitation continued with addition of dopamine and/or dobutamine. 
some children require as high as 100-200 ml/kg of fluid resuscitation but such patients also require mechanical ventilation. about 13.6 % of the index cases required inotropic support either alone or in combination for an average duration of 24-72 h. some studies have shown that 4.5 % albumin is more useful as a resuscitating fluid [14] . albumin is routinely used in the uk with significant reduction in mortality in the last 20 y (decrease up to 2 %) in patients with meningococcal disease and albumin use may play a role along with other factors [15] . the authors do not have any personal experience of using albumin. survival rate reaches 94 % when shock is reversed within 75 min of presentation [13] . rash was observed in 23.6 %, while this sign ranged from 7.3 % to 100 % in other studies [16, 17] . meningococcal purpura fulminans is a hemorrhagic condition associated with meningococcal septicemia with features of hypotension, disseminated intravascular coagulation (dic), and purpura leading to tissue necrosis and small vessel thrombosis. in the present study, 5 cases (4.5 %) presented with purpura fulminans and of them 3 died. schaad ub [18] has described arthritis in 10 % of patients with meningococcal disease. in the present study, 5.5 % presented with arthritis involving big joints. arthritis may occur early in the disease due to direct bacterial seeding of the joints or in the sub-acute or convalescent phase of the illness secondary to immune-complex reactions. treatment of bacterial arthritis consists of analgesics, antibiotics and drainage of joint fluid if needed. immune complex reactions are usually treated with non-steroidal anti-inflammatory drugs or steroids. some may require intravenous immunoglobin [19] . reactivation of latent herpes simplex virus infections (primarily herpes labialis) is common during meningococcal infection as observed in the present study with good response to local acyclovir. coagulopathy is frequent and multifactorial, and was seen in 16.4 % of the present cases. mild clotting abnormalities are well tolerated. in severe cases fresh frozen plasma (ffp) is recommended. the authors have used intravenous vitamin k and if required ffp with good results. currently the best treatment for meningococcal related coagulopathy is the optimal management of shock. dodge and swartz [20] reported seizures in 10 % in the acute stage of the disease, focal cerebral signs in 10 %, and 15 of 39 patients had cranial nerve involvement early in the course of disease. in the present study, 9.1 % of the cases presented with seizures in the acute stage and cranial nerve involvement was present in 7.3 % cases. siadh was detected in 4 of 39 patients by dodge and swartz [20] . in the present study, siadh was found in 8 cases (7.3 %) and was managed with fluid restriction and low dose diuretic (furosemide) therapy. five cases (4.5 %) had diabetes insipidus (di), requiring aggressive management with hypotonic fluids, vasopressin and mechanical ventilation and one had cerebral salt wasting (csw). all the index patients with di and csw had 100 % mortality. although pollard rb [21] has reported that deafness has not been a common complication of meningococcal meningitis in the antibiotic era, there was one case with bilateral sensorineural hearing defect in the present study. pneumonia, epiglotitis and otitis media can occur. pneumonia is seen in 5 to 15 % of invasive meningococcal disease cases, particularly with serogroups y and w-135 [22] . 
in the present study, pneumonia was present in 6.4 % cases. recovery may be complicated by ards, anuria and multi organ failure. in some cases ards develops within a few hours after admission and in the present study 2 cases each developed ards and arf. in a study from punjab (ludhiana), 56.5 % were culture positive and all isolates were sensitive to most of the common antibiotics [23] . urmila j et al reported 26 positive cultures (13/98 blood cultures and 13/89 csf cultures) [12] . low rate of culture positivity in the present study (35.5 %) may be due to prior use of antibiotics outside or delay in transporting the specimen. antibiotic therapy remains the cornerstone of therapy in meningococcal disease. three factors that influence the success of antibiotic therapy are timing of the antibiotic, tissue penetration and antibiotic resistance. broad spectrum antibiotics like penicillin g, ceftriaxone and cefotaxime remain widely used. increasing resistance to penicillin is being reported and ceftriaxone remains the recommended first line therapy in the present scenario. however in the authors' experience they had patients with good response to ceftriaxone in the beginning of the epidemic. after about 6 mo of the epidemic, there was poor clinical response to ceftriaxone and the unit antibiotic policy was revised to intravenous chloramphenicol for 7 d with good response. they now routinely use parenteral chloramphenicol as the first line therapy in meningococcal disease. there are other reports of ciprofloxacin as well as ceftriaxone resistance from india [24, 25] . the second line therapy consists of vancomycin and azithromycin. the nice guidelines recommend dexamethasone therapy for suspected or confirmed bacterial meningitis above 3 mo of age [26] . the authors used injection dexamethasone in all meningococcal meningitis cases for 2 d. steroids are not indicated in meningococcal shock unless there is suspicion of hypoadrenalism. overall fatality rate of invasive meningococcal infection is 5-16 %, although these rates are difficult to assess as some studies only take into account meningococcal meningitis, while others reflect overall fatality from meningococcal disease [27, 28] . reported mortality from meningococcemia ranges from 18 % to 35 % [29] . for overall invasive meningococcal infection, the fatality rate in the present study was low (6.4 %). for meningococcemia, fatality rate in the present study was 18.2 % which is similar to other studies [29] . low mortality in the present study can be explained by the fact that patients reached the hospital fast due to good information, education and communication activities by the local health authorities, combined with a low threshold for diagnosis and aggressive management of shock, rapidity of administration of the first antibiotic dose (door to needle time) and continuous monitoring in a well equipped pediatric intensive care unit. this is the first epidemic report of invasive meningococcal disease from north east india. although the majority of patients had meningitis, the full range of manifestations were also seen. this study highlights that clinical resistance to commonly used antibiotics such as ceftriaxone can be seen where chloramphenicol is an alternative effective choice. mortality reduces significantly with early diagnosis and prompt interventions like early shock management, antibiotic therapy and frequent monitoring in an intensive care set up. 
although invasive meningococcal infection did not have much impact on the morbidity and mortality of children from this region compared to other parts of the world, it remains one of the major causes of life threatening infections requiring continuous vigilance. contributions rd conceived the idea of the study and approved the final manuscript and will act as guarantee of the paper; nmd, hb, sgd, pj and db were involved in data retrieval, analysis and writing of the paper; abk and wvl were involved in the laboratory diagnosis and analysis of the microbiological data. meningococcal disease, need to remain alert. cd alert mémoire sur la maladie qui a regné a genêve au printemps de 1804 meningococcal meningitis outbreak control strategies meningococcal meningitis in delhi and other areas group b meningococcal meningitis in india meningococcal meningitis in an industrial area adjoining surat citysome clinic-epidemiological aspects meningococcal disease: history, epidemiology, pathogenesis, clinical manifestations, diagnosis, antimicrobial susceptibility and prevention multicenter surveillance of invasive meningococcal infections in children update on meningococcal disease with emphasis on pathogenesis and clinical management clinical features and complications of epidemic group a meningococcal disease in sudanese children meningococcal meningitis in northern ghana: epidemiology and control measures clinical profile of group a meningococcal outbreak in delhi early reversal of pediatric-neonatal septic shock by community physicians is associated with improved outcome albumin: saint or sinner treatment of meningococcal infection meningococcal disease among children who live in large metropolitan area review of management of purpura fulminans and two case reports arthritis in disease due to neisseria meningitides immune complex reaction after successful treatment of meningococcal disease: an excellent response to ivig bacterial meningitis-a review of selected aspects. ii. special neurologic problems, post meningitic complications and clinicopathological correlations early bilateral eight nerve involvement in meningoccal meningitis connecticut, and selected areas meningococcal meningitis in ludhiana emergence of non-ceftriaxonesusceptible neisseria meningitidis in india ciprofloxacin-resistant neisseria meningitidis management of bacterial meningitis and meningococcal septicaemia in children and young people: summary of nice guidance epidemiology of bacterial meningitis meningococcal infection in children: a review of 100 cases prognostic factors in acute meningococcaemia role of funding source none. key: cord-144860-a4i9vnjz authors: nason, guy p. title: rapidly evaluating lockdown strategies using spectral analysis: the cycles behind new daily covid-19 cases and what happens after lockdown date: 2020-04-16 journal: nan doi: nan sha: doc_id: 144860 cord_uid: a4i9vnjz spectral analysis characterises oscillatory time series behaviours such as cycles, but accurate estimation requires reasonable numbers of observations. current covid-19 time series for many countries are short: preand post-lockdown series are shorter still. accurate estimation of potentially interesting cycles within such series seems beyond reach. we solve the problem of obtaining accurate estimates from short time series by using recent bayesian spectral fusion methods. 
here we show that transformed new daily covid-19 cases for many countries generally contain three cycles operating at wavelengths of around 2.7, 4.1 and 6.7 days (weekly). we show that the shorter cycles are suppressed after lockdown. the preand post lockdown differences suggest that the weekly effect is at least partly due to non-epidemic factors, whereas the two shorter cycles seem intrinsic to the epidemic. unconstrained, new cases grow exponentially, but the internal cyclic structure causes periodic falls in cases. this suggests that lockdown success might only be indicated by four or more daily falls in cases. spectral learning for epidemic time series contributes to the understanding of the epidemic process, helping evaluate interventions and assists with forecasting. spectral fusion is a general technique that is able to fuse spectra recorded at different sampling rates, which can be applied to a wide range of time series from many disciplines. what the impact of measures that came in on march 23 will be". the measures that professor mclean referred to were the widespread uk social distancing and lockdown interventions made in the face of the covid-19 threat. at the time of writing, few countries have experienced in excess of 70 days of covid-19 cases and most only have around 50 days. professor mclean is correct in that many scientific inferences require longer time series than those currently available. however, we show that there are considerable and useful similarities in the underlying cyclic (spectral) behaviours of the numbers of new daily covid-19 cases for a range of different countries (see extended data figures). we use recent bayesian spectral fusion methods [3] (regspec) to pool spectral information across countries, which provides significantly more accurate estimates of cyclic behaviour than provided by a typical spectral analysis of a single country alone. the bayesian principles underlying our fusion method handle mean that uncertainty is treated coherently, producing rational uncertainy assessment for our cycle (spectral) estimates. our methods produce cycle estimates using the equivalent of over nine hundred daily observations, compared to the fifty or so that a typical standard spectral analysis might use. using data [2] from all of the countries we considered, our results show that transformed new daily covid-19 cases have three underlying cycles: one operating at a wavelength of 2.7 days, a second at 4.1 days and a third at 6.7 days, which we take to be a weekly effect. we conducted separate analyses for the uk and groups of countries with similar spectra and note some variation in those cycles. for some purposes it is not reasonable to compare or pool the number of new daily cases from one country to another [14] . for example, different countries might use different definitions of the number of daily cases and they record cases through different national structures and this is even the case for countries with political, geographical or cultural similarities. however, as long as the method of recording cases is broadly unchanged over the period in question for a particular country, the spectral properties across countries are comparable. the transformed cases' spectrum quantifies the internal oscillatory structure within the series and is largely unaffected by the overall level of cases, the different start times of epidemics in different countries (phase) and country-specific internal delays due to reporting requirements (also phase). 
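the invariance claims above are easy to check numerically: the periodogram depends only on the squared modulus of the discrete fourier transform, so a pure time shift (a phase change, such as a fixed reporting delay) leaves it unchanged. a minimal sketch with synthetic data and an ordinary periodogram, not the regspec estimator used in the paper:

```python
# minimal check that a pure time shift (phase) leaves the spectrum unchanged:
# the periodogram depends on |dft|^2, so delays that only shift the series in
# time do not move the spectral peaks.  synthetic data only.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(5)
t = np.arange(256)
cycle = np.cos(2 * np.pi * t / 7) + 0.5 * np.cos(2 * np.pi * t / 2.7)
x = cycle + 0.2 * rng.standard_normal(t.size)

shift = 5                          # e.g. an assumed five-day reporting delay
x_shifted = np.roll(x, shift)      # circular shift, so lengths stay equal

f, p = periodogram(x, fs=1.0)
_, p_shift = periodogram(x_shifted, fs=1.0)

print("peak wavelength, original:", 1.0 / f[np.argmax(p[1:]) + 1], "days")
print("peak wavelength, shifted :", 1.0 / f[np.argmax(p_shift[1:]) + 1], "days")
print("max spectral difference  :", np.max(np.abs(p - p_shift)))
```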
in addition, the demonstration of the presence three consistent cycles across all countries, with some variation, provides supporting evidence for the suitability of the transformed new daily cases as a target of analysis, and comparisons between and across countries, another topic of great current interest is to ascertain whether and how a lockdown will influence the number of new daily covid-19 cases. we consider this question for the group consisting of the uk, italy, france, germany, spain, switzerland, belgium and the netherlands. the number of days (with cases) be-fore lockdown is, on average, 22 for this group of countries, and, after lockdown, is 26 (except the uk, which started its lockdown later). the averages just quoted include allowance for a seven day incubation period. our analysis compares the spectral properties before and after lockdown. a spectrum based on about 25 days worth of data would provide a very poor and highly uncertain estimate. however, our spectral fusion methods [3] permit effective sample sizes for the group of 192 days worth of data prior to the lockdown, and 196 after, resulting in highly accurate spectral estimates for these periods. we learn that, after lockdown, the weekly cycle remains strong, but the cycles operating around 2.7 and 4.1 days become suppressed. this indicates that the weekly cycle is due, at least in part, to administrative recording effects, which are not effected by the lockdown, whereas the 2.7 and 4.1 day cycles might be related to virus dynamics, which is certainly affected by lockdown. the discovery of how the high-frequency cycles are disrupted by full lockdown suggests that they could be monitored during partial lockdowns. for example, if schools are reopened and the 2.7 and 4.1 day cycles do not reappear, then this might indicate the effectiveness of that strategy. given the similarity of the cycles across countries, this indicates that cases could be monitored and pooled across regions, over a short number of days to be fused into longer effective samples using the methods described here. a more difficult problem is that of forecasting transformed new daily covid-19 cases. such information would be of great interest, e.g., to those planning health provision over a short timescale. knowledge of the three cycles is helpful and we have had moderate success in forecasting daily cases. however, with individual country series, with smaller number of days, it is unrealistic to expect too much and, in particular, the transformed cycles experience both a degree of time-modulation and possible frequency changes. more useful perhaps, are not daily forecasts, but the knowledge that the number of cases will increase and decrease over a period of three/four days. this means that if one observes a decrease in the number of daily covid-19 cases after lockdown, that does not necessarily mean the peak has been reached, but is simply a manifestation of the 3/4 day cycles. hence, one might believe a lockdown strategy has been successful after a sustained decrease of at least four days. spectral analysis [1, 5] of epidemics is not new, but most work has been carried out on epidemics observed over long time periods (seasons and years) using lengthy time series [6, 7, 8] . recent work [9] on covid-19 has applied popular autoregressive integrated moving average process [10, 1] models to a single prevalence time series with a sample size of n = 22. 
however, conclusions derived from such analyses on a single series with such small sample sizes [11] are questionable. for example, an autoregressive process of order one with parameter 0.9, normally considered to be a strong signal, is only distinguishable from white noise [12] approximately 20% of the time with sample size of n = 22; basic simulation studies show the large number of possible different models that can fit such short series apparently well. this indicates that it is virtually impossible to tie down the correct model with such a small sample size. phenomenological sub-epidemic models [13, 14] show much more promise and have been applied with some success to short-term forecasting of covid-19 cases in guangdong and zhejiang, china. these improve performance by using bootstrap methods on short case time series, but are still ultimately based on a parametric model of single series. our work is very different as it provides exceptionally accurate spectral estimates for a novel live epidemic that is still in its early days on short series, but reliably so by using recent bayesian spectral fusion techniques. [3] . the nonparametric nature of our analysis also permits us to split case time series at a boundary (e.g. lockdown or other intervention) and analyse the two halves separately, still with very short series in each. this is perhaps harder to do with classical parametric models and to maintain consistency between the two halves. on the other hand, our method relies on good quality case series from different regions, which is again not always the case for all epidemics. we transformed the number of new daily covid-19 cases by applying a signed log transform to the first differences of the new case time series (see methods). the transformed number of new daily cases for 16 countries are shown in figure 1 each showing a distorted noisy, but characteristic sinusoidal trace. the estimated log-spectrum for the uk transformed new daily cases is shown in figure 2 and for all other countries we analysed in the extended data figures. spectral estimates are commonly displayed on a logarithmic scale [16] . spectral peaks can be observed at wavelengths of 6.7, 3.2 and 2.3 days, respectively. although the peaks are visible, the credible intervals indicate that there is a fairly large degree of uncertainty, because this time series contains 52 observations. a frequentist analysis, e.g. using the spectrum function in r [16] , produces a similar result, but with even wider confidence bands. similar spectral analyses for each country indicate three similar spectral peaks, although not always as well-defined nor in precisely the same location. figure 3 shows an estimate that is the result of coherently fusing spectra from 18 countries, giving an an effective sample size of 916 days. here, the clear spectral peaks have narrow credible intervals, due to the large effective number of days afforded by using 18 countries together. the spectral peaks are located at wavelengths of 6.7, 4.1 and 2.7 days. the peak around 6.7 days is observed in the spectral plots for individual countries and we interpret it to be a weekly effect. such a weekly effect could be produced by reporting artefacts (e.g. paperwork being delayed until monday, or carried out differently at the weekend) or due to the behaviour differences of people at weekends. 
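the pipeline just described, a signed-log transform of the first differences followed by a spectral estimate, can be sketched in a few lines. the paper uses the bayesian regspec estimator in r; the ordinary periodogram below is only a stand-in, and the 52-day case series is synthetic, so the peak wavelengths will be close to, but not exactly, the values quoted above.

```python
# sketch of the transform-then-spectrum pipeline described above.  the paper
# uses the bayesian regspec estimator in r; an ordinary periodogram is used
# here as a stand-in, and the 52-day case series is synthetic.
import numpy as np
from scipy.signal import periodogram, find_peaks

def signed_log_diff(new_cases):
    """l_t = sign(d_t) * log|d_t| with d_t = y_t - y_{t-1} (see methods)."""
    d = np.diff(np.asarray(new_cases, dtype=float))
    return np.sign(d) * np.log(np.abs(d) + 1.0)      # +1 guards against d_t = 0

t = np.arange(52)
rng = np.random.default_rng(1)
level = np.exp(0.12 * t) * (1 + 0.3 * np.cos(2 * np.pi * t / 7)
                              + 0.2 * np.cos(2 * np.pi * t / 2.7))
cases = rng.poisson(5 * level)                       # synthetic daily new cases

l = signed_log_diff(cases)
freq, spec = periodogram(l - l.mean(), fs=1.0)       # frequency in cycles per day

peaks, _ = find_peaks(spec)
top = peaks[np.argsort(spec[peaks])[::-1][:3]]       # three largest local maxima
for f in sorted(freq[top]):
    print(f"spectral peak near a wavelength of {1.0 / f:.1f} days")
```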
all countries analysed have a 5+2 working week/weekend pattern, although not necessarily the same days of the week (the actual days for a weekend are a phase effect, which does not effect the spectrum). clustering spectra and groups of countries with similar spectra we next clustered our 18 countries based on their spectrum, by calculating a dissimilarity between the spectra for each pair of countries, and then performing both a hierarchical cluster analysis and multidimensional scaling on the dissimilarity matrix. the scaling solution indicated that only two dimensions were required to encapsulate 72% of variation in the data. figure 4 shows the resultant twodimensional solution. attaching a meaning to the scaling axes in figure 4 is not easy. we hypothesise that axis 1 might indicate how badly a country has been perceived to have been affected by the virus with australia, new zealand and sweden less so and those on the left of the plot considerably more so. however, germany is the obvious anomaly to this interpretation as, currently, it has perhaps been perceived to have handled the crisis well so far. table 1 : spectral peaks for the three country groups in units of days. the peaks in the second and third rows have been arbitrarily labelled as peak a. and b. figure 5 show the spectral estimates for the three groups of countries identified in figure 4 , using the clustering techniques mentioned in methods. the peak frequencies for each of these groups is listed in table 1 , which shows differences between them. however, each group possesses a possible weekly peak and higherfrequency peaks labelled a., of around three to four days, and b., around 2.6 days. many countries experiencing the covid-19 pandemic have instituted a lockdown procedure to dramatically reduce virus transmission. at the time of writing, these countries have observed new daily covid-19 cases for between 43 and 54 days. we assume that, on average, it takes about seven days for the virus to incubate, for a person to seek attention and then be tested positive for the sars-cov-2 coronavirus. for each country, the number of days prior to and after the lockdown (19, 26) . for some of these countries the lockdown was applied over a period of two of three days and we took the median of these as the lockdown start date. the number of days before and after the lockdown are, in each case, too small to carry out anything other than the most simplistic time series to maintain statistical reliability. in particular, a spectral estimate in this situation would be subject to a high degree of uncertainty. however, figure 6 shows our coherently fused spectral estimates [3] across these countries before and after the lockdown period, making use of 192 effective days prior to lockdown and 196 days afterwards. the weekly peak is clearly visible in both estimates. the second and third peaks (labelled a. and b. in table 1 ) are visible pre-lockdown, but have all but disappeared post-lockdown. the spectrum is flat in the location where peak a. was previously, and spectral power declines considerably, relatively, where peak b. was located previously. this result is particularly interesting as it suggests that peaks a. and b. have been disrupted by the lockdown. the weekly effect seems relatively unchanged by the lockdown, indicating that perhaps it was strongly driven by non-epidemic effects, such as recording/paperwork or bureaucracy caused by weekends. 
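the clustering step just described can be sketched as follows: pairwise euclidean distances between per-country spectral estimates, a hierarchical clustering of the resulting dissimilarity matrix, and a classical (torgerson) multidimensional scaling into two dimensions, mirroring the dist/cmdscale route given in the methods. the "spectra" here are random placeholders standing in for regspec output, and the country names are just labels.

```python
# sketch of the spectrum-clustering step described above: euclidean distances
# between per-country spectral estimates, hierarchical clustering, and a
# classical mds map in two dimensions.  the spectra are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
countries = ["UK", "Italy", "Germany", "Sweden", "Australia", "Spain"]
spectra = rng.random((len(countries), 64))          # placeholder spectral estimates

dist_condensed = pdist(spectra, metric="euclidean")
dist = squareform(dist_condensed)

# hierarchical clustering of the dissimilarity matrix, cut into three groups.
tree = linkage(dist_condensed, method="average")
groups = fcluster(tree, t=3, criterion="maxclust")

# classical (torgerson) mds: double-centre the squared distances and keep the
# leading two eigenvectors, mirroring r's cmdscale.
n = dist.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dist ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1][:2]
coords = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))

for name, g, (x, y) in zip(countries, groups, coords):
    print(f"{name:10s} group {g}  position ({x:+.2f}, {y:+.2f})")
```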
the post-lockdown spectrum is higher overall than the pre-lockdown spectrum, this is due to the larger variation associated with the larger number of cases identified during the progress of the epidemic. our transformation suppresses this variation, but does not remove it entirely. we have had varied success in forecasting daily cases using a sum of two timemodulated cosine waves model, described in methods, and more research is required. we used the nelder-mead [17] optimisation routine built into r [16] , with starting frequencies of 0.31 and 0.44 taken from our uk spectral estimate plots, and built the model on the transformed cases up to april 11th. after optimisation, the fitted model resulted in modified frequencies ofω 1 = 0.34,ω 2 = 0.45, close to the starting frequencies (the other estimated parameters wereα 1 = 1.28,α 2 = 0.27,φ 1 = −0.102,φ 2 = −0.074,μ 1 = 1.3,μ 2 = −0.731,p 1 = 0.21,p 2 = 0.75). figure 7 shows transformed new daily uk cases, the model fit and forecasts. the model fit does not look too bad, many spectral peaks are being identified, but perhaps the amplitudes of them could be better matched. the untransformed forecasts for april 12th, 13th, 14th and 15th were 5250, 5200, 6373 and 6164, all with approximate 95% confidence interval of ±150. the actual number of cases for april 12th turned out to be 5288. in this case, the forecast was good. however, the two-step ahead forecast of 5200 was wrongthe true value turned out to be 4342 on april 13th. we also used several stochastic forecasting methods based on autoregressive integrated moving average modelling and exponential smoothing, but nothing that we tried was particularly successful. the series is difficult as its amplitude/variance is not constant and we suspect that frequencies are changing over time (as, e.g., the lockdown plot figure 6 indicated). however, rather than point forecasts, the general sinusoidal nature of the transformed cases suggests a further, perhaps more reasonable strategy. at this stage, the uk government and media are looking expectantly at the daily case numbers to try and detect a sustained downward trend in cases. excitement has been generated by a drop in cases two days in a row. this happened on april 5th with 5903 cases, followed by a drop to 3802 and then 3634 on april 6th and 7th and then, unfortunately, increasing to 5491 on the 8th. however, the general sinusoidal patten, with a wavelengths of about 2.7 and four days shows that we should only perhaps start believing that a downward turn is a downward trend after a sustained decrease of four days or more. however, caution needs to be applied here as there is no guarantee that the dynamics will remain unchanged. we analysed numbers of deaths using similar methods described here and found similar cycles. although we have not carried out a detailed analysis, if the number of deaths process can be approximated by a linear system [1, 10] with the numbers of cases as input, then similar cycles are to be expected. a time series with a fixed sampling rate and length has a minimum and maximum (nyquist) frequency that can be observed. [1, 10] although our spectral fusion methods [3] provide more accurate estimates of the spectrum in the normal range (equivalent to having a larger sample size), they can not provide information on frequencies outside of the normal range. 
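the forecasting model referred to above is a sum of two time-modulated cosine waves fitted to the transformed series by weighted least squares with nelder-mead. the exact modulation envelope is not recoverable from the text here, so the form mu_i + alpha_i * t**p_i * cos(2*pi*omega_i*t + phi_i) used below is an assumption; the starting frequencies 0.31 and 0.44 and the (t/n)^2 residual weights do follow the description in the methods. the data are synthetic, and the fit is on the transformed scale, so a forecast of case counts would still need the inverse of the signed-log transform.

```python
# sketch of the two-wave forecasting model described above, fitted by weighted
# least squares with nelder-mead.  the modulation envelope used here is an
# assumption; starting frequencies and residual weights follow the paper.
import numpy as np
from scipy.optimize import minimize

def model(theta, t):
    m = np.zeros_like(t)
    for j in range(2):                               # two modulated cosine waves
        alpha, omega, phi, mu, p = theta[5 * j: 5 * j + 5]
        m = m + mu + alpha * t ** p * np.cos(2 * np.pi * omega * t + phi)
    return m

def weighted_sse(theta, t, l):
    w = (t / t[-1]) ** 2                             # put more weight on recent days
    return float(np.sum(w * (l - model(theta, t)) ** 2))

t = np.arange(1.0, 53.0)                             # 52 days, as for the uk series
rng = np.random.default_rng(3)
l = (1.0 + 0.5 * t ** 0.5 * np.cos(2 * np.pi * 0.31 * t)    # synthetic stand-in for
         + 0.3 * t ** 0.4 * np.cos(2 * np.pi * 0.44 * t)    # the transformed cases
         + rng.normal(scale=0.3, size=t.size))

theta0 = [0.8, 0.31, 0.0, 0.1, 0.5,                  # alpha, omega, phi, mu, p (wave 1)
          0.8, 0.44, 0.0, 0.1, 0.5]                  # alpha, omega, phi, mu, p (wave 2)
fit = minimize(weighted_sse, theta0, args=(t, l), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})

horizon = np.arange(53.0, 57.0)                      # four days ahead
print("fitted frequencies (cycles/day):", round(fit.x[1], 3), round(fit.x[6], 3))
print("forecast on the transformed scale:", np.round(model(fit.x, horizon), 2))
```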
to estimate lower frequencies, we would need a genuinely longer series and, for higher frequencies, we would require cases more frequently than once a day, which are arguably not really necessary for any practical purpose. our analyses assume approximate stationarity and linearity for the transformed series, which is unlikely to be exactly true in practice. for example, in the uk transformed case series in figure 7 , there are hints of the series oscillation speeding up over the last ten days. practically speaking, changes in the testing regime, recording practices, the lockdowns or other interventions will change the dynamics of either the pandemic itself or recording of it. ideally, it would be of interest to use methods for non-stationary time series [18, 19] , but the current series available to us are far too short for such analyses. all computations were executed in r [16] and packages that are mentioned specifically below. let y t , for t = 1, . . . , n c represent the number of new daily cases for n c days for country c. the spectral dynamics of the number of daily cases for different countries are all countries masked by the well-known and characteristic exponential increases (and decrease, for those countries that locked down and have now passed their peak). hence, we transform our number of daily cases series to reveal the spectral dynamics. after exploration [10] the following transform was used for all series l t = sgn(d t ) log(|d t |), where the sign function sgn(x) is +1, if x is positive or −1, if x is negative, and d t = y t − y t−1 for t = 2, . . . , n c . the transform is easily inverted, which is essential for forecasting the number of daily cases. we use the regspec [3, 20] bayesian spectral estimation method with a neutral white noise prior with prior variance of 1 and all default arguments, except for a smoothing parameter of 0.7, although the results are not sensitive to the latter. regspec straightforwardly enables the production of spectral estimates using multiple data sets, with each having different lengths and produces coherent credible intervals to properly ascertain the uncertainty inherent in the estimation process. regspec can also fuse spectra for multiple series recorded at different sampling rates, but we do not need to use this aspect of its functionality here as all our case time series are reported daily. however, if a country decided to release case numbers on some other sampling plan (e.g. every two days, or weekly) then regspec would be able to fuse the spectral estimates as described here. such a feature might be of use when dealing with reporting structures that are not equipped to provide daily reporting of cases or where weekly cases are thought to be more accurate. for example, this might apply to regions with fragile health or reporting systems or populations that are spread across widely dispersed geographical regions with poor communications. although the number of cases transformed time series show similar spectral behaviour, it is possible to observe closer similarities within certain subgroups of countries. we used unsupervised clustering and scaling techniques [21, 22] to depict the relationship between different countries and suggest a clustering for them. first, for each country we produced a spectral estimate using regspec as mentioned above, and then formed a dissimilarity for each pair of countries by computing the euclidean distance between their spectral values (using the dist function in r [16] ). 
classical multidimensional scaling was then used to produce an estimated configuration using the cmdscale function in r [16] . for clustering we use hierarchical cluster analysis on the dissimilarity matrix we computed. it is well-known that dendrograms are sensitive to the input dissimilarity matrix, so we used the clusterwise cluster stability assessment by resampling method to produce a stable clustering [15] . given the form of the transformed new daily cases we propose a model, m t , that is the sum of two time-modulated cosine waves, m t = m where i = 1, 2 indexes the two waves and t = 1, . . . , n c . initial values for forecast model fitting we used α i = 0.8, φ i = 0, µ i = 0.1, p i = 0.5, for i = 1, 2. for model evaluation we put more weight on getting later observations correct and use a residual weight vector w t = (t/n) 2 where t = 1, . . . , n and n is the number of cases. for short term forecasting, we fit m(t) to the transformed daily cases by weighted least-squares using standard r [16] optimisation functions and then extrapolate m(t), using recent weighted residuals to estimate the forecasting error. the number of daily covid-19 cases for countries can be found at the website of the european centre for disease prevention and control [2] . spectral analysis and time series european centre for disease prevention and control, covid-19 cases worldwide should we sample a time series more frequently? decision support via multirate spectrum estimation (with discussion) we don't know if coronavirus deaths will peak this week spectra analysis for physical applications travelling waves and spatial hierarchies in measles epidemics seasonality and the persistence and invasion of measles the dynamics of measles in sub-saharan africa application of the arima model on the covid-19 epidemic dataset the analysis of time series: an introduction forecasting: principles and practice white noise testing using wavelets a novel sub-epidemic modeling framework for short-term forecasting epidemic waves short-term forecasts of the covid-19 epidemic in guangdong and zhejiang flexible procedures for clustering, r package version 2.2-5 r: a language and environment for statistical computing (r foundation for statistical computing a simplex algorithm for function minimization locally stationary processes a test for second-order stationarity and approximate confidence intervals for localized autocovariances for locally stationary time series non-parametric bayesian spectrum estimation for multirate data an introduction to multivariate analysis the elements of statistical learning key: cord-000614-gl9cjmno authors: pang, xinghuo; yang, peng; li, shuang; zhang, li; tian, lili; li, yang; liu, bo; zhang, yi; liu, baiwei; huang, ruogang; li, xinyu; wang, quanyi title: pandemic (h1n1) 2009 among quarantined close contacts, beijing, people’s republic of china date: 2011-10-17 journal: emerg infect dis doi: 10.3201/eid1710.101344 sha: doc_id: 614 cord_uid: gl9cjmno we estimated the attack rate of pandemic (h1n1) 2009 and assessed risk factors for infection among close contacts quarantined in beijing, people’s republic of china. the first 613 confirmed cases detected between may 16 and september 15, 2009, were investigated; 7,099 close contacts were located and quarantined. the attack rate of confirmed infection in close contacts was 2.4% overall, ranging from 0.9% among aircraft passengers to >5% among household members. 
risk factors for infection among close contacts were younger age, being a household member of an index case-patient, exposure during the index case-patient's symptomatic phase, and longer exposure. among close contacts with positive test results at the start of quarantine, 17.2% had subclinical infection. having contact with a household member and younger age were the major risk factors for acquiring pandemic (h1n1) 2009 influenza virus infection. one person in 6 with confirmed pandemic (h1n1) 2009 was asymptomatic. in early april 2009, human cases of infection with a novel influenza virus of swine origin, pandemic (h1n1) 2009 virus, were identified in the united states and mexico, and this virus spread rapidly across the world (1-3). on june 11, 2009, the world health organization raised the pandemic level to 6, the highest level for pandemic alert (4). estimating attack rates is a major task in characterizing pandemic (h1n1) 2009. some studies have reported attack rates of pandemic (h1n1) 2009 among household members and aircraft passengers (5-7). these studies suggested that the transmissibility of pandemic (h1n1) 2009 virus was low. these studies were conducted in outbreak settings, and attack rates were calculated on the basis of clinical diseases that included influenza-like illness (ili) or acute respiratory illness (ari) of close contacts rather than confirmed infection with pandemic (h1n1) 2009 virus. in addition, in these studies only symptomatic index and secondary cases were included. although most infections of pandemic (h1n1) 2009 influenza virus produce ili or ari symptoms (8-12), subclinical infection can occur and can change the estimate of attack rate. in addition, the infectivity of asymptomatic case-patients has not been clearly defined (13). because of the high rates of illness and death among the initial case-patients with pandemic (h1n1) 2009 (14), the chinese government decided to prevent and contain the rapid spread of disease through tracing and quarantine of persons who had close contact with persons with confirmed cases of pandemic (h1n1) 2009. beijing, the capital of the people's republic of china, took strict containment and control measures through october 2009. the beijing municipal government implemented border entry screening, ili screening in hospitals, health follow-up of travelers from overseas, and quarantine and testing of close contacts to identify new introduction of cases and local transmission. public health workers conducted epidemiologic investigation of all index case-patients (including those with subclinical infections) and traced and quarantined close contacts whose residence was within the jurisdiction of beijing. we estimated the attack rate of pandemic (h1n1) 2009 virus infection and assessed risk factors or correlates for infection among different types of close contacts, including household members and aircraft passengers. in 2009, under the guidance of the beijing center for disease prevention and control (beijing cdc), a network of 55 collaborating laboratories was established to perform reverse transcription pcr testing to confirm cases of pandemic (h1n1) 2009 (15). the confirmed cases included symptomatic and asymptomatic cases, and these cases were detected mainly by border entry screening, ili screening in hospitals, health follow-up of travelers from overseas, and quarantine and testing of close contacts.
once confirmed, index case-patients were immediately quarantined in designated hospitals to receive treatment while in isolation. all the confirmed cases were required by law to be reported to beijing and local cdcs. from may through october 2009, a detailed epidemiologic investigation was conducted for each confirmed case of pandemic (h1n1) 2009 (including symptomatic and asymptomatic cases) by beijing and local cdcs within 6 hours after confirmation of infection. patients with confirmed cases were interviewed about demographic characteristics, course of illness, travel and contact history, and information about close contacts. patients with confirmed cases were categorized as having imported cases (travelers) and locally acquired cases (no travel history) on the basis of where the infection was acquired. close contacts were defined as anyone who ever came within 2 meters of an index case-patient without the use of effective personal protective equipment (ppe) (including masks and gloves, with or without gowns or goggles) during the presumed infectious period. trained staff from local cdcs made the determinations on the basis of field investigation. the relationships of close contacts to index case-patients were categorized as 1) spouses, 2) other household members, 3) nonrelated roommates, 4) contacts at workplace or school, 5) nonhousehold relatives, 6) passengers on the same flight, 7) friends, and 8) service persons met at public places. a close contact on an aircraft was defined as a passenger sitting within 3 rows in front and 3 rows behind the index case-patient. all close contacts were traced and quarantined for 7 days after the most recent exposure to the index case-patient. all index case-patients detected between may 16 (the first case, the date of confirmation) and september 15, 2009 (before widespread transmission in beijing), and their close contacts were included in this study. we excluded cluster or outbreak cases for which close contacts could not be determined clearly by epidemiologic investigation (the transmission chain was obscure). for each close contact, before quarantine, a pharyngeal swab specimen was collected for reverse transcription pcr testing, regardless of symptoms. a second pharyngeal swab specimen was collected for testing for pandemic (h1n1) 2009 virus if any of the following symptoms developed in a close contact during quarantine: axillary temperature >37.3°c, cough, sore throat, nasal congestion, or rhinorrhea. data were analyzed by using spss version 11.5 (spss inc., chicago, il, usa). median and range values were calculated for continuous variables, and percentages were calculated for categorical variables. differences in attack rates were compared between subgroups of close contacts by using the χ2 test. for the significant difference found in multiple subgroups, this test does not enable identification of which multiple subgroups are significantly different, only that across all the subgroups there are differences. the variables with p<0.10 in the χ2 test were included in multivariate analysis. multivariate unconditional logistic regression analysis was conducted to determine risk factors associated with infection in close contacts. backward logistic regression was conducted by removing variables with p>0.10. odds ratios (ors) and 95% confidence intervals were calculated for potential risk factors of infection. the hosmer-lemeshow goodness-of-fit test was used to assess the model fit for logistic regression.
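the regression analysis above was carried out in spss 11.5; purely as an illustration, an equivalent workflow could look like the following r sketch, in which the data frame `contacts` and its column names are hypothetical, and `step()` is an aic-based approximation of the p-value-driven backward selection described in the text.

```r
# hedged sketch of the multivariate unconditional logistic regression;
# `contacts` and its columns (infected, age_group, contact_type,
# exposure_phase, exposure_hours) are hypothetical names.
fit <- glm(infected ~ age_group + contact_type + exposure_phase + exposure_hours,
           data = contacts, family = binomial)

# backward elimination; step() uses aic, so this only approximates the
# p > 0.10 removal rule used in the original analysis
fit_bw <- step(fit, direction = "backward", trace = 0)

# odds ratios with wald-type 95% confidence intervals
or_table <- exp(cbind(or = coef(fit_bw), confint.default(fit_bw)))
print(or_table)

# a hosmer-lemeshow goodness-of-fit test could then be run, for example with
# ResourceSelection::hoslem.test(fit_bw$y, fitted(fit_bw), g = 10)
```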
all statistical tests were 2-sided, and significance was defined as p<0.05. a total of 613 eligible index case-patients, detected from may 16 through september 15, 2009, were included in this study. through field epidemiologic investigations, 7,099 close contacts were traced and quarantined in beijing. the median number of close contacts per index case per day was 7.0 persons (range 2.0-95.0 persons); the median number for an imported index case was 7.0 persons (range 1.7-95.0 persons) and for a locally acquired index case was 5.3 persons (range 1.0-25.0 persons). for the 601 symptomatic index case-patients, the median interval between illness onset and sample collection was 1.0 days (range −1.9 to 7.0 days). among close contacts with symptomatic infection, the median interval between illness onset and sample collection was 0.5 days. more than 85% of close contacts were quarantined within 72 hours after interview of the index case-patients. the median interval between first exposure and quarantine was 3.4 days for the close contacts, and it was shorter, on average, for flight passenger contacts than nonpassenger contacts (1.7 days vs. 3.8 days). for symptomatic close contacts infected with pandemic (h1n1) 2009, the median of generation time (i.e., the time from illness onset in an index case to illness onset in a secondary case) was 2.4 days; it was shorter for flight passenger contacts than nonpassenger contacts (1.6 days vs. 2.5 days) (table 1). approximately 43% of the index case-patients were women; the median age was 20 years, and 38% likely contracted pandemic (h1n1) 2009 virus locally because they had not traveled recently. among the index case-patients, 2% had subclinical infection. only 18% of index case-patients had close contacts with confirmed pandemic (h1n1) 2009 (table 2), and the total number of close contacts who were infected by the virus from 110 index case-patients was 167. fifty percent (3,514 of 7,032) of close contacts were women, and the median age was 27 years. approximately 12% of close contacts were household members of index case-patients (spouse or other household member), and aircraft passengers accounted for 44% of close contacts. approximately 61% of close contacts were exposed to symptomatic index case-patients during their symptomatic phase. about 70% were quarantined in a quarantine station (table 2). the overall attack rate for infection among close contacts (positive test result) was 2.4% (167 of 7,099), indicating that 1 index case-patient transmitted infection to 0.27 close contacts (167 of 613) on average (reproduction number = 0.27). among those close contacts with a positive test result, 14.4% (24 of 167) had subclinical infection; among the close contacts with positive test results at the start of quarantine, 17.2% (20 of 116) had subclinical infection. attack rates did not differ by index case-patient's sex (p = 0.225). however, attack rates differed significantly by index case-patient's age (p = 0.022), and the lower attack rate was found for older index case-patients. there was no significant difference in attack rates between close contacts of patients with imported cases and those with locally acquired cases (p = 0.282). no infection was found in close contacts exposed to index case-patients with subclinical infection, and the attack rate observed in close contacts exposed to symptomatic index case-patients during their symptomatic phase was higher (p<0.001).
almost identical attack rates were found among male and female close contacts (p = 0.808). however, attack rates were significantly different among different age groups of close contacts (p<0.001), and the lowest attack rate was found for those >50 years of age. the attack rates were significantly different across 8 contact types (p<0.001). the attack rate was 5.3% among spouses and 6.6% among other family members in the household, and was lower among other types of close contacts (table 3). the attack rate among passengers on the same flight was low, 0.9% overall, and 1 index case-patient transmitted infection to 0.19 close contacts on a flight on average (28 of 147), and the attack rate was higher among the passengers with longer flight times (>12 hours, p = 0.001). the attack rate among close contacts of service persons at public places was 0.2%, and 1 index case-patient transmitted infection to 0.01 close contacts of service persons on average (1 of 113). nonpassenger close contacts with longer exposure duration (>12 hours), compared with those with shorter duration (<12 hours), recorded the higher attack rate (p<0.001) (table 3). by multivariate analysis, age and type of contact were the major predictors of infection; close contacts in younger age groups (or 3.42; p = 0.002) and those 0-19 years of age (or 7.76; p<0.001) were at higher risk for infection. other significant independent risk factors associated with infection included being a household member of a person with an index case (or 3.83; p<0.001), being exposed to index case-patients during their symptomatic phase (or 1.86; p = 0.003), and exposure duration >12 hours (or 1.83; p = 0.002). similar risk factors were observed among aircraft passengers. we estimated that pandemic (h1n1) 2009 virus was transmitted by 18% of index case-patients to their close contacts and that 2.4% (167 of 7,099) of close contacts we traced were infected. our data indicate that pandemic (h1n1) 2009 virus has low transmissibility in nonoutbreak settings. we found that 1 index case-patient transmitted infection to 0.27 close contacts on average, i.e., reproduction number = 0.27. this finding suggests that among those quarantined index case-patients, the number of persons with secondary cases who could be traced through rigorous field investigation was small and far less than the number needed for the sustainable transmission of infectious disease in the population (reproduction number >1). however, the fact that the pandemic eventually spread in beijing indicates that contact and case tracing were far from complete, especially later in the summer and early fall of 2009. the strict control measures may have worked to some extent at the beginning but were outpaced by local transmission (16); the percentage of locally acquired infections ranged from <10% in june 2009 to >80% in september 2009 (data not shown). in this study, the median number of close contacts per index case-patient per day was 7.0 persons. although locating and quarantining these close contacts was done quickly, and stringent quarantine measures were used, which hindered implementation of control measures, the real number of close contacts was unknown and probably exceeded this number. many close contacts were persons met in public places, including public transportation, theaters or cinemas, and shopping malls, and it is nearly impossible to trace all of the contacts.
in addition, some persons who had worn ppe during contact with index case-patients were excluded from close contacts management (i.e., they were not quarantined), but because wearing ppe might not protect (or fully protect) against infection, some persons excluded might have become infected. in addition, many persons with mild and asymptomatic cases cannot be detected, but they may transmit the virus. furthermore, the short generation time of pandemic (h1n1) 2009 shown in this study and in a previous study (13) could lead to the rapid accumulation of infection sources and close contacts. this rapid compounding could overwhelm response capacity and would have resulted in compromised effectiveness of containment measures. it should also be mentioned that we did not include persons with cluster or outbreak cases for whom close contacts could not be determined clearly by epidemiologic investigation to examine the basic feature of pandemic (h1n1) 2009 (e.g., generation time), and the reproduction number obtained from our data is an underestimate. attack rates of infection differed significantly by contact type. among household members of index case-patients, the attack rate was the highest, as shown in the multivariate analysis after controlling for age and other factors. the most likely reason for this finding is that household members are more likely to have come into closer contact with index case-patients for a longer period with shorter distance and longer duration. another possible reason is that household members may have some certain linkage with index cases in genetic susceptibility or living habits that would cause higher predisposition in household members than in other close contacts. this finding is similar to findings in other investigations of respiratory infectious disease (17). close contacts on flights accounted for the highest proportion of all the close contacts, in part because of how the index cases were detected and the broad definition we used for close contacts. however, the attack rate was much lower than that for other close contacts; 1 index case infected only 0.19 close contacts on flights on average. this finding indicated that the possibility of transmission of pandemic (h1n1) 2009 virus on flights was low, and the yield of tracing and quarantining of close contacts on flights was limited. tracing contacts of service persons at public places was more difficult than tracing other categories of contacts, and the lowest attack rate (0.2%) was recorded in this category. despite extensive measures, on average, only 0.01 infected close contacts per index case-patient were identified among service persons. tracing the contacts of service persons at public places seems far less cost-effective. criteria for close contacts on flights and those of service persons should be refined with respect to exposure duration and age of those exposed. exposure to index case-patients for >12 hours was a significant independent risk factor for infection in flight passenger contacts. this finding suggests that limiting the time of contact with persons with ili on aircraft can reduce risk for transmission, and a long duration of exposure may be necessary for transmission to occur on aircraft. younger close contacts were at higher risk for infection than older ones.
the possible reason was that younger persons had much closer contact with index case-patients than did older persons; another reason may be that younger persons were more susceptible to infection with pandemic (h1n1) 2009 virus (18). this finding was consistent with findings reported in other studies (5, 6). no secondary cases were found among close contacts exposed to index case-patients with subclinical infection. the attack rate among close contacts who were exposed to symptomatic index case-patients during their symptomatic phase was much higher than that among those exposed to these case-patients before their illness onset. exposure to index case-patients during the symptomatic phase was a significant independent risk factor for infection among close contacts. these findings indicate that the infectivity of pandemic (h1n1) 2009 virus was higher after illness onset, and that the infectivity of symptomatic pandemic (h1n1) 2009 case-patients before illness onset was higher than that of persons with subclinical cases, although persons in each group were asymptomatic when in contact with other persons. in general, the earliest infectious time for pandemic (h1n1) 2009 was considered as 1 day before illness onset (19). we found that index case-patients and infected close contacts shed pandemic (h1n1) 2009 virus <1 day before illness onset, which suggests that the infectious period of symptomatic persons with pandemic (h1n1) 2009 might be <1 day before illness onset. among close contacts with pandemic (h1n1) 2009, ≈14.4% were asymptomatic. it is noteworthy that specimens from some close contacts tested negative for pandemic (h1n1) 2009 virus before quarantine, but those persons could shed the virus during quarantine without symptoms. such infection could not be detected, and the proportion of subclinical infection was underestimated. therefore, we calculated the proportion of subclinical infection by cross-sectional analysis of the subclinical infection of close contacts before quarantine, and we found that ≈17% of case-patients with pandemic (h1n1) 2009 were asymptomatic. this study has several limitations. we could not find all close contacts of persons with pandemic (h1n1) 2009 and did not know their infection status, so the infection parameters of pandemic (h1n1) 2009 that we found in this study might not be precise, especially for reproduction number, which may be underestimated to some extent. furthermore, we could not exclude the possibility that the infected close contacts had been infected from another unknown source before quarantine started, which might influence our conclusion to some extent. in summary, the attack rate among close contacts was low, even among household contacts. household member and younger age were the major risk factors for infection with pandemic (h1n1) 2009 virus among close contacts. approximately 17% of cases of pandemic (h1n1) 2009 were asymptomatic. table 2 were included in multivariate unconditional logistic regression analysis. hosmer-lemeshow goodness-of-fit test was used to assess the model fit for logistic regression. or, odds ratio; ci, confidence interval; na, not available, indicating not included in the final model. †one dependent variable (infection with pandemic [h1n1] 2009 virus) and 5 independent variables (age of index case-patient, type of exposure to index case-patients, age of close contacts, relationships to index case-patients, and exposure duration of close contacts) were included in multivariate analysis.
one independent variable (age of index case-patient) was removed in the stepwise regression equation. the goodness-of-fit test suggested that the logistic regression model fitted well (p = 0.631). ‡one dependent variable (infection with pandemic [h1n1] 2009 virus) and 4 independent variables (age of index case-patient, type of exposure to index case-patient, age of close contacts, and exposure duration of close contacts) were included in multivariate analysis. two independent variables (age of index case-patient and type of exposure to index case-patient) were removed in the stepwise regression equation. the goodness-of-fit test suggested that the logistic regression model fitted well (p = 0.982). §one dependent variable (infection with pandemic [h1n1] 2009 virus) and 5 independent variables (age of index case-patient, type of exposure to index case-patient, age of close contacts, relationships to index case-patient, and exposure duration of close contacts) were included in multivariate analysis. two independent variables (age of index case-patient and exposure duration of close contacts) were removed in the stepwise regression equation. the goodness-of-fit test suggested the logistic regression model fitted well (p = 0.751). ¶exposed to symptomatic index case-patients before their illness onset or exposed to index case-patients who had subclinical infections. pneumonia and respiratory failure from swine-origin influenza a (h1n1) in mexico swine influenza a (h1n1) infection in two children-southern california pandemic potential of a strain of influenza a (h1n1): early findings world health organization. world now at the start of 2009 influenza pandemic household transmission of 2009 influenza a (h1n1) virus after a school-based outbreak household transmission of 2009 pandemic influenza a (h1n1) virus in the united states transmission of pandemic a/h1n1 2009 influenza on passenger aircraft: retrospective cohort study novel swine-origin influenza a (h1n1) virus investigation team emergence of a novel swine-origin influenza a (h1n1) virus in humans clinical features of the initial cases of 2009 pandemic influenza a (h1n1) virus infection in china subclinical infection with the novel influenza a (h1n1) virus shedding and transmission of novel influenza virus a/h1n1 infection in households-germany clinical and epidemiologic characteristics of 3 early cases of influenza a pandemic (h1n1) 2009 virus infection, people's republic of china european centre for disease prevention and control. ecdc risk assessment: pandemic h1n1 outbreak of swine-origin influenza a (h1n1) virus infection-mexico severe, critical and fatal cases of 2009 h1n1 influenza in china alternative epidemic of different types of influenza in 2009-2010 influenza season, china evaluation of control measures implemented in the severe acute respiratory syndrome outbreak in beijing incidence of 2009 pandemic influenza a h1n1 infection in england: a cross-sectional serological study interim guidance for emergency medical services (ems) systems and 9-1-1 public safety answering points (psaps) for management of patients with confirmed or suspected swine origin influenza a (h1n1) infection we thank fujie xu for her suggestions and comments. all material published in emerging infectious diseases is in the public domain and may be used and reprinted without special permission; proper citation, however, is required.
key: cord-245047-d81cf3ms authors: gupta, sourendu title: epidemic parameters for covid-19 in several regions of india date: 2020-05-18 journal: nan doi: nan sha: doc_id: 245047 cord_uid: d81cf3ms bayesian analysis of publicly available time series of cases and fatalities in different geographical regions of india during april 2020 is reported. it is found that the initial apparent rapid growthin infections could be partly due to confounding factors such as initial rapid ramp-up of disease surveillance. a brief discussion is given of the fallacies which arise if this possibility is neglected. the growth after april 10 is consistent with a time independent but region dependent exponential. from this, r0 is extracted using both known cases and fatalities. the two estimates are seen to agree in many cases; for these cfr is reported. it is seen that cfr and r0 increase together. some public health implications of this observation are discussed, including a target doubling interval if medical facilities are to remain adequate. sars-cov-2 is a virus which has newly entered the global human population [1] . as this host-parasite system evolves towards an equilibrium, its epidemiology has been studied extensively, but with some conflicting results [2, 3] . the true extent of its penetration into the population is as yet open to question [4] , since testing is fairly restricted in most countries. nor is the progression of the disease, covid-19, or its method of spread completely clear [5] . since the virus is already so widely established, it seems unlikely that it will be totally eliminated soon. so it is important to extract basic epidemiological parameters as cleanly as possible. india has managed to geographically contain the spread of the covid-19 epidemic with the nation-wide lock-down which started on 24 march, 2020. at the end of april the proportion of identified cases in india as a whole was a few tens per million, with 1-2 orders of magnitude more in hot spots. even if this were wrong by an order of magnitude, it would still mean that the epidemic remains at an early stage in india. this, combined with the lock-down, presents an opportunity to examine the growth of the epidemic in multiple isolated regions which implement essentially the same policy with regard to testing. this study examines the heterogeneity in the growth rate of the disease, in several ways. first, the doubling intervals, τ , of the cumulative number of identified cases, c(t), and the cumulative number of fatalities, d(t), is examined. from τ it is possible to extract the basic reproduction rate, r 0 , within epidemic models. marked heterogeneity are observed. after this the correlation between the case fatality ratio, cfr, and r 0 is studied. epidemic data, especially at the beginning is never clean. the public health system has to gear up for disease surveillance. the continuing recurrence of cholera epidemics [6] , the spread of dengue and chikungunya [7] , the successful surveillance and elimination of nipah [8] and zika [9] , show that india has a mixed record on epidemic surveillance. in addition to a possible lag between the beginning of the epidemic and its surveillance, there could be a problem of incomplete surveillance during the time the health service ramps up. any examination of data has to allow for the identification of confounding factors such as these. for the covid-19 surveillance data, there are further cautionary remarks. 
the icmr guidelines for testing [10] specify that only symptomatic cases should be tested using rrt-pcr. this part of the policy has been unchanged since the middle of march. depending on the fraction of cases which are symptomatic, this could miss the actual prevalence of the disease in the population. estimates of the fraction of asymptomatic infections range as high as 80% [11] , implying that, in this extreme case, the testing policy can never reveal more than 20% of the cases. the social stigma attached to covid-19 [12] also means that some fraction of infections may be cryptic. there are uncertainties in the statistics of fatalities also. it has been reported that in europe and the us the number of fatalities due to covid-19 may have been underestimated by a factor of 2-3. indian cities have fairly complete registries of deaths, so miscounting of covid-19 fatalities could come mainly from mistaken or incomplete reports of the cause of death. for larger regions, say districts and whole states, where most deaths happen at home and death certificates are not common [13] the errors in counting fatalities may be significantly larger, and hard to estimate at this time. one point about the quality test that is developed here is that absolute numbers are not as important for it as the check that fatalities and identified cases are independently tracing the same rate of growth of the epidemic. this is expected at the beginning of the epidemic, when all epidemic models become linear, and the growth of generic measures is driven by the maximum eigenvalue of the linearized models. however, in the extraction of the cfr, the absolute counts do matter. in spite of the uncertainties, the correlation of cfr and r 0 holds important lessons for public health in the inevitable later stages of this epidemic in india, and the middle and low income countries of the world. data has been extracted from official sources where possible. for ahmedabad city, the data is made publicly available by the municipal council of the city [14] . this well-organized site corrects data retrospectively for up to about 10 days. for chennai city, the data has been collected from daily tweets by the municipal council [15] . for indore city, the data was collected from daily bulletins of the chief medical and health officer and the collection is available for public use [16] . for mumbai city the data has been collated from the daily tweets by mcgm [17] into a publicly available site [18] . for pune district the data was collated from the daily tweets by the district authority [19] and the collection is publicly available [20] . for delhi and all other states, the data was taken from the publicly available collection at covid19india [21] . this site corrects data retrospectively for over a week. only data on the cumulative number of identified cases, c(t), and the cumulative number of known fatalities, d(t), are used in this analysis. for this work data collection stopped on may 1, 2020, and retrospective corrections made after this are not included. the unquantifiable parts of the errors in the counts of cases and fatalities due to covid-19 were discussed in the previous section, along with the reasons why their estimates need not be included in this analysis. however, there is another part of the errors in the daily counts of cases and fatalities which come from backlogs of tests or hospital records. these shuffle a fraction of the numbers from one day to another, and therefore cause errors in the daily counts. 
as long as the number of facilities keeps pace with the growth of the epidemic, these errors remain proportional to the number of cases and fatalities. since the wait time for hospital beds for covid-19 cases has remained roughly constant during the period of study, this argument is expected to hold. in view of this, errors of 20% of the reported values of c(t) and d(t) are assigned. the specific fraction, 20%, was chosen in order to cover the long range fluctuations visible in the time series (for example, those visible on days 3 and 13 of figure 1). it has been seen that official reports and independent estimates of these numbers are generally within this range. at such an early stage in the infection, it is reasonable to assume exponential growth, i.e., doubling every τ days. within this assumption one can check how well the lock-down is working by letting the doubling interval become time dependent. the simplest function to try is a linear change in τ, i.e., d(t) = d_0 2^{t/(τ_0 + τ_1 t)} (1), with a similar set of three parameters for c(t). note that τ_0 has dimensions of time, whereas τ_1 is dimensionless. a fitting form with constant doubling interval was also used; this is denoted τ, dropping the subscript. the fitting procedure follows the methods of [23], with gamma distributions used as prior probability distribution functions (pdfs) for τ_0 and c_0. the additional parameter τ_1 is allowed to take positive and negative values, by letting the prior pdf be a gaussian. for all these distributions, the widths are taken large enough that the posterior distribution is insensitive to the choice of priors. the appendix contains details of the relation between a time varying doubling interval and the time variation of the basic reproductive rate r_0. this requires choosing a model of the epidemic. using the seir model, and the median interval between the appearance of symptoms and the time of fatality, t_2 = 17.8 days [22], the fitted doubling parameters can be converted to the corresponding r_0^0 and r_0^1 (eq. (2)). when a constant τ is used, one can set τ_1 = 0 in the above formulae and write r_0 and τ instead of r_0^0 and τ_0. exactly the same procedure is followed for fits to the time series for d(t). estimates of the median values of the parameters, along with interquartile ranges (iqr) and 95% credible intervals (cri), are quoted for the doubling intervals as well as r_0. the analysis of the time series for c(t) and d(t) leads quite naturally to the case fatality ratio, cfr. this is defined as the ratio cfr = d(t)/c(t). if c(t) is underestimated, then cfr is overestimated, and conversely, when d(t) is underestimated, then cfr is also underestimated. this was regulated using a bayesian estimator. since the outcome is binomial, the prior pdf used is a beta distribution with α = 1 and β = 2. these choices make the posterior distribution insensitive to doubling or halving the values of the priors. the posterior distribution is of the same form with α = 1 + d(t) and β = 2 + c(t) − d(t), with t taken to be the final day of the analysis. since c(t) and d(t) are both large, the following approximations for the median, µ, and standard deviation, σ, may be used: µ ≈ d(t)/c(t) and σ ≈ √(µ(1 − µ)/c(t)). iii. results. the time series of c(t) and d(t) is shown for the example of delhi in figure 1. of the regions that we analysed, most cities show an initial rapid growth followed by a tempered growth. the exceptions are ahmedabad and chennai among cities, and the states of gujarat, kerala, and west bengal. note that day one is taken to be march 31, 2020, which is 7 days after the beginning of the national lock-down.
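the beta-binomial estimator described above is simple enough to sketch directly in r; the counts below are purely illustrative, and qbeta gives the exact posterior quantiles rather than the large-sample approximation.

```r
# cfr posterior under a beta(1, 2) prior: with d deaths out of c known cases
# the posterior is beta(1 + d, 2 + c - d).  c_t and d_t below are illustrative.
cfr_posterior <- function(c_t, d_t) {
  a <- 1 + d_t
  b <- 2 + c_t - d_t
  c(median = qbeta(0.5,   a, b),
    q25    = qbeta(0.25,  a, b),   # interquartile range
    q75    = qbeta(0.75,  a, b),
    lo95   = qbeta(0.025, a, b),   # 95% credible interval
    hi95   = qbeta(0.975, a, b),
    sd     = sqrt(a * b / ((a + b)^2 * (a + b + 1))))
}

cfr_posterior(c_t = 5000, d_t = 200)   # hypothetical counts for one region
```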
since t 2 = 17.8 days, it might be expected that the growth rate of cases in the pre-lock-down period could manifest itself in that of fatalities until around day 11. in case of a successful lock-down, d(t) could then show an initial exponential growth, tempered after day 11. the initial data for fatalities in delhi, indore, mumbai and pune can indeed be described by an exponential. however, the doubling interval in pune turns out to be half of that in mumbai, although the average population density of mumbai is about 6 times larger than the average in pune city. the ansatz of eq. (1), i.e., a linearly varying doubling interval, was also examined for urban regions. the results are collected in table i . in most locations the initial doubling interval seems to be between half and day and two days. when converted to r 0 , one obtains extremely high values, far in excess of what has been quoted in the literature. certainly r 0 could vary from place to place, since it depends on infectivity of the virus as well as the social networks in each location, and the latter may change from one place to another. however, τ 0 for pune is one third that of mumbai, when mumbai has six times the average population density. the wrong dependence of the doubling time for fatalities on population density, together with the observation that c(t) shows a growth till the same date, supports the idea that there could be a more parsimonious explanation for this common period of growth. this is discussed in the next section. at the moment, any statistical evidence for a gradual slowing of the growth rate of the epidemic is hidden due to some confounding factors. in view of this, the analysis was continued with a constant doubling interval, τ , applying it to the period after day 10 or 11. for this part of the analysis data was from three states, namely gujarat, kerala, and west bengal, was also used. from figure 1 , one sees that this simpler model provides as good a description of the data as the model of eq. (1). furthermore, this yields more realistic values of r 0 implies that during the lock-down each of these places has seen a location dependent constant doubling interval. the values of τ , along with inferred values of r 0 , are collected in table ii . these are the primary results of this analysis. it was noted that the number of known cases, c(t), is definitely missing cases among those who have not been tested. this could include a possibly large, fraction of asymptomatic and non-critical or pre-symptomatic cases [24, 25] . however, india's disease surveillance mechanism has concentrated on identifying critical cases and contact tracing, which could be a good tracer of the growth of epidemics. if this reasoning is correct, then, during the early growth of the epidemic, one should be able to obtain reliable doubling intervals from the cumulative counts of test positives [23] . the results of this analysis are also given in table ii . the two independent estimates of r 0 agree well enough that a closer look reveals interesting patterns. the scatter plot in figure 2 of r 0 obtained in two different ways shows several interesting patterns. first, there seem to be two groups of outbreaks. most regions have r 0 below 3. among the regions that we studied, ahmedabad and gujarat were a separate group, which saw a faster epidemic growth, with r 0 above 3. finally there is kerala, a different outlier, whose doubling intervals are longer than t 2 , and therefore with very low values of r 0 . 
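as a rough illustration of how a constant doubling interval can be read off a cumulative count series, a least-squares fit on a log2 scale is sketched below. the analysis in the text is bayesian, uses gamma priors and 20% errors, and reports credible intervals, none of which this simplified version reproduces; `cases` is a hypothetical vector of cumulative counts restricted to the post-april-10 window, and the conversion of τ to r_0 through the seir relation is deliberately left out.

```r
# simplified, non-bayesian sketch: fit c(t) = c0 * 2^(t / tau) by ordinary
# least squares on a log2 scale.  `cases` is a hypothetical cumulative series.
t_day <- seq_along(cases)
fit   <- lm(log2(cases) ~ t_day)       # slope is 1 / tau
tau   <- 1 / coef(fit)[["t_day"]]      # doubling interval in days
tau
# converting tau to r_0 requires the seir relation referred to as eq. (2)
# in the text, which is not reproduced here.
```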
the case of kerala merits a separate remark. the cumulative number of fatalities reached 4 at the end of the period of study. with such low counts of fatalities the assumption of exponential growth cannot be well tested. the counts of total infections was larger, and supported the hypothesis of exponential growth over the period studied. second, one sees that most estimates lie close to the diagonal line. if the data was perfect, and the epidemic grew steadily, the estimates would lie exactly on this line. with this requirement we can separate the regions into two groups. one consisting of ahmedabad, chennai, delhi, gujarat, and west bengal are, within statistical uncertainties, on this line. the second group, with indore, kerala, mumbai, and pune, are not. this could indicate some issues with the data. on the other hand, if the data is as good as the other regions, then the fact that they are off the diagonal line should be understood. kerala, which is the only region which lies below the diagonal, is perhaps seeing a lower growth in new cases than fatalities, which could be indicative of a gradual slowing down of the epidemic. due to the lag by t 2 , fatalities would see the slowing down later. conversely, the regions which lie above the diagonal (namely indore, mumbai, and pune, and, possibly, chennai) could be seeing an increased growth in infections, not yet visible in fatalities because of the same time lag. whether these scenarios are true, or the data quality is not dependable, should be known to the health agencies now, and would become visible to the public later. when there is a statistically significant difference between the doubling interval determined by c(t) and d(t), then the ratio gives a time-dependent cfr. this is usually understood to be a transient phenomenon. in view of this, the analysis was restricted to ahmedabad, chennai, delhi, gujarat, kerala, and west bengal, i.e., the regions which lie on the diagonal line of figure 2 , and therefore are seeing a steady growth of identified cases as well as fatalities. the case fatality ratios for these regions are plotted against the r 0 inferred from d(t) in figure 3 . the most obvious trend is that for the group of three cities there is an overall trend towards smaller cfr with decreasing r 0 . this is also true of the two states. however, the cfr for states is displaced upwards from that for cities. both trends have strong implications for the public health outlook and will be discussed further in the next section. counts of known cases and fatalities of covid-19 from five cities (ahmedabad, chennai, delhi, indore, and mumbai), one district (pune), and three states (gujarat, kerala and west bengal) was investigated in this work. in two of the groups, there was one case each where the epidemic was not severe at the end of april (chennai among cities, and kerala among states). the others were known hot spots. kerala is special because the number of fatalities is too low for statistical tests to be meaningful. there are strong regional heterogeneity in the course of the epidemic, indicating the necessity of looking at its spread at extremely local scales in order to check and control it. the time series both of known cases and fatalities in four out of the six urban centers showed a rapid rise for about 18 days after the lock-down. this was followed by a much slower growth. 
since fatalities track cases with a delay of 17.8 days on the average, the early part of this data could track the growth in the time before the lock-down. however, it turns out that the data grows faster in less dense urban areas. moreover, this hypothesis is not tenable for the growth in the number of known cases. a possibility which resolves these difficulties is that this rapid rise of numbers in the early days tracks the rapid improvement of disease surveillance rather than the epidemic. the fact that the positive cases in kerala does not show such a rapid initial growth is consistent with reports that the state activated disease surveillance after the first infections came from abroad [26] . this could also be true of ahmedabad and gujarat, two other centers which show no such initial increase, since the state had passed through the surveillance challenge of zika virus in recent years [9] . due to this confounding factor, it is not possible to use the data until april 10 or 11 to make any statistically valid measurement of the growth of the epidemic before the lock-down. neglecting this leads to multiple fallacies, which i remark on next. the apparent slowing down of the growth in later stages may be falsely interpreted as a transition to polynomial growth. as shown in eq. (a5) and eq. (a7), this is equivalent to a time dependent doubling interval. it has been discussed in the previous subsection that this leads to highly unlikely properties of the covid-19 epidemic. the same apparent slowing down of the growth rate in india has also been interpreted within the homogeneous sir model with constant, time invariant, parameters [27] . in such a simple model the time dependence can only come from early evolution towards herd immunity. this gives rise to the unlikely conclusion that herd immunity will be reached for covid-19 while 99% of the population remains susceptible. a misrepresentation of data also arises when "instantaneous doubling intervals" or similar measures of exponential growth are constructed using c(t) for one day, or averaged over small windows of time [28] . this shows a spurious gradual slowing of growth during the first three weeks of the lock-down. in later weeks these estimates are also plagued by spurious effects which result when delayed reports are dumped into cumulative numbers on one day instead of being assigned to correct past dates. these appear as evidence of local spurts or slumps in growth. evidence of retroactive corrections from [14] shows that delays of as much as ten days may occur. when these artifacts are averaged over a moving window, this gives the mistaken appearance of peaks and troughs, and may put erroneous pressure to change policies. due to the reasons discussed in the previous subsections, the period after april 10 or 11 constitutes the base data for the main part of this analysis. as shown in figure 1 a constant growth rate in each locality during the the lockdown models the data as well as a growth rate which changes linearly with time. this is also the most parsimonious hypothesis about the growth of the epidemic. the observed doubling interval, and the derived quantity r 0 , fall into three groups (see table ii and figure 2 ). several geographical regions have r 0 less than 3. kerala has r 0 ≃ 1.7 (a doubling interval larger than t 2 , the interval from the emergence of symptoms to death). gujarat and ahmedabad have r 0 higher than 3. 
since this is the growth rate during the lock-down, population density effects are unlikely to be the major determinant of r 0 . it would be worthwhile to consider the role of individuals with extremely large number of contacts in this context, or a significant tail of the distribution with small number of contacts, but still above three. five regions pass the following data quality test-the value of r 0 obtained from the growth of fatalities and cases are equal. this does not mean that the number of cases is correctly counted. rather it indicates that the effort to find the cases requiring critical care, and tracing their contacts has successfully resulted in tracking a constant fraction of all infected persons. it may miss, for example, a large fraction of asymptomatic cases. for the five geographical regions which pass the quality test described in the previous section, a further study was performed. the dependence of the case fatality ratio, cfr (i.e., the ratio of the observed number of fatalities and cases) on r 0 was investigated. although the number of cases identified may be much smaller than the actual number of cases, the chance that cases are identified in these five regions are expected to be similar, since the rate of testing is about the same. a positive correlation between r 0 and cfr is observed. one possible reason for this is that with lower r 0 the number of critical cases grows slower, giving medical practitioners time to figure out good practices which prevent critical care patients from progressing to fatality. deeper studies of this factor, comparing case data from different regions, is called for in future. it is possible that this is one of the most positive, and least discussed, outcomes of the lock-down. another possibility may also be conjectured. careful maintenance of social distancing, necessary to reduce r 0 , results in evolutionary pressure on the virus. lock-down and similar methods force the virus to evolve in a direction which maximizes its ability to reproduce, which it can do if the disease becomes less critical or asymptomatic, and the chances of fatality decrease. it would be interesting to compare different regions across the globe for changes in the serial time and cfr. at the observed rate of growth, and with the current rate of testing, more than 0.5% of the population in hot spots will begin to test positive for infections in about a month. a constant rate of growth of infections means that the number of hospital beds will also grow at the same rate, for as long as the epidemic is growing. even if the rate is slowed down heavily, as it is already in delhi, mumbai, and chennai, the demand for hospital facilities will keep on growing, as long as the epidemic grows. this demand is already beginning to outstrip resources in the larger cities. the mean interval between the start of symptoms and discharge was estimated to be 24.7 days [22] . this means that unless the doubling interval is kept above 35 days (= 24.7/ ln 2), the demand on hospitals will keep rising. of the places we studied, only kerala has begun to approach this break-even point. cfr is currently small, partly because medical facilities have been able to cope with the rate of growth. if the number of cases exceeds the capacity of the medical system, cases which might have recovered will be harder to treat. inevitably in such cases cfr will climb. it is useful to note that in figure 3 the statewide figures for cfr are higher than those for cities. 
this is a reflection of the relative paucity of medical services outside cities, and is a pointer to what might happen when the number of infections rises beyond the sustainable capacity of hospitals. i thank rahul banerjee, prahlad harsha, d. indumathi, and r. shankar for sharing collated data on various cities. i thank jayasree subramanian for providing me with the reference [14] . in this form of the equation time is measured in units of the case resolution time. this equation assumes that the fraction of susceptible persons is close to unity, and the fraction of persons in any other compartment is very small. as argued before, this is a reasonable assumption to make. the cumulative number of infections is then found by integration. there is no closed form result for the general case. if only the linear term in the expansion of r 0 is retained, then the function erfi is defined through the integral it is possible to use an expansion for t ≪ t , which gives the form i(t) = i(0) 1 λ 0 e λ0t 1 + ǫ(1 − λ 0 t + λ 2 0 t 2 ) + o(ǫ 2 ) (a5) where the notation λ 0 = r 0 0 − 1, and ǫ = r 1 0 /(λ 2 0 t ) are introduced. the imaginary part vanishes exponentially. this is easy to match to the phenomenological form i(t) = i(0) 2 t/(τ0+τ1t/t ′ ) = i(0) 2 t/τ0 1 − ln 2τ 1 where an artificial expansion parameter t ′ is introduced. it is set to unity after expansion. matching these two expansions is accurate only when λ 0 t is large. then the phenomenological parametrization of eq. (a2) can be connected to the parameters of (non-autonomous) evolution equations for the epidemic. note that t and t ′ are both regularization scales, in the sense of a renormalization group, whose numerical value need not be specified. in order to change units of time to days, it is necessary to choose a model of the epidemic. if one uses the seir model, then the unit of time would be the median interval between the appearance of symptoms and the time of fatality or recovery, whichever is earlier. this quantity, t 2 = 17.8 days [22] . if one instead uses the sir model, then it is appropriate to choose the unit of time to be the median interval between the beginning of the infection and the earlier of the time of fatality or recovery. this is t 1 + t 2 , where t 1 is the median pre-symptomatic period, t 1 = 5.1 days. here the conversion is made within the seir scheme. this gives when a constant τ is used, one can set τ 1 = 0 in the above formulae and write r 0 and τ instead of r 0 0 and τ 0 . 
evolutionary origins of the sars-cov-2 sarbecovirus lineage responsible for the covid-19 pandemic changes in contact patterns shape the dynamics of the covid-19 outbreak in china effect of non-pharmaceutical interventions to contain covid-19 in china first antibody surveys draw fire for quality, bias coronavirus disease 2019 (covid-19): a literature review identification of burden hotspots and risk factors for cholera in india: an observational study emergencies preparedness, response who, zika virus infection -india, emergencies preparedness, response strategy for covid19 testing in india global covid-19 case fatality rates ministry of health and family welfare, government of india, addressing social stigma associated with covid-19 undated advisory nationwide mortality studies to quantify causes of death: relevant lessons form india's million death study amdavad municipal corporation, covid-19 website greater chennai corporation, official twitter page collection of press releases by praveen jadiya, chief medical and health officer municipal commission of greater mumbai, health department, official handle district information office, pune, official twitter account covid-19 india api estimates of the severity of coronavirus disease 2019: a model based analysis inferring epidemic parameters for covid-19 from fatality counts in mumbai estimating the asymptomatic proportion of coronavirus disease 2019 (covid-19) cases on board the diamond princess cruise ship estimation of the asymptomatic ratio of novel coronavirus infections (covid-19) coronavirus: surveillance is the key, kerala shows the way singapore univ. of tech. and design, data driven innovation lab, predictive monitoring of covid-19 coronavirus (covid-19) cases our world in data in this appendix the unit of time will be taken to be the inverse of the mean rate of fatality of the infected. in these units, r 0 is the average number of new infections caused by an infected person. r 0 depends on the infectivity of the virus, as well as an average degree of the contact network. as a result, it may be affect by public health policies, such as a lock down. say a policy measure has a time scale is t . due to this, r 0 may become time-dependent, and one may write a taylor series expansionone may introduce this into a typical epidemic model equation, to obtain di dt = (r 0 − 1)i, which gives log i(t) i(0) = t r 0 0 − 1 + key: cord-018792-oqwbmyft authors: ammon, andrea; sasse, julia; riedmann, klaus title: early disease management strategies in case of a smallpox outbreak date: 2007 journal: poxviruses doi: 10.1007/978-3-7643-7557-7_20 sha: doc_id: 18792 cord_uid: oqwbmyft as a consequence of the threat of smallpox being potentially used as a means of bioterrorism, many countries have developed preparedness plans for smallpox in the past few years. this chapter summarizes some of the most important issues for the management of smallpox. usually, the strategy for the management of clinical cases of poxviruses includes the early detection of cases, rapid laboratory diagnosis, an assessment of the risk of further spread and containment measures. for the early detection, different systems are being tested to identify suspected cases before a diagnosis is confirmed (e.g., syndromic surveillance). also it is necessary to provide special training on the disease pattern, including differential diagnosis, to clinicians and practitioners. if a suspected case has been identified, rapid diagnostic tests are required. 
in addition to the national and international notifications based on given case definitions, certain measures are necessary to allow an initial risk assessment of the epidemic development. for a rapid risk assessment, the investigations should follow the algorithms of epidemiological outbreak investigation such as the tracing and identification of exposed contacts and the sources of infection. further decisions have to be taken on the basis of a continuous risk assessment. countermeasures can be divided into medical and non-medical ones. the choice of an adequate vaccination strategy as a medical countermeasure for the case of a re-emergence of smallpox very much depends on the epidemic scenario, and the general availability and quality of a vaccine. logistic aspects of the vaccination strategies have to be considered in preparedness planning (e.g., resources necessary for the implementation of mass vaccinations), and also the prioritization of groups to be vaccinated. in addition non-medical measures to prevent the spread of infection, such as the isolation of cases and quarantining of exposed persons (e.g., contact persons of confirmed cases) have to be foreseen. the effectiveness of other measures like prohibition of mass gatherings or closure of institutions is often assessed in the light of historical events. however, they have to be considered within today’s ethical and societal context, taking into account, in particular, the increased number of people who are immunocompromised. since our knowledge of how the virus would behave today is limited to extrapolations from historical data and is therefore imperfect, these measures are still under discussion. all relevant groups should be involved in exercises to assure the effective operation of the plan mainly regarding communication and cooperation. after the eradication of smallpox, it was possible to cease the most successful strategy against smallpox, namely vaccination. apart from rare events like the outbreaks of monkeypox in the democratic republic of congo or in the usa [1, 2] , there has been no need to think about the management of this disease anymore. however, the threat of smallpox being used as a means of bioterrorism has forced reconsideration of the need for smallpox vaccinations and other measures to manage potential cases or outbreaks of smallpox. in the past few years, many countries have developed preparedness plans for smallpox. in the following chapter we have tried to summarize some of the most important issues for the management of smallpox. a full description of all the necessary parts of the preparedness plans would go beyond the space available here. the strategy for the management of clinical cases of poxviruses (occurring sporadically or in outbreaks) usually includes the early detection of cases, rapid laboratory diagnosis, an assessment of the risk of further spread and containment measures. early detection of a first smallpox case will be crucial for a successful management of any new outbreak. the earlier anti-epidemic countermeasures are initiated, the more likely the epidemic can be controlled or prevented in time and casualties can be limited. conventional surveillance systems like epidemiological surveillance of a well-defined set of clinically suspected diseases or laboratory confirmed agents are important to monitor and control the occurrence of infectious diseases. yet, these systems usually detect outbreaks or unusual epidemic developments only with a certain time delay. 
therefore, planning considerations include concepts that identify an attack as early as possible [3] . among such systems are for example strategies to monitor the number of emergency department visits, over-the-counter medication sales or school absenteeism. also, environmental monitoring systems like air samplers, which permanently test the air for threat agents to detect a biological agent before it causes symptoms, have been suggested. since they only cover selected areas and have to be analyzed against a background noise, they do not necessarily guarantee a timely recognition of a biological threat [3] . after 11th september 2001, various models of syndromic surveillance have been established and tested in the united states for different syndromes (e.g., [4] ), but they also still need to prove their value in detecting a bioterrorist attack in a timely manner. most likely a deliberate release of smallpox would not be detected unless one or more human cases with clinical symptoms of the disease occurred. the early clinical detection of a smallpox case requires familiarity with the disease pattern. the number of the actually practicing physicians who have clinical experience with smallpox patients is decreasing, and it is therefore necessary to provide special training on the disease pattern, including differential diagnosis to clinicians and practitioners. the emergence of highly contagious diseases with high mortality and morbidity rates pose an immediate threat to public health and ask for a real time detection of the onset. as a separate chapter in this book describes poxvirus diagnostics, we will not go into specific diagnostic techniques. a very important issue is the necessity to confirm any suspicion of smallpox as fast as possible to avoid false alarms with far-reaching consequences. to ensure the safety of staff involved in taking samples and performing the diagnostics, good cooperation and agreed procedures between health authorities, clinicians and laboratory staff are required. electron microscopy and nucleic acid detection are the fastest methods and can give results within 24 h. for culturing the virus, biosafety level 4 facilities are required. an initial suspected smallpox case triggers various notifications according to the requirements of national and international health legislation and regulations. furthermore, if a deliberate release of the virus seems possible, 1 an actual threat to the affected state has to be presumed. in this case, disaster management and law enforcement agencies will assist the responsible health authorities to guarantee a comprehensive management in case of a confirmation and the likely spread of the disease. epidemiological and criminal investigation should be coordinated. in addition to the national and international notifications based on given case definitions, certain measures are necessary to allow an initial risk assessment of the epidemic development. these measures should follow the algorithms of epidemiological outbreak investigation, such as the tracing and identification of exposed contacts and the sources of infection. further decisions have to be taken on the basis of a continuous risk assessment. immediate anti-epidemic measures are of considerable importance. a permanent monitoring of the epidemic is necessary to guarantee that the effectiveness of the measures taken can be accurately evaluated, which in turn can lead to new measures or to a modification of the actual strategy. 
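both the early-warning systems mentioned earlier and the continuous monitoring described above come down to flagging unusual changes in daily counts. the fragment below is only a sketch of such an aberration-detection rule (a short moving baseline with a guard band, loosely in the spirit of the ears c2 algorithm); the function name, window lengths and threshold are assumptions chosen for illustration and are not taken from the systems cited in the text.

```python
import numpy as np

def count_alerts(counts, baseline_days=7, lag=2, threshold=3.0):
    """flag days whose count exceeds the baseline mean by more than
    `threshold` standard deviations; the baseline is a short moving
    window separated from the current day by a guard band (`lag`)."""
    counts = np.asarray(counts, dtype=float)
    alerts = []
    for t in range(baseline_days + lag, len(counts)):
        window = counts[t - baseline_days - lag : t - lag]
        mu = window.mean()
        sigma = max(window.std(ddof=1), 1.0)   # avoid a degenerate baseline
        if counts[t] > mu + threshold * sigma:
            alerts.append(t)
    return alerts

# a flat background of roughly 20 visits per day with a jump at the end
rng = np.random.default_rng(1)
series = list(rng.poisson(20, size=30)) + [35, 45, 60]
print(count_alerts(series))   # indices of the days that would raise an alert
```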
the following target groups for intervention measures can be distinguished: smallpox patients must be transferred immediately to a hospital with an isolation unit for further treatment. if no adequate infrastructure is available, isolation standards should be followed as well as possible (for requirements for isolation and isolation facilities see tab. 1). most important is the vaccination of the contact persons as soon as possible within the first 4 days after exposure and their isolation and observation either at home or in hospital. contraindications, e.g., history of severe eczema or immunodeficiency have to be weighed against the risk of disease. the treatment of complications resulting from vaccination must be also taken into account. even after a deliberate release, it is rather unlikely that a major epidemic or pandemic will occur if the appropriate countermeasures are taken in time. in the event of a smallpox outbreak the population can be protected by the prompt implementation of a vaccination campaign adapted to the epidemic realities. due to the historical experience, a second eradication of the smallpox disease is possible on the basis of the known eradication measures. the bigger challenge will be the identification and elimination of the sources of the intentional release. furthermore, the spread of a smallpox epidemic can be counteracted by limiting access to public facilities and events and by restricting freedom of movement. in addition, recommending appropriate protective measures and risk avoidance behavior to the population will be helpful. it is most important that all the measures taken are communicated to the public according to best practice of a consistent risk communication. the general public has to be given consistent information adapted to target groups and the situation via the available media. information of general relevance can be broadcast nationwide by television, for example, whereas information of regional or local relevance can be transmitted via other media (radio, local newspapers, cars with loudspeakers, leaflets, etc.). the information to be disseminated will include recommendations for protective measures as well as the announcement of restrictions on entry to events and facilities. the protection of the non-infected population will necessitate quarantine measures for suspect cases. as viruses do not recognize national borders, international cooperation is also of decisive importance. this may include technical and personnel support as well as the exchange and coordination of information but also coordinated action. in the revised international health regulations adopted by the world health assembly in 2005, smallpox is one of the four diseases (the other three are poliomyelitis due to wild-type poliovirus; human influenza caused by a new subtype; severe acute respiratory syndrome, sars) for which just a single case case is considered unusual or unexpected with potentially serious public health impact, and thus must be notified (http://www.who.int/csr/ ihr/wha58_3-en.pdf, accessed 6th may 2006). who member states have 5 years to implement the necessary systems for surveillance and response including national focal points, which have to be accessible at all times for communication with the who focal points. the choice of an adequate vaccination strategy for the case of a re-emergence of smallpox in a country very much depends on the epidemic scenario one has in mind and the general availability and quality of a vaccine. 
at the same time, logistic aspects of the vaccination strategies have to be considered in preparedness planning, i.e., the facility and personnel resources necessary for the implementation of mass vaccinations have to be determined and identified. with the exception of the very unlikely situations of an accidental release or a natural re-emergence [caused, for example, by mutants of orthopoxviruses (camel-or monkeypox)], the only realistic scenario for a re-emergence of smallpox is a deliberate release of the agent, which does not necessarily have to follow historic patterns of epidemic spread. simultaneous and multilocal outbreaks are possible and have to be included as possible scenarios for a comprehensive preparedness planning. predictive modeling of the epidemic spread has to rely entirely on historic data and is of limited value. the availability and quality of a vaccine has the most significant influence on the strategy, as there is no evidence of an effective therapy with antiviral drugs against a smallpox infection in humans. the chosen strategy will be determined by the particular epidemiological situation and consideration of the threat of further releases and the risk of secondary infections compared with the well-known adverse effects of the currently available vaccines. unlike during a natural outbreak, the threat of additional intentional releases has to be considered for a vaccination policy. various models have been developed to assist in identifying the best use of the available vaccines (e.g., [5] [6] [7] [8] ), as well as other control measures like case isolation and contact tracing or combinations thereof [9, 10] . since all these models have different assumptions for important parameters (like r 0 ), the conclusions also vary. following historical data from the last natural, in this case imported, smallpox cases in europe in the decades before and during the eradication, the first step will be -after the immediate isolation measures have been initiated -the vaccination of contacts and simultaneous ring vaccinations. there are efforts to predict the best anti-epidemic measures on the base of mathematic modeling [7, 9, [11] [12] [13] [14] [15] . such models are fitted in such a way that they can reproduce historical outbreaks very well and try to predict the effects of different anti-epidemic measures on the basis of historical data. the quality and predictive value are limited and depend very much on the inclusion of a sufficient number of necessary and correct parameters. a slight change in a parameter can lead to exaggerated effects that do not follow the common sense experience. a lot of the decisive factors can only be roughly estimated, like transmission rate, population immunity or the effectiveness of a post-exposure vaccination. furthermore, as the re-emergence of smallpox is most likely to result from a deliberate release and multiple geographically unlinked outbreaks may be possible, this historically based vaccination strategy might seem idealistic. public and political pressure and security considerations may quickly lead to the ultimate step, the mandatory vaccination of the entire population. nevertheless, this should be done after a careful risk-benefit-calculation considering the serious adverse effects of the available vaccines. vaccination priorities: first responders, other priority groups no matter which strategy is chosen the availability of vaccine is a key issue. 
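the sensitivity to parameter choices noted above can be made tangible with a toy calculation before turning to the question of vaccine stockpiles. the branching-process sketch below is purely hypothetical (it is not one of the published models cited in the text), and the values for r0, post-exposure efficacy and the traced fraction are illustrative assumptions only.

```python
import numpy as np

def outbreak_size(r0, post_exposure_efficacy, traced_fraction, rng,
                  initial_cases=5, cap=2_000):
    """toy branching process: each case generates poisson(r0) contacts; a
    contact that is traced and vaccinated in time escapes infection with
    probability `post_exposure_efficacy`.  all names and values are
    hypothetical placeholders."""
    total = active = initial_cases
    while active and total < cap:
        new = 0
        for _ in range(active):
            for _ in range(rng.poisson(r0)):
                protected = (rng.random() < traced_fraction
                             and rng.random() < post_exposure_efficacy)
                if not protected:
                    new += 1
        total += new
        active = new
    return total

rng = np.random.default_rng(7)
for traced in (0.5, 0.7, 0.9):
    sizes = [outbreak_size(3.0, 0.9, traced, rng) for _ in range(100)]
    print(f"traced {traced:.0%}: median outbreak size {int(np.median(sizes))}")
```

even in this toy setting, moving the traced fraction by a few tens of percentage points switches the outcome from a capped, large outbreak to a handful of cases, which is the kind of exaggerated response to small parameter changes cautioned about above.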
most industrialized countries have acquired a certain stockpile of first or second generation vaccine. the sizes of the stockpiles vary from country to country. some countries have sufficient stockpiles to cover the whole population, some do not. therefore, priority population groups have to be identified for vaccination -in accordance with epidemiological, political, ethical and societal necessities and based on a public consensus. as long as there are no smallpox cases worldwide, obligatory prophylactic vaccinations especially of entire populations are not necessary. the re-emergence of smallpox has a limited likelihood, whereas the certainty of serious adverse effects due to vaccination is a proven fact. nevertheless, it can be necessary if there is an increased likelihood of occupational expo-sure. 2 prophylactic vaccination may seem useful for the staff of special isolation units, which are most likely to treat the first smallpox cases or of those laboratories designated for confirmatory diagnostics. in this phase also members of infectious disease task forces (interdisciplinary teams on any administrative level for the initial risk assessment and subsequent investigations) may be offered vaccination on a voluntary basis. as soon as a first smallpox case is confirmed worldwide, and a real threat and exposure seem more likely, the offer of voluntary vaccination to all professional groups who are required to keep the necessary public services running during a smallpox epidemic has to be considered. these groups include mainly medical staff, fire brigades and disaster relief organizations, red cross etc., but also people working in critical infrastructures (power and water supply, public transportation and communication) or for public security and order or on the administration or political level, i.e. those population groups who are relevant for the maintenance of public life. once a smallpox case is confirmed, vaccination strategies should focus on the necessities of an anti-epidemic management. first of all the population being affected or at risk must be vaccinated. if the epidemic spread cannot be controlled, mandatory mass vaccinations will be necessary. smallpox can be spread by droplets and by direct or indirect contact with the pustules on the skin. this assumes that all primary contact persons of a confirmed smallpox case (see tab. 2) may be infected and must be identified as soon as possible. the risk of infection for persons with an extended contact time or a close contact distance is much higher than for persons with a short contact time. according to historical data, the highest risk of infection exists for household members or hospital contacts. the european outbreaks between 1950 and 1971 showed that 55% of the infected persons contracted smallpox at a hospital, 20% in the family, 14% at their working place or school and 3% of the infected persons were working in a laundry, while 8% were unidentified contacts. none of the 945 smallpox cases in europe since the second world war contracted it on an airplane, a train or a bus [16] . yet, under special conditions, an airborne transmission may be possible. in a hospital in meschede, germany, patients and nurses from the two floors above the floor where a smallpox patient was treated were infected by air circulation [17, 18] . based on publications on smallpox transmissions, table 2 describes the risks of infection. 
it might be impossible to control an outbreak of smallpox using only vaccination, therefore isolation of cases and monitoring of the contacts may be necessary in addition [9, 19] . quarantine in an isolation ward for all persons who were exposed seems to be the safest way, but it has some limitations, like the quantity of qualified isolation wards, the supply of the population with food, drinking water etc. and the cooperation of the population. therefore, it will be helpful to adjust the anti-epidemic measures to the likelihood of developing the disease (tab. 1) [20] . the isolation concept should be adapted to the epidemic situation, the requirements on effective isolation and the expected number of contact persons. the personnel in all hospitals/facilities must be vaccinated and trained, personal protective equipment (including gloves, masks, goggles, gowns) and means to follow the hygiene measures must be available. if prihigh risk -persons who are living in the same household with the patient and persons with a similar risk of infection (members of the family and household contacts, etc.) -persons who have had "face-to-face-contact" with a sick person, which includes all persons, who have been so close to the patient that they could be infected by droplets, or who have touched the efflorescence of the skin [e.g., friends or neighbors who have taken care of the patient, physicians who have been consulted before the hospital, hospital staff (medical doctors, nurses, cleaning staff), persons in a public traffic system with direct contact, i.e., less than ca. 2 m to the infectious case of smallpox, etc.] -persons who have been longer in the same (confined) room with a patient (e.g., work colleagues, transport staff of the ambulance, etc.) -persons who have direct contact with the dead body of a smallpox patient (e.g., undertaker, pathologist, priest, etc.) -persons who have worked with infectious samples of a smallpox patient without appropriate protection -persons who have touched scabs of a smallpox patients without appropriate protection -persons who have had direct, non-protected contact with the personal clothes, bed linen or other personal belongings, materials that a smallpox patient wore or used after the onset of fever medium risk -persons who are in the same building as a smallpox case, if this building has a ventilation system, air conditioning or comparable installation systems that circulate the air between different rooms in the building -persons who have traveled in the same compartment of a public transportation system or airplane with a ventilation system, air conditioning or comparable installation systems to circulate the air low risk -persons with a short and/or not close contact to an infectious smallpox case (e.g., a short stay in the same room, or a longer stay in the same building without ventilation system, air conditioning or comparable installation systems to circulate the air; sharing the same public transportation system without ventilation system, air conditioning or comparable installation to circulate the air; distance to the index case > 2 m) -medical staff, if they have used appropriate personal protection equipment mary contacts develop fever and other typical symptoms of smallpox, their transfer to a hospital with isolation ward is immediately necessary. 
for contact persons with a low risk of infection and a timely, successful vaccination, segregation at home seems to be appropriate as long as they have not developed fever, all household contacts have been vaccinated and the local health authority has the capacity to observe them daily. nevertheless, it must be kept in mind that a vaccination, even when administered in time, does not yield 100% protection. according to historical data, the risk of infection for vaccinated household contacts of a smallpox patient in the past was 3.7% [21] , in comparison to 65% of unvaccinated household contacts. these data did not give any information about when the contact persons had had their last vaccination. vaccination should also be offered to secondary contact persons. they must be registered because they will become primary contacts themselves if the originally primary contact develops the disease. since transmission of smallpox is favored by close distance between persons, so-called "social distancing" measures are considered as further intervention measures to stop the spread. whereas the isolation of cases or segregation of exposed persons (contacts) is not under debate, the effectiveness of other measures like prohibition of mass gatherings, closure of institutions or even curfews are often assessed in the light of historical events. however, they should be considered within today's ethical and societal context, taking into account differences in the society, in travel behavior, and the increased recognition of contraindications to vaccination [10] . also, the number of people who are immunocompromised (due to hiv, chemotherapy, transplantations etc.) has increased [10] . these measures are still under discussion, since we have limited knowledge of how the virus would behave today. according to the vaccination strategy described above, the majority of vaccinations would be carried out in the case of the real event. therefore, elaborate preparations have to be implemented in the pre-event phase. smallpox vaccine and bifurcated needles have to be procured and stockpiled. some governments have a national stockpile of smallpox vaccines, but not all of them have a stockpile covering the need of their entire population. therefore, multi-lateral support in the case of an event has to be assured in time. within the european union, a task force on bioterrorism was set up in may 2002 with the main objective of implementing the health security program [22] . the world health organization (who) has to convince some states to contribute to an international stockpile at who level. for national stockpiles, the logistics for storage, transport and distribution have to be determined in advance as well. to allow immediate mass vaccinations, the required infrastructure, such as facilities or personnel, has to be identified and the latter informed and trained in time. the entire process should be tested and practiced in simulation exercises. 
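the historical household attack rates quoted above (3.7% among vaccinated contacts versus 65% among unvaccinated contacts) can be turned into a crude estimate of post-exposure protection. the short calculation below is only illustrative; as the text points out, the underlying data say nothing about the time since the contacts' last vaccination, so the figure should not be read as a proper effectiveness estimate.

```python
# secondary attack rates among household contacts, as quoted above
ar_vaccinated = 0.037
ar_unvaccinated = 0.65

relative_risk = ar_vaccinated / ar_unvaccinated
crude_effectiveness = 1.0 - relative_risk      # 1 - relative risk

print(f"relative risk:        {relative_risk:.3f}")      # ~0.057
print(f"crude effectiveness:  {crude_effectiveness:.1%}")  # ~94.3%
```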
when choosing vaccination facilities important aspects have to be considered to enable the vaccination of a large number of people in a very short time, such as: -number and size of vaccination facilities according to population density -transport connections -easy access, also for handicapped people -water and energy supply -toilets -possibility of separate treatment of suspected cases -availability of rooms for personnel, first aid, treatment -phone -furniture material for documentation of the vaccinations and checking of contraindications (questionnaires, vaccination list/card) as well as information for the public has to be produced in advance and distributed to the authorities. they take care of the implementation of preparedness measures on the regional and local level. other tasks have to be achieved or initiated in the pre-event phase as well: vaccination of the vaccinators, training of the necessary staff and provision of the material needed at the vaccination facilities. a survey of over 14 million vaccinations in the usa in 1968 showed that per million vaccinations there were 75 serious adverse effects, including 1 death [23] . some of the known adverse effects that may arise from smallpox vaccination are post-vaccination encephalitis, progressive vaccinia, eczema vaccinatum or generalized vaccinia. therefore, the production of modern and more compliant vaccines is under consideration. a way to minimize the adverse events of smallpox vaccination might be the use of modified vaccinia virus ankara (mva), which was developed in the 1970s by more than 500 passages in chicken embryo fibroblasts [24] . however, smallpox had been eradicated before the efficiency of the protective effect of mva could be tested. experiments with animals indi-cate that there may be fewer complications after vaccination with mva [25] [26] [27] , and show also that mva provokes a high antibody titer and a high concentration of ifn--positive cells. some data show that mva-vaccinated animals are protected against smallpox infection [26, 28] , but other results allow the interpretation that a mva-vaccination alone can not guarantee a full protection against infection [25] . mva might be a good candidate for a pre-immunization [25] or for persons with strong contraindications [26, 29] . other replication-deficient vacv strains have also been developed for immunization [14, [30] [31] [32] . some mva strains currently under development require a higher virus titer as they do not replicate in the human body. vacv strains have the potential to inducing post vaccination encephalitis. derived from historical data with 1-2 cases per million, the vaccination of the entire population of a country like germany would lead to 80-160 cases of severest adverse effects. finally, a lot of research is being performed to develop new vaccines. experiments on a dna basis are very promising, even if these vaccines do not fully protect from infection yet [33, 34] . all the vaccines under development are still in the pre-clinical state. usually, vaccination strategies are chosen on the basis of scientific evidence and national health legislation. for the special case of smallpox, the only vaccine which has proven its efficiency decades ago is known to produce serious side effects. 
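as a rough check of the figures quoted above, the expected number of severe events in a nationwide campaign follows directly from the per-million rates. the snippet below simply restates the numbers given in the text for a population of the order of germany's; it is a back-of-envelope illustration, not a risk assessment.

```python
population = 80_000_000          # population of the order quoted in the text

# per-million rates quoted above (1968 us survey; encephalitis estimates)
serious_per_million = 75
deaths_per_million = 1
encephalitis_per_million = (1, 2)

millions = population / 1_000_000
print("expected serious adverse events:", int(serious_per_million * millions))
print("expected vaccine-related deaths:", int(deaths_per_million * millions))
print("expected encephalitis cases:",
      tuple(int(r * millions) for r in encephalitis_per_million))
# -> 6000 serious events, 80 deaths, (80, 160) encephalitis cases
```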
therefore, legal regulations for the financial compensation of vaccination damages have to be agreed upon and guaranteed before the implementation of vaccinations, no matter if they are being recommended for occupational safety reasons in the pre-event phase or as antiepidemic measure in the case of an event. more than 20 years after the eradication of smallpox only very few health professionals have practical experience with the management of this disease. therefore, all relevant professions involved in the management of a smallpox outbreak or epidemic have to be trained on the disease pattern and its specific consequences on their professional tasks. training must include the professional implementation of sampling techniques as well as safe transport, which have to be arranged in advance to avoid any unnecessary delay or hazard from improper handling or packaging. the laboratories selected for smallpox diagnostics have to guarantee that this can be done both rapidly and with assured quality. these labora-tories have to immediately report a suspected or confirmed 3 laboratory diagnosis to the appropriate authorities. public health officers, clinicians and practitioners for example have to update their knowledge on the clinical picture to guarantee an early recognition of the disease and also get familiar with the treatment and therapy of smallpox cases. laboratory personnel have to be trained in the diagnostics of smallpox on the basis of the standard operating procedures. the validity of the diagnosis is improved by regular participation in a quality assurance system. in general, if preparedness plans exist, they have to be evaluated among all the relevant groups by exercises to assure the effective operation of the plan mainly in the field of communication and cooperation. public health services might test the implementation of mass vaccinations or the reporting systems for a smallpox alert; clinicians might check the clinics' preparedness plans for cases of highly contagious diseases, ambulance services might train for the transport of highly contagious patients and all together they might check the interaction between the relevant actors aiming at a harmonization of the preparedness planning. outbreak of human monkeypox update: multistate outbreak of monkeypox -illinois advances in detecting and responding to threats from bioterrorism and emerging infectious diseases syndromic surveillance in public health practice ring vaccination and smallpox control modelling responses to a smallpox epidemic taking into account uncertainty a model for a smallpox-vaccination policy effectiveness of a postexposure vaccination for the prevention of smallpox: results of a delphi analysis a first smallpox case or first smallpox cases would need "official" confirmation in one of the two laboratories designated by who (cdc and vector) case isolation and contact tracing can prevent the spread of smallpox surveillance and control measures during smallpox outbreaks towards a containment strategy for smallpox bioterror: an individual-based computational approach containing bioterrorist smallpox transmission potential of smallpox in contemporary populations modeling a safer smallpox vaccination regimen, for human immunodeficiency virus type 1-infected patients. 
in: immunocompromised macaques modeling potential responses to smallpox as a bioterrorist weapon smallpox in europe, 1950-1971 a different view of smallpox and vaccination an airborne outbreak of smallpox in a german hospital and its significance with respect to other recent outbreaks in europe the recent outbreak of smallpox in meschede, west germany begriffsbestimmungen seuchenhygienisch relevanter maßnahmen und bezeichnungen smallpox and its eradication the european commission's task force on bioterrorism complications of smallpox vaccination, 1968/national surveillance in the united states der pockenimpfstamm mva: marker, genetische struktur, erfahrungen mit der parenteralen schutzimpfung und verhalten im abwehrgeschwächten organismus [the smallpox vaccinnation strain mva: marker, genetic structure, experience gained with the parenteral vaccination and immunogenicity of a highly attenuated mva smallpox vaccine and protection against monkeypox highly attenuated smallpox vaccine protects mice with and without immune deficiencies against pathogenic vaccinia virus challenge modified vaccinia virus ankara protects macaques against respiratory challenge with monkeypox virus shared modes of protection against poxvirus infection by attenuated and conventional smallpox vaccine viruses modified vaccinia ankara; potential as an alternative smallpox vaccine immunogenicity and safety of defective vaccinia virus lister: comparison with modified vaccinia virus ankara induction of potent humoral and cell-mediated immune responses by attenuated vaccinia virus vectors with deleted serpin genes genetically stable and fully effective smallpox vaccine strain constructed from highly attenuated vaccinia lc16m8 smallpox vaccines: looking beyond the next generation smallpox dna vaccine protects nonhuman primates against lethal monkeypox key: cord-244687-xmry4xj4 authors: hsieh, chung-han title: on control of epidemics with application to covid-19 date: 2020-11-02 journal: nan doi: nan sha: doc_id: 244687 cord_uid: xmry4xj4 at the time of writing, the ongoing covid-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (sars-cov-2), had already resulted in more than thirty-two million cases infected and more than one million deaths worldwide. given the fact that the pandemic is still threatening health and safety, it is in the urgency to understand the covid-19 contagion process and know how it might be controlled. with this motivation in mind, in this paper, we consider a version of a stochastic discrete-time susceptible-infected-recovered-death~(sird)-based epidemiological model with two uncertainties: the uncertain rate of infected cases which are undetected or asymptomatic, and the uncertain effectiveness rate of control. our aim is to study the effect of an epidemic control policy on the uncertain model in a control-theoretic framework. we begin by providing the closed-form solutions of states in the modified sird-based model such as infected cases, susceptible cases, recovered cases, and deceased cases. then, the corresponding expected states and the technical lower and upper bounds for those states are provided as well. subsequently, we consider two epidemic control problems to be addressed: one is almost sure epidemic control problem and the other average epidemic control problem. 
having defined the two problems, our main results are a set of sufficient conditions on a class of linear control policy which assures that the epidemic is"well-controlled"; i.e., both of the infected cases and deceased cases are upper bounded uniformly and the number of infected cases converges to zero asymptotically. our numerical studies, using the historical covid-19 contagion data in the united states, suggest that our appealingly simple model and control framework can provide a reasonable epidemic control performance compared to the ongoing pandemic situation. at the time of writing, according to the world health organization, the ongoing covid-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (sars-cov-2), had already resulted in more than thirty-two million cases infected and more than one million deaths worldwide; see [19] , and even worse, the pandemic seems to be "showing no clear signs of slowing down" in many countries. to this end, governments are persistently striving to control and slow down the spread of covid-19; e.g., minimizing the contact rate by consulting non-pharmaceutical intervention mechanisms such as lock-downs and social distancing or relying on some pharmaceutical interventions such as enhancing medical treatments, and developing possible cure or remedies. with this motivation in mind, the focal point for this paper, as a preliminary work, is to understand how the disease spreads and how it would be possibly controlled. the remainder of the paper is organized as follows: in section 2 we describe the preliminaries involving the details of our modified sird-based epidemiological model. then two problem formulations for epidemic control: almost sure epidemic control problem and average epidemic control problem are stated. subsequently, in sections 3, detailed analyses on our epidemiological model are provided. in sections 4 and 5, using a linear epidemic control policy, we provide sufficient conditions on feedback gain so that the epidemic can be properly controlled in either expected value sense or almost sure sense. next, in section 6, we describe a simple model parameter estimation approach and illustrate its use via numerical examples involving historical covid-19 data from the united states, the country among those with a relatively high confirmed cases in the middle of 2020. the control performance is also discussed. finally, in section 7, we provide some concluding remarks and promising directions for future research. in the sequel, we take k to be the index indicating the day number and u(k) ≥ 0 to be the corresponding abstract epidemic control policy implemented by governments. 1 we view that the large values of u(k) correspond to enhanced medical treatments or stringent non-pharmaceutical interventions such as mandating social distancing with wearing the masks. the smaller values of u(k) might correspond to diminished medical treatments or relaxation of the rules. in the analysis to follow, we assume that the epidemic control policy u(k) is causal ; i.e., it only depends on the past and current information that is available at hand-not the future. for k = 0, 1, . . . , let s(k) be the number of susceptible cases at the kth day, i(k) be the infected cases at the kth day and r(k) be the recovered cases at the kth day, and d(k) be the deceased cases at the kth day. take n (k) be the underlying human population at the kth day satisfying n (k) = s(k)+i(k)+r(k). 
as mentioned in section 1, many infected individuals may go undetected or asymptomatic [16] and the effectiveness of a control policy u(k) may be uncertain at the time, we introduce two uncertainty quantities: the uncertain rate of infected cases which are undetected or asymptomatic, call it δ(k), and the uncertain effectiveness rate of control, call it v(k), in our model to follow. now, with initial values s(0) := n (0) := s 0 , i(0) := i 0 > 0, and r(0) = d(0) = 0, we consider a discrete-time stochastic epidemiological model with uncertainties described by where d i (k) is the death rate for infected cases satisfying d i (k) ∈ [0, d max ] with d max < 1 for all k. we assume that the sequence d i (k) are independent and identically distributed (i.i.d.) random variables. in addition, we assume that both of δ(k) and v(k) are i.i.d. random variables with arbitrary distribution but known bounds 0 ≤ δ(k) ≤ δ max and 0 < v min ≤ v(k) ≤ v max ≤ 1, respectively. 2 if v(k) = 1, it corresponds to the case where the control policy is extremely effective. on the other hand, if v(k) ≈ 0, it corresponds to an extremely ineffective case; i.e., the people are not disciplined and may be unwilling to follow the medical advise for masks or social distancing; see also a discussion in section 3.5. in the sequel, we assume further that i 0 δ max < s 0 ; and v(k), δ(k), and d i (k) are mutually independent. for the sake of notational convenience, we also take the shorthand notations δ : remark on the epidemiology model. as a preliminary work, while we allow uncertainties on both infected cases i(k) and control u(k), our epidemiological model 1 assumes that u(·) can interact with the infected cases instantaneously without any time delay. the action with delay is left for future work; see also section 7. it is also worth mentioning that, if one considers a linear control policy u(k) = ki(k) with constant k ≥ 0, then a basic reproduction ratio of sorts, call it r 0 (·), can be obtained as follows: for k ≥ 0, where β(k) := δ(k)(s(k) + i(k)))/s(k) is the infectious rate and γ(k) := v(k)k is the recovery rate. if r 0 (·) > 1, the number of infected cases is expected to increase; if r 0 (·) < 1, then the infected cases decrease. finally, we should mention that, in terms of the terminology in systems and control theory, the epidemiological dynamics described by equation (1) is indeed an uncertain linear time-varying system. this observation is useful in the following sections to follow. in this subsection, we introduce the first epidemic control problem, which we call the almost sure epidemic control problem. that is, given the modified sird model 1, we seek a control policy u(·) which assures the following conditions hold: (i) the ratio of infected cases converge to zero with probability one; i.e., remarks. the results related to this almost sure epidemic control problem is discussed in section 4 when a linear control policy of the form u(k) = ki(k) with pure gain k ≥ 0 for all k ≥ 0 is applied. while almost sure epidemic control policy, if exists, can be a good candidate to mitigate the pandemic. however, in practice, some potential issues remain. for example, in some cases, the cost for implementing an almost sure epidemic control policy may be still too expensive in practice; see remark 4.1.2 in section 4. to hedge this issue, we now introduce our second epidemic control problem which we call the average epidemic control problem aimed at controlling the "expected" infected cases and "expected" deceased cases. 
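a minimal simulation sketch may help fix ideas about the uncertain sird dynamics and the linear policy u(k) = k i(k) studied below. the update rules in the code are reconstructed from the lemmas that follow, since the display equations of model (1) do not render in this copy, and every numerical value (the bounds on δ, d_i and v, the gain, the horizon and the initial condition) is an illustrative assumption rather than a value used in the paper.

```python
import numpy as np

def simulate_sird(days, s0, i0, gain, rng,
                  delta_max=0.2, d_max=0.01, v_min=0.1, v_max=0.2):
    """sketch of the uncertain discrete-time sird dynamics with the linear
    policy u(k) = gain * i(k).  update rules reconstructed from the lemmas
    of the paper; all parameter bounds here are assumptions.

        s(k+1) = s(k) - delta(k) * i(k)
        i(k+1) = (1 + delta(k) - d_i(k)) * i(k) - v(k) * u(k)
        r(k+1) = r(k) + v(k) * u(k)
        d(k+1) = d(k) + d_i(k) * i(k)
    """
    s, i, r, d = float(s0), float(i0), 0.0, 0.0
    path = [(s, i, r, d)]
    for _ in range(days):
        delta = rng.uniform(0.0, delta_max)   # undetected / asymptomatic rate
        d_i = rng.uniform(0.0, d_max)         # death rate among the infected
        v = rng.uniform(v_min, v_max)         # effectiveness of the control
        u = gain * i                          # linear epidemic control policy
        s, i, r, d = (s - delta * i,
                      (1.0 + delta - d_i) * i - v * u,
                      r + v * u,
                      d + d_i * i)
        path.append((s, i, r, d))
    return np.array(path)

rng = np.random.default_rng(0)
# a gain inside the window (delta_max / v_min, (1 - d_max) / v_max) that the
# paper identifies later keeps i(k) nonnegative and drives it toward zero
path = simulate_sird(days=120, s0=1_000_000, i0=100.0, gain=3.0, rng=rng)
print("final infected:", round(path[-1, 1], 2),
      " total deaths:", round(path[-1, 3], 2))
```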
to address the issue raised in previous subsection, we consider the second epidemic control problem which we call the average epidemic control problem aimed at controlling the "expected" infected cases and "expected" deceased cases. rigorously speaking, given the modified sird model 1, we seek a control policy u(·) which assures the following conditions hold: remark. similar to the previous remark, the results related to this average epidemic control problem is discussed in section 5 when a linear control policy of the form u(k) = ki(k) with k ≥ 0 for all k ≥ 0 is applied. in this section, to understand the contagion process and the evolution of the epidemic, we assume that there exists a control policy u(·) which assures that infected cases i(k), susceptible cases s(k), recovered cases r(k) and deceased cases d(k) are all nonnegative for all k with probability one. 3 the analytic expression for infected cases, susceptible cases, recovered cases, and deceased cases are provided. the results obtained in this section are useful for the subsequent sections to follow. in the sequel, we use a shorthand notation for integer k ≥ k 0 ≥ 0. we mention here that φ(k, k 0 ) is a state transition function for infected cases from day k 0 to k. the reader is referred to [7] for detailed discussion on this topic. we may sometimes write φ i (·, ·) instead of φ(·, ·) to emphasize that the function is related to the infected cases. the following lemma characterizes some useful properties of the function. proof. to prove part (i), we let f (k) := 1 + δ(k) − d i (k). since δ(k) ∈ [0, δ max ] and d i (k) ≥ 0 for all k ≥ k 0 ≥ 0, we have 1 − d max ≤ f (k) ≤ 1 + δ max for all k with probability one. note that the upper and lower bounds are nonnegative. hence, a straightforward multiplication leads to the desired inequality and the proof of part (i) is complete. to prove part (ii), with the aid of the fact that δ(k) and d i (k) are i.i.d. in k and are mutually independent, we obtain having proved lemma 3.1, we are now ready to characterize the infected cases i(k). given i 0 > 0, the infected cases at the kth day is described by for k ≥ 1. moreover, the expected infected cases satisfy proof. since the infected cases dynamics (1) is a linear time-varying system, the solution i(k) and its corresponding proof are well-established; hence we omitted here and refer the reader to any standard textbook in system theory; e.g., see [7] . to complete the proof, we must show that the expected infected cases can be characterized by the form described in the lemma. using lemma 3.1 and the fact that v(k) ≥ v min and are i.i.d., we observe that note that u(i), as assumed in section 2, is causal; i.e., it only depends on the information available up to the ith day. in addition, according to equation (2) , the state transition function φ i (k, i + 1) = as seen later in the next section, if one adopts the linear feedback policy of the form u(k) = ki(k), then the solution i(k) described in lemma 3.2 can be greatly simplified. the following lemma gives a closed-form expression for the susceptible cases s(k). with s(0) = s 0 > 0, the number of susceptible cases at the kth day is given by using lemma 3.2, we obtain in this subsection, we characterize the solution of recovered cases and some of its useful properties. with r(0) = 0, the recovered cases at the kth day is given by for k ≥ 1. proof. while the proof is simple, for the sake of completion, we provide a full proof here. 
we first recall that r(0) = 0 by assumption on initial condition. to complete the proof, we proceed a proof by induction. begin by noting that for k = 1, the recursion on r(k) tells us that to complete the proof, we now show the desired expected recovered cases and its bounds. for which is desired. it is readily seen that r(k) is increasing in k since u, v are nonnegative. the number of deceased cases and its expectation are provided in this subsection. for k ≥ 1. moreover, the expected number of death is bounded by proof. with d(0) = 0, we begin by recalling that the deaths dynamics (1) is given by d(k + 1) = d(k) + d i (k)i(k) for k ≥ 0. an almost identical proof as seen in lemma 3.2 leads to to complete the proof, we take expectation on equation 5 and, with the linearity of expectation and the fact 0 ≤ d i (k) ≤ d max , the desired inequality follows immediately. if there are low or no medical supports, i.e., u(k) ≈ 0, or the medical treatments are lacking or extremely ineffective, due to the reasons such as the people are not disciplined or unwilling to follow the medical advise to wearing masks or social distancing; i.e., v(k) ≈ 0, then it is readily verified that the susceptible cases, infected cases, recovered cases, and deceased cases become consistent with the intuition, the equation (6) above tells us that the recovered cases r(k) are close to zero approximately. on the other hand, the infected cases i(k) and the deceased cases d(k) are both increasing monotonically; i.e., an "outbreak" is seen. to mitigate the pandemic, in this paper, we consider a class of linear epidemic control policy of the form u(k) := ki(k) for k ≥ 0 where k ≥ 0 is the feedback gain which represents the degree of the epidemic control effort on providing medical treatment or mandating the non-pharmaceutical interventions. in the sequel, we sometimes call the linear epidemic control policy above as linear feedback policy or just linear policy for the sake of brevity. as mentioned in section 2, our objective of this section is to address the almost sure epidemic control problem using the linear control policy. we begin with discussing the control of infected cases in the almost sure sense. the following lemma is useful for deriving one of our main results: theorem 4.2. . then the infected cases dynamics becomes therefore, the product of f (i) is also nonnegative which implies that i(k) ≥ 0 for all k with probability one. to complete the proof of part (i), we note that for any k ∈ [0, 1−dmax vmax ], the infected cases satisfy to prove part (ii), an almost identical proof as seen in part (i) applies. that is, if k < (1 − d max )/v max , then i(k) > 0 for all k with probability one. according to part (ii) of the lemma above, we see that if k < (1 − d max )/v max , then the infected cases i(k) is strictly positive for all k with probability one. said another way, the infected cases can not be eradicated at any time. however, as seen in the next theorem, the ratio of the infected cases can go down to zero asymptotically for some k. then we have (i) lim k→∞ i(k)/i 0 = 0 with probability one. (ii) the controlled infected cases are upper bounded uniformly; i.e., i(k) < i 0 for all k with probability one. if k = δmax vmin , then i(k) ≤ i 0 . proof. to prove (i), we observe that as defined in the proof of lemma 4.1. with the assumed inequalities and feedback gain δmax vmin < k < 1−dmax vmax , it is readily verified that 0 < f (i) < 1 for all i = 0, 1, . . . , k − 1. 
to show the limit of the ratio i(k)/i 0 is zero with probability one, we write we also note that the logarithmic function above is well-defined for k within the assumed range. hence, now, since f (i) are i.i.d., by the strong law of large number (slln); e.g., see [10] , we have as k → ∞ with probability one. now, using the fact that the logarithmic function is strictly concave, jensen's inequality yields where the last inequality hold by the monotonicity of logarithmic function. hence, log(1+δ−kv) ≤ 0 for any k ≥ δ max /v min ≥ δ/v. therefore, e[log f (0)] < 0, which implies that lim k→∞ i(k)/i 0 → 0 and the proof for part (i) is complete. to prove (ii), we must show that the i(k) is upper bounded by the initial infected cases i 0 . as seen in the proof of part (i) of lemma 4.1, we have, for all k, where the last inequality holds by using the assumed fact that k ≥ δmax vmin . in practice, since the medical resource is limited, it is natural to put additional constraints on the feedback gain such as k ∈ [0, l] with l ≥ 1 being the constant which corresponds to the maximum allowable medical resources to be used. hence, other than the sufficient condition 7, for any δ max < vmin(1−dmax) vmax , one might require k to satisfy however, one should note that the set k might be an empty set. to see this, we consider l = 1, v min = v max = 0.01 and δ max = 0.1 and d max = 0.01. then it is readily verified that the condition δ max < vmin(1−dmax) vmax but k = (10, 99)∩[0, 1] = ∅. if k = ∅, it tells us that the infected cases does not converge to zero asymptotically and the epidemic is uncontrollable. if this is the case, one should put whatever they can to suppress the disease; e.g., by putting k := l for possible l ≥ 1. on the other hand, given the fact that the higher feedback gain would cause a higher consumption on medical/economical resources, one should choose the lowest feedback gain which is nonzero. hence, an immediate "sub-optimal" choice would simply be as follows: if δ max ≤ v min we take k * := δ max /v min with the aim that i(k) ≤ i 0 can be guaranteed at least. on the other hand, if δ max > v min , we take k * := l. or, we can write it in a more compact way as follows: provided that δ max < vmin(1−dmax) vmax where 1 {δmax≤lvmin} and 1 {δmax>lvmin} are the indicator function; see [11] for a detailed discussion on this topic. as seen later in section 6, the k * above will be adopted to test the epidemic control performance using historical data. in this subsection, the recovered cases r(k), susceptible cases s(k), and deceased cases d(k) under linear control policy are discussed. the analysis of recovered cases is simple. we begin by recalling lemma 3.4 and write r(k) = k−1 i=0 v(i)u(i). now, by taking linear control policy u(k) = ki(k) with gain 0 ≤ k ≤ (1 − d max )/v max , we have r(k) ≥ 0 for all k with probability one and where the last inequality holds by using lemma 4.1. for k = 1, 2, . . . , ⌊s 0 /(i 0 δ max )⌋ with probability one where ⌊z⌋ is the floor function satisfying ⌊z⌋ := max{n ∈ z : n ≤ z}. proof. similar to the proof of lemma 3.3, recalling that s(k+1) = s(k)−δ(k)i(k), a straightforward inductive calculation leads to using lemma 4.1, the desired form of solution follows immediately. to complete the proof, we note that thus, via a lengthy but straightforward calculations, it is readily verified that s(k) ≥ 0 for k ≤ ⌊s 0 /(i 0 δ max )⌋ with probability one. remark. 
while the lemma above tells us that s(k) ≥ 0 holds up to k ≤ ⌊s 0 /(i 0 δ max )⌋, we should note that the initial cases s 0 = n (0) are often far larger than the denominator i 0 δ max . hence, without loss of generality, in the sequel, we deemed that s(k) ≥ 0 for sufficiently large k. the next lemma indicates that the deceased cases under the linear control policy. for all k ≥ 1 with probability one. proof. we begin by recalling that d(k) = k−1 i=0 d i (i)i(i) for all k ≥ 1. hence, using lemma 4.1, we have note here that the assumption k ≤ (1 − d max )/v max assures i(k) ≥ 0 for all k with probability one; hence the inequality above is well-defined. to complete the proof, with the aid of sum of geometric series, we have which completes the proof. with the aids of theorem 4.2 and lemma 4.4, we see that if we take linear feedback policy with constant gain; i.e., u(k) = ki(k) and assuming that δ max < v min ( such that infected cases i(k) ≤ m d for all k with probability one. therefore, the almost sure epidemic control problem is solved. while the almost sure epidemic control policy, if exists, can be a good candidate to mitigate the pandemic, some potential issues remain in practice. that is, recalling remark 4.1.2 in section 4, it tells us that, in some cases, the almost sure epidemic control policy, while exist, may not be possible to implement. to address this, as mentioned in section 2, we now move to our second epidemic control problem which we call the average epidemic control problem. in this setting, the aim now becomes to control the "expected" infected cases and "expected" deceased cases in this section, we provide our results on control of epidemics in the sense of expected value. we begin with discussing the control of expected infected cases. the lemma below provides an useful analytical expression for the expected number of infected cases. proof. the proof is straightforward. we begin by noting that, for any linear policy u(k) = ki(k) with gain k ∈ [0, (1 − d max )/v max ], i(k) ≥ 0 for all k with probability one and i(k) = k−1 i=0 (1 + δ(i)−d i (i)−kv(i))i 0 . since δ(k), d i (k) and v(k) are i.i.d in k and are mutually independent, taking the expected value on the i(k) above yields and the proof is complete. with the aid of the lemma above, we are now ready to provide our second main result. proof. the idea of the proof is similar to the one used in theorem 4.2. however, for the sake of completeness, we provide our full proof here. we begin by assuming that δ < (1−dmax)v vmax and δ−di v /v max , it implies that i(k) > 0 for all k with probability one. now, according to lemma 5.1, we have where note that hence, log(1 + δ − d i − kv) < 0 for any k > (δ − d i )/v and the logarithmic function is welldefined for k within the assumed range. therefore, we have log e[f (0)] < 0, which implies that lim k→∞ e[i(k)]/i 0 → 0 and the proof for part (i) is complete. to prove part (ii), we fix k and simply note that, with the assumed assumptions on δ and k and the fact that d i (·) ≥ 0, it is readily verified that e[i(k)] ≤ i 0 , which completes the proof of part (ii). to prove part (iii), take k = (δ − d i )/v and substitute it back into the equation 12, we obtain e[i(k)] = i 0 for all k and the proof of part (iii) is complete. similar to remark 4.1.2, to accommodate the practical considerations, we fix l ≥ 1 and require due to the limited available medical resources, it is reasonable to choose a "sub-optimal" k in the sense of minimizing the potential use of medical resources. 
for example, we let 1 {δ≤lv} and 1 {δ>lv} be the indicator functions and consider provided that δ < v(1−dmax) vmax to be our choice of feedback gain for controlling of epidemics in the sense of expected value. note that k * > δ−di v for d i > 0; hence, the theorem above applies. this usage of k * will be also seen later in section 6. where the last inequality holds by using lemma 5.1. the next lemma states the expected susceptible cases. we begin by recalling that lemma 4.3 tells us that s(k) ≥ 0 for stage k = 1, 2, . . . , ⌊s 0 /(δ max i 0 )⌋. now using the facts that δ(k) are i.i.d. in k, and δ, d i and v are mutually independent, we have using the geometric series, we conclude which completes the proof. for k ≥ 1, any linear feedback control policy of the form u(k) = ki(k) with 0 ≤ k ≤ (1 − d max )/v max yields the expected deceased cases proof. the proof is similar to lemma 4.4. using lemma 4.1 and the fact that v(k), d i (k), and δ(k) are independent, we have it is easy to verify that d i (i) and (1 + δ(j) − d i (j) − kv(j)) are independent for all j = 0, . . . , i − 1; hence, we have using the geometric series, the equality above reduces to which is desired. for u(k) = ki(k), one can readily verify that with the aids of theorem 5. such that expected deceased cases e[d(k)] ≤ c d for all k. therefore, the average epidemic control problem is solved. in the next section to follow, we provide an illustrative example using historical covid-19 data to demonstrate our epidemic control performance. we now illustrate the application of our control methodology on the epidemiological model using historical data for year 2020 available in https://ourworldindata.org/coronavirus, which contain the number of daily confirmed cases, denoted by c(k), and daily reported deaths, denoted by d(k) for k = 0, 1, . . . , n − 1 for some fixed integer n . to study the epidemic control performance, there are various way to estimate the uncertain parameters; e.g., one can consult "standard" approach such as minimizing the least-square estimation error to obtain the "optimal" parameters δ, d i and v; e.g., see [5, 6, 9] . however, for the sake of simplicity, we now provide a simple "mean-replacing" approach to estimate the uncertain rate of infected cases δ(·) and the death rate d i via the available data of confirmed cases and the number of reported deaths. that is, we begin by recalling that the deceased cases satisfy d(k + 1) = d(k) + d i (k)i(k). for k = 0, 1, . . . , n − 1, given the confirmed (infected) cases c(k) and reported deaths d(k) and taking c(k) := i(k) and d(k) := d(k), we obtain then, the estimate of death rate d i (k), call it d i , is defined by 4 having found d i , we can now estimate the remaining two uncertain quantities δ(k) and v(k). to this end, assume that the adopted epidemic control policy is linear of the form u(k) = ki(k) with k := 1; i.e., we ideally assumed that government had put all available resources to control the pandemic. now, setting c(k) := i(k), d i := d i (k), and replacing v(k) by its mean v, we obtain with the aid of equation (16), the estimate of δ(k), call it δ, is given by having obtained the estimates δ and d i , we then ready to apply the epidemiological model and use it to compare it with the available data; see next subsections to follow. using the data from the start of the epidemy, over a horizon starting from march 1, 2020 to september 8, 2020, we obtain the estimates δ with δ max := δ ≈ 0.5135 and d i := d max ≈ 0.0449. 
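the mean-replacing estimation just described, together with the sub-optimal gain rule of section 4.1.2, can be written down in a few lines. the sketch below is a reading of eqs. (15)-(17), which do not render in this copy; the toy case and death series, the assumed mean effectiveness v̄ = 0.15 and the bounds v_min, v_max and l are placeholders, not the historical united states data used in the paper.

```python
import numpy as np

def mean_replacing_estimates(c, d, v_bar=0.15):
    """sketch of the estimation described above (eqs. (15)-(17)).  c and d
    are the reported case and death series; v_bar is the assumed mean
    effectiveness of control."""
    c = np.asarray(c, dtype=float)
    d = np.asarray(d, dtype=float)
    # eq. (15): daily death rate implied by d(k+1) = d(k) + d_i(k) i(k)
    d_i = (d[1:] - d[:-1]) / np.maximum(c[:-1], 1.0)
    # eqs. (16)-(17): with u(k) = i(k) (gain 1) and v replaced by its mean,
    # c(k+1) = (1 + delta(k) - mean(d_i) - v_bar) c(k)
    delta = c[1:] / np.maximum(c[:-1], 1.0) - 1.0 + d_i.mean() + v_bar
    return delta, d_i

# toy series standing in for the reported data (illustrative only)
cases = [100, 130, 170, 220, 280, 360, 450]
deaths = [0, 1, 2, 4, 6, 9, 12]
delta, d_i = mean_replacing_estimates(cases, deaths)
print(f"estimated death rate d_i ~ {d_i.mean():.4f}, delta ~ {delta.mean():.4f}")

# sub-optimal gain of section 4.1.2 under assumed bounds on v and budget l
v_min, v_max, budget_l = 0.1, 0.2, 2.0
delta_max, d_max = delta.max(), d_i.max()
gain = delta_max / v_min if delta_max <= budget_l * v_min else budget_l
print("almost-sure condition delta_max < v_min (1 - d_max) / v_max:",
      delta_max < v_min * (1.0 - d_max) / v_max, " chosen gain:", gain)
```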
in addition, we take l = 2 and assume that the effectiveness rate of control v(k) follows a uniform distribution with v min = 0.1 and v max = 0.2; i.e., 10% to 20% effectiveness rate on control. observe that hence, theorem 4.2 does not apply. however, we can still choose a suboptimal feedback gain discussed in section 4.1.2; i.e., this means that, if the infected cases can not be well-controlled in almost sure sense, governments should put whatever they can to suppress it; here, we see a 100 · l% feedback gain is applied. the epidemic control performance is shown in figure 1 where the black solid lines depict the confirmed cases c(k) and reported deaths d(k) from historical covid-19 data in the united states. the other thinner lines with various colors depict the epidemic control performance, in terms of c(k) and d(k), under the linear policy u(k) = kc(k) with k := l. interestingly, the figure also tells us that if l = 2 is possible, it may yield, in average, a lower confirmed cases. with the aid of equations 15 and 17, we obtain the estimates d i ≈ 0.002 and δ := δ ≈ 0.215. similar to the previous example, we again assume that v(k) follows a uniform distribution with v min = 0.1 and v max = 0.2; hence the average v = 0.15. it is readily verified that our estimates satisfy hence, theorem 5.2 applies if we take the control policy above tells us that the government may need to bring in extra medical resources to achieve k > 1. the corresponding epidemic control performance is shown in figure 2 where the confirmed cases is on the top panel and the deaths is on the bottom panel. in the figure, we see a downward trend occurs on the confirmed cases and the saturated deaths as time increases. the figure 3 shows, with y-axis in log-scale, the epidemic control performance comparison where the black solid lines are the reported confirmed cases (top) and reported deaths (bottom). this preliminary work has been done in the urgency of the ongoing covid-19 pandemic, with the mind of providing a simple yet explainable epidemiological model with a rigorous study on the effectiveness of a linear epidemic control policy class. consistent with the existing literature on epidemic modeling and control, this paper considers a modified stochastic sird-based model. we analyzed the model and considered two epidemy control problems: one is the almost sure epidemic control problem and the other is the average epidemic control problem. then, for both of two problems, with linear control policy, we show sufficient conditions on feedback gain so that the epidemy is deemed to be "well-controlled" in the sense that infected cases goes down to zero asymptotically, and both infected cases and deceased cases are upper bounded uniformly. subsequently, we provide a simple data-driven parameter estimations and show some promising numerical results using historical covid-19 data in the united states. based on our work to date, two important directions immediately present themselves for future work which described in the next subsections to follow. it is possible to extend our analysis to involve multi-population epidemics. to illustrate this, below we consider only the infected cases dynamics. fix m populations, then for each population i = 1, . . . 
, m, we write with limited medical resource for some u max > 0 and the overall infected cases are it is also possible to consider the case where the control policy is with delay effect; i.e., u(k −d) with delay timed; see also [13] for a discussion on handling the time delay considerations in a class of positive finance systems. another possible research direction would be to take the economic budget into play. as seen in remark 4.1.2, the idea of minimizing the expenditure induced by implementing control policy may lead to a version of "optimal" choice for feedback gain k. in particular, one can even consider the budget dynamics and carry out the optimization. specifically, let b(k) be the available medical budget at the kth day which satisfies b(k + 1) = b(k) + p(k)u(k) where p(k) is the price to pay per control policy at each stage, which can be modeled as a random variables with finite support. then, it is readily verified that for k ≥ 1. from here, there are many possible directions to pursue. for example, one can consider a optimization problem which minimizes the expected budget cost and infected cases; i.e., one might seek to find a sequence of k which solve inf k e[βb(k) + γi(k)] for some constants β and γ. data-driven methods to monitor, model, forecast and control covid-19 pandemic: leveraging data science, epidemiology and control theory some discrete-time si, sir, and sis epidemic models, mathematical biosciences infectious diseases of humans: dynamics and control the mathematical theory of infectious diseases and its applications a data-driven control-theoretic paradigm for pandemic mitigation with application to covid-19 a modified sir model for the covid-19 contagion in italy linear system theory and design a time-dependent sir model for covid-19 with undetectable infected persons fitting dynamic models to epidemic outbreaks with quantified uncertainty: a primer for parameter uncertainty, identifiability, and forecasts probability: theory and examples probability and random processes for electrical and computer engineers the mathematics of infectious diseases, siam review on positive solutions of a delay equation arising when trading in financial markets containing papers of a mathematical and physical character robust and optimal predictive control of the covid-19 outbreak substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (sars-cov-2) how control theory can help us control covid-19 world helth organization, weekly epidemiological and operational updates key: cord-002426-5e1xn7kj authors: falcón-lezama, jorge abelardo; santos-luna, rené; román-pérez, susana; martínez-vega, ruth aralí; herrera-valdez, marco arieli; kuri-morales, ángel fernando; adams, ben; kuri-morales, pablo antonio; lópez-cervantes, malaquías; ramos-castañeda, josé title: analysis of spatial mobility in subjects from a dengue endemic urban locality in morelos state, mexico date: 2017-02-22 journal: plos one doi: 10.1371/journal.pone.0172313 sha: doc_id: 2426 cord_uid: 5e1xn7kj introduction: mathematical models and field data suggest that human mobility is an important driver for dengue virus transmission. nonetheless little is known on this matter due the lack of instruments for precise mobility quantification and study design difficulties. 
materials and methods: we carried out a cohort-nested, case-control study with 126 individuals (42 cases, 42 intradomestic controls and 42 population controls) with the goal of describing human mobility patterns of recently dengue virus-infected subjects, and comparing them with those of non-infected subjects living in an urban endemic locality. mobility was quantified using a gps-data logger registering waypoints at 60-second intervals for a minimum of 15 natural days. results: although absolute displacement was highly biased towards the intradomestic and peridomestic areas, occasional displacements exceeding a 100-km radius from the center of the studied locality were recorded for all three study groups and individual displacements were recorded traveling across six states from central mexico. additionally, cases had a larger number of visits out of the municipality´s administrative limits when compared to intradomestic controls (cases: 10.4 versus intradomestic controls: 2.9, p = 0.0282). we were able to identify extradomestic places within and out of the locality that were independently visited by apparently non-related infected subjects, consistent with houses, working and leisure places. conclusions: results of this study show that human mobility in a small urban setting exceeded that considered by local health authority’s administrative limits, and was different between recently infected and non-infected subjects living in the same household. these observations provide important insights about the role that human mobility may have in dengue virus transmission and persistence across endemic geographic areas that need to be taken into account when planning preventive and control measures. finally, these results are a valuable reference when setting the parameters for future mathematical modeling studies. a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 dengue fever (df) is the most important arthropod-borne viral disease in the world. it is caused by infection with any of the four dengue virus (denv) serotypes. nearly half of the human population inhabits areas with denv transmission. in mexico dengue incidence and severe cases have been increasing in the last decade. to date, there is no specific treatment or vaccine for df and vector control stands as the cornerstone for df prevention [1, 2] . df is an important public health problem, especially in urban areas [3] , where it usually presents in large outbreak. the costs of treatment and management during a df outbreak are a serious burden for health systems, especially when there is a risk for saturation of health facilities [4] . the actors that are necessary for denv transmission are fairly well understood. nonetheless, some of the dynamical features about these actors still need to be elucidated in order to understand how they impact on transmission. human mobility has been studied in relation to other infectious diseases, where its role as an important driver for disease transmission has been proven [5, 6] . mathematical models have suggested that local scale human mobility may play a role in denv transmission, outbreak persistence, and control efficiency [7, 8] . however, little information is available on detailed human mobility patterns in geographic areas where df is endemic or on confirmed cases during an outbreak. recently, gps-based technologies have been tested and shown to be a reliable and acceptable tool for quantifying human mobility [9, 10] . 
human mobility has been described in relatively recent reports by using indirect measures [11, 12] . for these reasons, we studied the micro and macromobility of dengue virus-infected subjects in an endemic locality. here we present the results of a cohort-nested case-control study on a dengue endemic urban locality in mexico. the protocol of the project was reviewed and approved by the comité de etica y de prevención de conflictos de interes (institutional review board) ci 1046 no 1160 and departamento de investigación (external review) servicios de salud de morelos, mexico dei/cei/0281/2012. cohort-nested, case-control study. sample: 126 individuals (42 cases, 42 intradomestic controls and 42 population controls) with age older than 12, and residents in axochiapan, morelos state, méxico, were selected from the cohort "peridomestic infection as determinant for dengue virus transmission" [13] . they were assigned into three study groups: a. cases were individuals with laboratory evidence of recent, symptomatic or asymptomatic, denv infection, and identified as the only persons infected within their households during the study. b. intradomestic controls, were individuals with a negative serological result for recent denv infection, living in the same household with a case; c. population controls were individuals with a negative serological result for recent denv infection, randomly selected from the same locality. all controls tested negative for denv during the same period and using the same validated techniques than cases (igm or igg capture elisa) as reported in the cohort study [13] . partici-pant´s selection was performed as follows: cases were approached first, if accepted participation an intradomestic control was randomly assigned from the pool of subjects living in the same house that had both baseline and final negative elisa results. for each pair of in-house participants, a randomly selected population control was then assigned (fig 2) . given the limited number of available gps loggers and the three-month time frame for the follow-up, the recruitment was limited to a maximum of 50 cases with their respective intradomestic and population controls. between may and september, 2012, with prior signed informed consent, all participants were provided with a portable gps (gps data-logger, transystems mod. 747 a+), programmed for recording its position at 60-second intervals (variables date, time, latitude, longitude, altitude and speed), during 24 hours a day, for a minimum period of 15 days. participants were instructed to carry their gps at all times whenever they left their homes and to recharge the equipment's battery daily during their in-home resting times. this follow up was performed in cases identified during the immediate previous season, on average one year after diagnosis confirmation, in order to match the activities performed during the high transmission season, and under the assumption that their mobility patterns remained unchanged after disease, and constant through time. a web-based interface was developed to import the text files from gps equipment to a main database. variables were homogenized, waypoints in the initial and final days of each individual gps track (comprising incomplete days) were eliminated in order to standardize the period of time to be analyzed starting at 00:00:00 hours on the second day of follow up and ending at 23:59:59 on the last to final day of follow up. 
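as a concrete illustration of the pre-processing step just described (importing the logger text files and discarding the incomplete first and last days), the following pandas sketch performs the same trimming. the csv column names and file name are assumptions, since the study used a web-based interface feeding a sql database rather than this exact workflow.

```python
import pandas as pd

def load_and_trim(path):
    """load one participant's gps log and drop the incomplete first and last days."""
    # assumed export columns; the loggers recorded date, time, latitude, longitude,
    # altitude and speed at 60-second intervals
    df = pd.read_csv(path)
    df["timestamp"] = pd.to_datetime(df["date"] + " " + df["time"])
    df = df.sort_values("timestamp").reset_index(drop=True)

    # keep only full calendar days: from 00:00:00 of the second day of follow-up
    # to 23:59:59 of the next-to-last day, as described in the text
    days = df["timestamp"].dt.normalize()
    full_days = sorted(days.unique())[1:-1]
    return df[days.isin(full_days)].copy()

# example usage (hypothetical file name)
# waypoints = load_and_trim("participant_001.csv")
# print(waypoints["timestamp"].min(), waypoints["timestamp"].max(), len(waypoints))
```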
data were converted into a feature dataset and projected from the geographic coordinate system to a lambert coordinate system (from hexadecimal to metric units) with the purpose of performing arithmetic operations for distance calculations. origin or routinely residence sites were identified by means of an iterative algorithm (mean center) employing waypoints from 00:00:00 to 04:59:59 hours, monday to friday. routine residence coordinates were added to the database. distance to home variable (dhome) was calculated for each extradomestic waypoint using sql applying the following formula: where; x 1 = xhome (x coordinate from home), x 2 = xccl (x coordinate from each waypoint), y 1 = yhome (y coordinate from home) y, y 2 = yccl (y coordinate from each waypoint). waypoints within the peridomestic area (dhome < 50m) were identified. distance, speed, altitude and time differentials were created: where: xccl 1 = (x coordinate from previous waypoint), xccl 2 = (x coordinate from current waypoint), yccl 1 = (y coordinate from previous waypoint) and, yccl 2 = (y coordinate from current point) displacement and spatial permanency variables for each subject with complete data were generated. visit sites were defined as those areas out of the individual's home with a 50 m radius in which each participant remained static for a period enough to allow a potential effective interaction with local vectors. these sites were identified by generating an algorithm through which visit clusters were formed using the following criteria: stops lasting 5 minutes or longer, a distance from home of 50 m or farther, distance of the current waypoint from previous waypoint < 50 m, and speed for the current waypoint of 2 km / h or less. for each cluster (visit site) a centroid was calculated. common visit sites for cases were identified as hexagonal cells with 50 m radius [14] which were visited by at least two different cases at a given time, and where the proportion of different visiting cases was at least two thirds of the total visiting population for that cell. common visit sites for controls were identified as hexagonal cells with 50 m radius which were visited by at least two members from each control population and not visited by any of the cases. the geographic universe in the study was divided in five areas (1.-inside the house, 2.-out of the house but in the locality, 3.-out of the locality but in the municipality, 4.-out of the municipality but in the state, 5.-out of the state), limited by four buffers. a circular buffer with 50 m radius around each participant's home limited the first area and three additional polygonal buffers were drawn according to the administrative limits for the locality, municipality and state. arcgis arcinfo 10 was used for processing and analyzing spatial data, sql server was used to create a geodatabase, arc sde 10 was used as interpreter between entre sql and arcgis. statistical analyses were performed using stata 12. fifty randomly selected cases were asked to participate in the study from which 42 (84%) accepted participation. all approached controls agreed to participate. in total 126 individuals (42 cases, 42 intradomestic controls and 42 population controls) were recruited. our drop-out rate was lower than 1% (1/126) since one participant (intradomestic control) did not finish the follow-up due to the loss of the assigned gps logger. table 1 describes the main characteristics of the subjects in each group. 
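the data-processing steps described above (home location from night-time weekday waypoints, the euclidean distance to home in projected coordinates, and the visit-site clustering criteria) can be sketched as follows. this is a simplified reading of the procedure: the exact iteration rule of the "mean center" algorithm is not given in the excerpt, so a single mean center is used, the speed column is assumed to be in km/h, and column names are placeholders for waypoints already projected to metric x, y coordinates.

```python
import numpy as np
import pandas as pd

def home_location(df):
    """mean center of weekday waypoints recorded between 00:00:00 and 04:59:59."""
    night = df[(df["timestamp"].dt.hour < 5) & (df["timestamp"].dt.weekday < 5)]
    return night["x"].mean(), night["y"].mean()

def add_distance_to_home(df, home):
    """dhome = sqrt((x - xhome)^2 + (y - yhome)^2), in metres (projected coordinates)."""
    xh, yh = home
    df = df.copy()
    df["dhome"] = np.hypot(df["x"] - xh, df["y"] - yh)
    df["peridomestic"] = df["dhome"] < 50            # within 50 m of the home centroid
    return df

def visit_sites(df):
    """group consecutive waypoints into visit clusters using the stated criteria:
    >= 50 m from home, < 50 m step from the previous waypoint, speed <= 2 km/h,
    and a total stop duration of at least 5 minutes; one centroid per cluster."""
    step = np.hypot(df["x"].diff(), df["y"].diff())
    candidate = (df["dhome"] >= 50) & (step < 50) & (df["speed"] <= 2.0)
    cluster_id = (candidate != candidate.shift()).cumsum()   # runs of consecutive candidates
    visits = []
    for _, g in df[candidate].groupby(cluster_id[candidate]):
        minutes = (g["timestamp"].iloc[-1] - g["timestamp"].iloc[0]).total_seconds() / 60
        if minutes >= 5:
            visits.append({"x": g["x"].mean(), "y": g["y"].mean(),
                           "start": g["timestamp"].iloc[0], "minutes": minutes})
    return pd.DataFrame(visits)

# typical pipeline for one participant (columns timestamp, x, y, speed assumed):
# home = home_location(waypoints)
# waypoints = add_distance_to_home(waypoints, home)
# visits = visit_sites(waypoints)
```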
no statistically significant differences were observed in most of variables except in age, since cases were significantly younger than the intradomestic or population controls (cases mean: 29.2, sd: 17.7; intradomestic controls mean: 35.9 sd: 13.5; population controls mean: 37.4 sd: 16.6. p = 0.0315). of 126 participants, 125 (99.2%) participants completed their follow up since one gps used by an intradomestic control went missing. the final database contains 3,064,887 waypoints from these 125 participants, and all participants were followed by a mean of 15.9 continuous days. as for the number of days of follow-up for each group no differences were recorded. as expected, most of the waypoints in the population fell within the intradomestic area (< 50 m radius from home centroid). as distance from home (absolute displacement) increased, we observed a marked decrease in the proportion of waypoints. all three groups presented a small peak when the distance reached the 100 m radius. from this point the proportion of waypoints quickly decayed (fig 3) . no differences were noticed for absolute displacement among the groups. the hourly distribution of recorded waypoints out of the participant's homes is shown in fig 4. as expected, participants usually left their homes early in the morning and returned by the end of the day. although we recorded waypoints out of the participants' homes in every hour of the day, the period comprised between 08:00 pm and 12:00 pm registered the peak in the number of waypoints recorded out of the homes, and this number decreased steadily as the day progresses. the pattern during weekdays ( fig 4a) suggests that cases leave their homes and return to them slightly earlier than control groups. as for the weekends (fig 4b) , both control groups show a similar pattern to that observed for weekdays, nonetheless, cases seem to remain in their homes more often and return earlier. table 2 shows values for different mobility variables. no significant differences were recorded among groups for the following variables: mean distance from home at all times, maximum recorded distance at any given time, and mean time spent in each geographic area at any speed or at static speed. nonetheless, when comparing the number of visits per geographic area, the cases had fewer recorded visits in the area out of the locality but in the municipality (3.1 vs 18.7, p = 0.0428), and more visits in the area out of the municipality (10.4 vs 2.9, p = 0.0282), both compared to intradomestic controls. these differences were statistically significant. consistent with this behavior, although non-statistically significant, cases visited more states, municipalities, and regions with high dengue incidence through their follow up, in comparison to both control groups. we next examined the proportion of waypoints recorded by the comparison groups in each area stratified by age (fig 5) . there is a notorious difference in the proportion of waypoints among the cases, observing an increase of nearly 8 percentile points in the intradomestic waypoints, recording the highest frequency in cases under age 25 (fig 5a. however the difference not statistically significant (cases < 25: median 90.2%, interquartile-range 75.5-95.5; cases ! 25: 80% iqr 66-86.4; p = 0.079). we also observed a difference in the area out of the municipality but in the state, where the group of cases aged 25 and older spent the highest proportion of time (age < 25: 0% iqr 0-1%; age ! 25: 1.1% iqr 0-4.6; p = 0.0233). 
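the group contrasts just reported (medians, interquartile ranges and p-values for per-subject mobility summaries) do not name the statistical test in this excerpt; the short sketch below shows one plausible way to reproduce such a contrast with a nonparametric mann-whitney u test on per-participant proportions. the dataframe and its values are synthetic placeholders, not study data.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# one row per participant; values are illustrative placeholders only
summary = pd.DataFrame({
    "group": ["case"] * 5 + ["intradomestic_control"] * 5,
    "prop_intradomestic": [0.90, 0.92, 0.88, 0.95, 0.75,
                           0.80, 0.78, 0.86, 0.82, 0.79],
})

cases = summary.loc[summary["group"] == "case", "prop_intradomestic"]
controls = summary.loc[summary["group"] == "intradomestic_control", "prop_intradomestic"]

stat, p = mannwhitneyu(cases, controls, alternative="two-sided")
print(f"median cases={cases.median():.2f}  median controls={controls.median():.2f}  p={p:.4f}")
```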
there was no significant difference between cases and population controls, regardless of age. when comparing cases versus intradomestic controls, we observed a statistically significant difference in time spent in area out of the municipality but in the state (cases: 1.1% iqr 0-4.6; ic: 0% iqr: 0-0.6; p = 0.009). we found differences in mobility patterns when analyzing data by gender ( fig 5b) . women had a higher proportion of waypoints within the intradomestic area than men. these differences were statistically significant for intradomestic area (male: 77.4% iqr: 65.8-86.9; female: 89.4% iqr 80.5-93.5; p = 0.002), out of their homes but in the locality (male: 15.5% iqr 5.3-25; female: 7.2% iqr 4.6-16.5; p = 0.0063) and the area out of the locality but in the municipality (male: 1.1% iqr: 0.1-4.9; female: 0.1% iqr: 0-0.4; p<0.0001). nonetheless, linear mean (p = 0.061) and maximum (p = 0.1468) distances were not. a) cases vs. intradomestic controls. a conditional logistic regression analysis was performed, including variables identified in the bivariate analysis as having a p value < 0.20, and by data mining techniques using all variables as reported previously [15] . for bivariate analysis variables were age (continuous and dichotomic [under 25 or 25 and older]) gender, occupation (intra or extradomestic), education (dichotomic), and the proportion of time spent in each geographic area. for the data mining we considered the whole data base. the final model included age (or: 0.015 ic95% 0.0005-0.488; p = 0.018) and the area out of the municipality but in the state (or 2.61 ic 95% 1. 16-5.88 ; p = 0.021). we observed a protective effect in the 25 and older group, and a risk effect when the proportion of time spent in the area out of the municipality but in the state is increased. b) cases vs. population controls. a multiple logistic regression analysis was performed including variables identified with p value < 0.20 in the bivariate analysis and data mining techniques using all variables as reported previously [15] . for bivariate analysis, only the variables age (continuous and dichotomic [under 25 or 25 and older]), gender, occupation (intra and extradomestic), education (dichotomic), proportion of time spent in each area and linear distance were taken into consideration. for the data mining we considered the whole database. the final model included: occupation (or 3.02 ic 95% 1.02-8.88; p = 0.045), proportion of time in the area out of the municipality but in the state (1.42 ic 95% 1.02-1.98; p = 0.035), proportion of time in the area out of the locality but in the municipality (or 0.74 ic 95% 0.57-0.96; p = 0.023), and age (or 0.26 ic 95% 0.09-0.78, p = 0.017). next we analyzed the geographic distribution of the recorded waypoints for each group, both locally and regionally (fig 6) . all three groups recorded waypoints exceeding a 100 km radius from their homes (fig 6a, 6b and 6c ). as expected, most of recorded waypoints were located within the locality. nonetheless, all three groups recorded waypoints exceeding the locality, municipality and state limits. these trajectories were headed mainly to the east, north and west of the locality, and were consistent with the location of the main cities in the area, including cuernavaca (population 338,650) and cuautla (population 154,358), the capital city and the second most important city in the state of morelos, respectively. 
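as a sketch of the two regression analyses described above, the snippet below fits a conditional logistic model for the household-matched comparison (cases versus intradomestic controls, one stratum per matched pair) and an ordinary logistic model for cases versus population controls. the data are synthetic placeholders, the covariate names only echo the variables mentioned in the text, and conditionallogit assumes a reasonably recent statsmodels release; the original analysis was run in stata 12.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)
n_pairs = 40

# synthetic matched data: one case and one intradomestic control per household pair
matched = pd.DataFrame({
    "pair_id": np.repeat(np.arange(n_pairs), 2),
    "case": np.tile([1, 0], n_pairs),
    "age_ge_25": rng.integers(0, 2, 2 * n_pairs),
    "prop_out_mun_in_state": rng.uniform(0, 0.1, 2 * n_pairs),
})
X = matched[["age_ge_25", "prop_out_mun_in_state"]]     # no intercept: absorbed by the strata
res = ConditionalLogit(matched["case"], X, groups=matched["pair_id"]).fit()
print("matched analysis, odds ratios:\n", np.exp(res.params))

# synthetic unmatched data: cases vs population controls, ordinary logistic regression
pop = pd.DataFrame({
    "case": rng.integers(0, 2, 120),
    "age_ge_25": rng.integers(0, 2, 120),
    "extradomestic_job": rng.integers(0, 2, 120),
    "prop_out_mun_in_state": rng.uniform(0, 0.1, 120),
})
X2 = sm.add_constant(pop[["age_ge_25", "extradomestic_job", "prop_out_mun_in_state"]])
res2 = sm.Logit(pop["case"], X2).fit(disp=0)
print("unmatched analysis, odds ratios:\n", np.exp(res2.params))
```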
the geographic distribution of the visits performed by cases and df cumulative incidence for the central mexico region, during year 2012, is shown in fig 7. as seen, this group performed visits to locations with and without df transmission. the number of states, municipalities and regions with high dengue incidence is shown in table 2 . within the locality of axochiapan the most visited areas were identified by dividing the locality in 50 m-radius hexagonal cells (fig 8a) . the most visited cells were those located in the locality's central area, which correspond to the location of the main market, road junctions and main administrative and / or service offices. the location of the cells considered as common visit sites for cases are shown in fig 8b. unlike in fig 8a, the geographical distribution of these 15 cells tends to be peripheral with respect to the locality. using google earth™ we identified the geographic features of each of the 15 cells that was classified as a common visit site for cases. fourteen out of fifteen cells were geographically located within the locality of axochiapan, morelos, and the last one was in the central area of a neighboring small locality (town of tzicatlán) in the state of puebla. as for their typology, xix out of 15 cells clearly corresponded to residential areas (including that in the neighboring state), one cell was a residential area adjacent to a local large business, four cells included small processing plants, a warehouse and a local business, three peripheral cells were crop fields and one cell was clearly a soccer field. previous works have used gps tools for measuring exposure to infectious diseases [16] [17] [18] . in df, recent works in the endemic area of iquitos, peru, have elegantly described human population mobility [19, 9] . however, few data are available for infected cases so far. our work builds upon our knowledge of the role played by mobility in denv transmission documenting spatial mobility of subjects from an endemic region that had, or had not been recently infected by denv. our data show that the people from axochiapan stay within their houses or surrounding areas most of the time. this is consistent with previous observations in iquitos, where population rarely moves more than 1 km away from their homes [10] , however, some individuals recorded movements to very distant locations through the relatively short follow-up period. these movements were present in all three study groups and exceeded a 100-km radius from the center of the study, covering the neighboring states of morelos, state of mexico, puebla, mexico city, tlaxcala and hidalgo. all of these states are located in the mexican central plateau, which is also the best connected region in the country and therefore it is not difficult to reach those destinations by commute travel [20] . surprisingly, spatial mobility in humans was not geographically symmetrical in our study, since no movements were recorded to state of guerrero, which is a coastal, highly endemic area for dengue and also a popular destination for leisure activities. 
as for the reason why the mobility of the individuals is biased towards central plains in mexico and practically absent towards the southern regions, it was a surprising finding also for us, but we think that it has to do with two factors: first the economic activities in axochiapan are mainly related to agriculture, trade and services which are strongly influenced by the needs of mexico city and its metropolitan area, comprised also by the states of morelos, puebla, méxico and hidalgo. the main cities of these states were those that were visited by the cases and in a lesser extent by the controls. secondly, we did not perform any follow-up during summer and christmas holidays, which in mexico are specific periods for leisure. these activities are usually performed in places that might be different that those observed in our study, including the beaches in the southern coast. it is possible that had we performed our follow-up in vacation periods the observed mobility might have been different, and also leave us with a very interesting research question for the future. the large size of the area covered by these few individuals from a small locality (axochiapan has roughly 17,000 inhabitants) is of capital importance, given the fact that in mexico, and probably in many other places, epidemiological surveillance, prevention and control activities for df are mainly planned, supported and executed by local health authorities, who rely on the information generated by a number of systems, most of them automated [14] , but that are usually restricted to their local administrative limits, namely municipality, sanitary jurisdiction or state at best. thus when df outbreaks overcome those limits and a wider coordination is needed, it is probably that the outbreaks are already established and the window of time for effectively applying control measures has been lost. our data show that the cases group had the largest difference on the time spent in the home area with strong age dependence. older cases spent less time in their homes compared to younger cases. as far as the number of visits is concerned, subjects in the cases group, especially those aged over 25, performed many and more distant visits, than subjects in the intradomestic control group. this difference with the population control group was less marked. this scenario suggests that the population aged over 25 might play an important role in denv persistence and dispersion perhaps working as geographic spreaders. our group previously determined dengue incidence for this age group and proposed a dynamic model which seems to be corroborated by the results presented here [13] . infected individuals, both symptomatic but also asymptomatic [21] , may facilitate the infection of extradomestic mosquito populations in a local scale as models have suggested [8] , or at a regional scale introducing or exporting the virus. given the fact that we performed an uninterrupted 24-hour follow up, we were capable to register the time when individuals left their homes for whatever activity they performed. to our surprise, we recorded waypoints out of the participant's homes virtually at any hour of the day. although the majority of records show that people in axochiapan have a day-light pattern of activities, we recorded waypoints from cases, between 00:00 and 05:00 am, which were consistent with participants' declared jobs, which were related to nocturnal activities such as bakers and workers from the local stone processing plant. 
the dispersion patterns described above, both in space and time, might be of importance in the results obtained in the control of denv transmission in this and similar small localities. the usual schedule considered by local health authorities for applying preventive measures, which favors early hours for insecticide spraying and the visits by entomological control brigades, in a geographically focalized strategy might hinder the efficacy of the actions by the mere fact that people moves from their homes and remain away during the time these actions are normally applied. in our data, the hourly pattern for activity suggests that cases might leave earlier their homes during weekdays, and thus their homes might have a higher probability to remain closed by the time health authorities apply preventive or control measures. it is important to notice the high mean age of the cases in axochiapan, in both the participants in the study and those recorded historically in the state of morelos, in comparison to other endemic areas from mexico and the americas. this is however, consistent with a previous work in the area [13] , and is probably due to the fact that most (35 out of 42 members of the cases group) of the cases that we studied were asymptomatic, also, although not statistically significant, the mean age of the asymptomatic individuals was higher than that from symptomatic individuals (mean age asymptomatic: 31.54 vs mean age symptomatic: 17.43, t test p = 0.0535). thus, suggesting a possible stronger role of asymptomatic population with age above 25 as spreaders for denv transmission. previous studies have described that visiting other cases' households is a risk factor for denv dispersion [22] ; working sites have also been suggested as possible transmission sources outside of cases' households [23] . models have shown that sites outside homes can play a role in denv transmission and infection [7] , and a recently published work showed positive correlations in thailand between the aedes spp. house index and specific landscape features [24] . our findings are consistent with these data since cases coincided in houses different to their own in at least five different geographical locations. additionally we found cases that coincided in four potential working places, and a soccer field. as far as we know, this is the first time that leisure sites have been documented as possible areas for denv transmission. the lack of study of such sites has previously been pointed out as a weakness in the study of human mobility [25] . as for the common visit sites for controls, we identified 38 cells that were visited only by individuals from both control groups but not by cases. these cells included only one potential working site; whereas the cells commonly visited by cases included several likely workplaces. the identification and study of the extradomestic sites where people coincide is relevant: a recent simulation model has concluded that the selection of areas for df control out of the cases' homes is important not only in terms of the time the subjects spend in them, but also because of the local vector:host ratio, and the other habitual destinations of people that visit the same area [8] . it is possible that much of the movement in a given society is driven by specific population needs and the possibility to fulfill them within or outside from their own locality. axochiapan is a fairly small and well connected locality to other small cities, all of them endemic for df. 
it is not unrealistic to think that some of the population needs can be readily fulfilled within the same locality whereas other cannot, compelling the population to move away in a permanent or transitory fashion. if a specific population such as that with age older than 25 becomes infected and effectively play a role as spreaders, then perhaps small and peripheral localities to larger cities might have a key importance in sustaining denv transmission across large geographic areas. the assessment of such situations requires a critical review of existing data and the generation of specific studies that may help us to recast current models and more importantly, to completely understand urban denv transmission [26] . we have identified some weaknesses in our study. the most relevant is our small sample size which might have hindered our capability to identify clear patterns in the mobility from this mexican community. there is a possibility that any individual belonging to either control group might have get infected during the follow up; thus making her/him eligible to become a case, therefore disqualifying him/her for being a suitable control and consequently introducing an information bias; nonetheless, we believe that, although a possibility, this was negligible due to two reasons: first, the follow-up of 15 days was very short for this event to occur, and secondly because while recovering each gps, we asked all individuals whether they had experienced fever or any other symptom suggesting dengue infection during the follow-up. none of the participants reported any change in their health status. although we understand that a more robust argument to ensure the infected/uninfected status might be performing an elisa to each control after finishing their follow-up in order to be certain about their exposure, financial constrains made impossible this procedure. possible future improvements in our study are: increase the limited sample size, the inclusion of adequate representation for the population under the age of 12, which probably is a relevant group for transmission during the initial and focalized phases of a df outbreak. although no schools could not be identified in our study as a common site visited by cases, this observation needs to be taken cautiously since our study did not consider the follow-up of children usually studying elementary education. thus, we cannot rule out any role of these sites in dengue transmission at younger ages. additionally, extending the duration of the follow up might improve the chances of successful identification of patterns whose frequencies are longer than a week, such as wage collection, bill payments, and communitarian meetings, among others. this last topic is essential; nonetheless it is limited by technical issues that might be addressed as technology for massive and continuous long-lasting follow up becomes available. finally, the main reasons for which the participants move were not deeply explored in our study, thus we cannot be certain whether the recorded movements indeed depend on non-satisfied needs or on leisure activities. we can only assume that at least those movements performed during the mornings and afternoons between monday and friday correspond to real needs such as employment, education, and supply acquisition, and those performed during weekends are related to leisure. some causes for loss of gps information in field studies have been recently described [27] . 
although some of these causes might be present in our study, we believe they did not represent significant sources of bias or information loss, since we took some specific measures. for example, people were prevented of accidentally turning the gps off by strapping a tape in the controls. in order to diminish the probability that the participants could forget their gps units at home we performed a weekly phone call reminding them the importance of the usage attachment according the protocol during each individual follow-up. barriers to signal were not important in the studied area since it is located in a plateau with few elevations, and buildings taller than 3-stories are practically absent. finally, the gps equipment used in the study had battery autonomy of up to 32 straight hours and enough memory for recording up to five times the mean number of waypoints programmed to collect in each subject. based on our own data and that from recently published works we conclude that gpsbased technology is a solid tool for the study of detailed human mobility in denv transmission or other infectious diseases, which can and must be adopted in public health and epidemiology as a basic instrument. the important geographic dispersion in our results demonstrates the necessity for studying the potential role that human mobility has in denv transmission and outbreak duration and also a strong argument to study and clarify the role that asymptomatic cases might have in dengue virus dispersion. furthermore our data strongly suggest that the size of the areas considered for prevention and control of df outbreaks needs to be revised and that it is necessary to integrate this knowledge into the planning of preventive and control measures, which usually are prone to using basic shapes such as circles or squares as geographic references in order to define limits, ranges, trajectories and points of origin. it is clear that human populations move normally across geographical areas and not only during holidays or vacations. according to our data, the magnitude of these displacements is larger than that considered as an administrative responsibility for local health services providers. this is relevant for denv transmission if a large fraction of that mobile commuting population is also asymptomatic but viremic, facilitating with their movements the exposure of local uninfected mosquito populations with the virus, which might result in an increased geographical dispersion and persistence of the outbreaks due a continuous process of spreading and reintroduction of the virus to susceptible populations. finally, we believe that the data here reported should be valuable for parameterization of mathematical models exploring specific issues in dengue epidemiology such as geographical dispersion of human activities, contact rate among humans in intermediate spots, optimal range for vector control coverage, optimal target places for health promotion activities, impact of coordinated regional collaboration, and transmission dynamics among satellite and large cities. all essential topics that are still to be understood and weighed as drivers in the transmission of this and other mosquito-transmitted diseases. dengue and dengue heamorrhagic fever world health organization. 
dengue guidelines for diagnosis treatment, prevention and control urbanisation and infectious diseases in a globalised world the global economic burden of dengue: a systematic analysis travel implications of emerging coronaviruses: sars and mers-cov effect of travel on influenza epidemiology man bites mosquito: understanding the contribution of human movement to vector-borne disease dynamics day-to-day population movement and the management of dengue epidemics assessing and maximizing the acceptability of global positioning system device use for studying the role of human movement in dengue virus transmission in iquitos, peru using gps technology to quantify human mobility, dynamic contacts and infectious disease dynamics in a resource-poor urban environment understanding individual human mobility patterns the scaling laws of human travel peridomestic infection as a determining factor of dengue transmission nation-wide, web-based, geographic information system for the integrated surveillance and control of dengue fever in mexico a search space reduction methodology for data mining in large databases the use of a vest equipped with a global positioning system to assess water-contact patterns associated with schistosomiasis distribution and interspecies contact of feral swine and cattle on rangeland in south texas: implications for disease transmission health &demographic surveillance system profile: the kombewa health and demographic surveillance system (kombewa hdss) multiple outbreaks for the same pandemic: local transportation and social distancing explain the different "waves" of a-h1n1pdm cases observed in méxico during asymptomatic humans transmit dengue virus to mosquitoes house-to-house human movement drives denguevirus transmission epidemiology of dengue and dengue haemorrhagic fever in a cohort of adults living in bandung analyzing the spatio-temporal relationship between dengue vector larval density and land-use using factor analysis and spatial ring mapping population movement and vector-borne disease transmission: differentiating spatial-temporal diffusion patterns of commuting and noncommuting dengue cases recasting the theory of mosquito-borne pathogen transmission dynamics and control strengths and weaknesses of global positioning system (gps) data-loggers and semi-structured interviews for capturing fine-scale human mobility: findings from iquitos authors wish to thank to servicios de salud de morelos for its support to this project. the authors have declare that no competing interests exist. key: cord-128436-xndrlnav authors: granozio, fabio miletto title: comparative analysis of the diffusion of covid-19 infection in different countries date: 2020-03-18 journal: nan doi: nan sha: doc_id: 128436 cord_uid: xndrlnav the sudden spread of covid-19 outside china has pushed on march 11 the world health organization to acknowledge the ongoing outbreak as a pandemic. it is crucial in this phase to understand what should countries which presently lag behind in the spread of the infection learn from countries where the infection spread earlier. the choice of this work is to prefer timeliness to comprehensiveness. by adopting a purely empirical approach, we will limit ourselves to identifying different phases in the plots of different countries, based on their different functional behaviour, and to make a comparative analysis. 
the comparative analysis of the registered cases curves highlights remarkable similarities, especially among western countries, together with some minor but crucial differences. we highlight how timeliness can largely reduce the size of the individual national outbreaks, ultimately limiting the final death toll. our data suggest that, unfortunately, western governments have not shown the capability to anticipate their decisions based on the experience of countries hit earlier by the outbreak. italy is presently the hardest hit country. its death toll seems bound to rapidly overcome the chinese one. other western countries follow the same route. it is crucial in this phase to understand what countries presently lagging behind in the spread of the infection can learn from countries where the infection spread earlier. the first question we address is: which are the relevant numerical signatures to be monitored to check how effectively a country is acting, compared to other countries, in containing the infection? here we show that an answer to this question can be given without relying on specific epidemiological expertise, based instead on a simple numerical analysis of public data available on the internet. it is widely acknowledged that the comparison of curves from different countries is made difficult by the different ways the detection of the virus is addressed. in particular, the ratios of tested population to total population, among the countries addressed in this paper, range from about 5,000/million in the case of south korea, to 1,400/million for italy, to less than 70/million in the case of the usa 1 . therefore, not only might the real number of infected people largely exceed the registered cases, but the ratio (registered/total cases) might also change from country to country. this difference is partly mitigated by two observations: -in the specific spirit of this work, we will compare countries in the same stage of the epidemic. considering the delay of the outbreaks (korea is roughly 10 days ahead of italy and 20 days ahead of the states, as will be discussed later in detail), italy is roughly following the same testing curve as korea, while the usa lag behind by less than one order of magnitude. -we should assume that in advanced countries most symptomatic patients are counted among the registered cases within a few days. these are exactly the cases we mostly want to focus on. the hidden background of asymptomatic patients, though playing a role in determining the disease spread, is a less relevant datum in foreseeing the final death count. at the end of this analysis, remarkable similarities are found, especially among western countries, together with some minor differences. the extent and relevance of the observed similarities for the case of western countries will justify, ex post, the adopted approach. epidemiological curves are typically believed to follow the stochastic logistic model. nevertheless, this assumption cannot take into account situations in which the infected population reacts through drastic changes of its collective behaviour, thus changing the virus reproductive number in the course of the outbreak. a general model for covid-19 diffusion would require knowledge of all the specific virus containment measures adopted in each single country, their dates, and their quantitative effect on the reproductive number. this is so far beyond our present understanding. the choice of this work is to prefer timeliness to comprehensiveness.
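one simple way to turn the "numerical signature" question into something that can be monitored daily is to track the running doubling time of the cumulative registered-cases curve, assuming locally exponential growth. the sketch below illustrates this on a placeholder series; it is not the authors' code, and the numbers are invented for illustration.

```python
import numpy as np
import pandas as pd

def doubling_time(cum_cases, window=5):
    """running doubling time (in days) from a cumulative registered-cases series,
    assuming locally exponential growth n(t) ~ exp(t / tau)."""
    log_n = np.log(cum_cases.astype(float))
    growth = log_n.diff(window) / window      # average daily log-growth over the window
    return np.log(2) / growth                 # doubling time = ln 2 / growth rate

# illustrative series: roughly 1-day doubling at first, slowing afterwards
cases = pd.Series([100, 200, 400, 800, 1500, 2600, 4200, 6300, 8800, 11500])
print(doubling_time(cases).dropna().round(2).tolist())
```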
by adopting a purely empirical approach, we will limit ourselves to identifying different phases in the plots of different countries, based on their different functional behaviour, and to making a comparative analysis. for more complex approaches, the human data analysis time could easily exceed the obsolescence time of the dataset, which is of the order of a couple of days. the case of countries beyond the initial phase of the outbreak. the source of data for this work is the csse covid-19 dataset 2 . we analyse here the data of three of the countries that had registered, as of march 15, the highest cumulative number of registered cases, i.e. china, italy, and south korea. we neglect iran, also because of the lack of information about the number of tested people. the three countries are at different stages of the outbreak. china exceeded the number of 1000 registered cases in the hubei province on january 24 and presently reports few new cases per day. korea exceeded the number of 1000 registered cases on february 27 and presently reports few new cases per day. italy exceeded the number of 1000 registered cases only a few days later but, in spite of a recent slowdown, the end of the exponential phase, if confirmed, is happening in the present days. the chinese plot shown in fig. 1a stops on february 11. this is due to the change of criterion adopted in the counting of infected patients performed in china on february 12 3 . the reported data are fortunately widely sufficient for the purpose of this work. as for italy, the latest data are aligning to a linear curve, which might well be the inflection point, anticipating a smooth transition to a sublinear behaviour. the absolute number of new daily cases in the country, about 3,500 on march 15, is still very high. by comparing the linear coefficients of the italian and korean blue curves, we observe in fact that the former exceeds the latter by a factor of 7. the comparison of the plots shows that, in spite of the extremely fast growth rate (τ = 2.4 d, corresponding to a doubling time of one day), the rapid response of korean society made it possible to switch the growth to a slower rate before reaching 500 registered infected people. this rapidity is confirmed by the observation that, at the time when only 21 infections had been found, on february 18, the republic of korea had already tested over 8000 citizens 4 . italy had instead 322 registered cases by the time it reached the same number of tests, on february 25 5 . the persisting fast growth rate in italy raises major concerns and suggests that the cumulative final number of cases might exceed the chinese one. comparison between western countries. the italian case seems to correspond to the worst-case scenario among the ones analysed above. it suggests that while korea implemented a faster reaction than china, profiting from the lesson learned through the experience of the neighbouring country, italy apparently showed a longer response time than both asian countries, in terms either of diagnosis, or of governmental decision, or else of change of individual habits. it is of great importance to verify how well the italian lesson was learned by other western countries. the plots reported in fig. 2a show that the early growth of the registered cases in these countries follows an exponential behaviour with a characteristic time of about two days. this timescale seems to be characteristic of the covid-19 "free expansion" in all these countries or, to state it more prudently, of the rate at which cases are detected. we remind in this context that, in the absence of large-scale screening programs, most infected people are tested after showing symptoms, i.e.
about 5 to 14 days after infection. therefore, the registered cases curves map the history of the past behaviour of the infected population. we observe that, still at the date of march 15, the spanish evolution is correctly described by the red exponential curve. the us curve shows a minor deviation, which might well indicate a switch to phase #2. france has switched to the second phase, with exponent τ = 4.5 d. germany has also switched to the second phase, with exponent τ = 4.2 d. the plot in fig. 3 gathers all the curves above in the same plot, comparing them with the italian curve. among the possible ways to plot the data together, the most significant one, by far, was found to be the application of a relative shift in time, in order to "synchronize" the different starting times of the outbreak. we remark that, by normalizing the number of infected people to the overall population, the us plot would have been shifted backwards by 5-6 days. when plotted with the appropriate relative time scale (it as reference; de, fr -9 d; es -10 d; us -11 d), the data show how early or late the different countries deviated from the red exponential "phase #1" curve with τ ~2.0 d, d ~2.0 d. the violet curve fitting the italian "phase #2" is also shown. it can be qualitatively deduced that france, germany and probably the united states, on this particular conventional time scale, have switched to the violet curve roughly at the same time as italy did. spain is potentially running towards a worse scenario, although the last points hint at a possible alignment with the same violet curve, albeit a factor of two above. the plot in fig. 4 graphically highlights the importance of early reactions. the numerical history of the outbreak in korea is compared with the hypothetical outbreak evolution (dotted lines) that would have resulted from a two-day delay in the transition from the red to the violet curve, rigidly reflected in a two-day shift in the transition to the blue curve. according to the estimation in fig. 4 , the transition to phase #3 would have taken place on march 3 with an infected population of about 14,500 people, about 3.5 times higher than the actual number of registered cases at the real transition, which happened on march 1st. the same scale factor would apply today, within our hypotheses, to the actual infected population. we attempted an elementary, real-time analysis of the covid-19 diffusion data updated to march 16. timeliness was preferred to comprehensiveness. important information has been extracted from the data, but major caution is needed in deriving general and far-reaching conclusions. the inhomogeneity in data acquisition rates across countries and the huge background of undetected, presumably asymptomatic, infected patients are two major sources of uncertainty. the criterion applied in the plot in fig. 3 is highly instructive but, to some extent, arbitrary. with all due prudence related to the uncertainties above, we believe the present analysis is an excellent and timely starting point for further studies on the delayed effects that the responses adopted by different countries have on the curves. such responses include both individual changes of habit (hand hygiene, social distancing by one's own choice) and restrictions imposed by governments (closing of schools, constraints on the mobility of citizens).
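the time-shift "synchronization" used for fig. 3 can be reproduced in a few lines: each national curve is shifted so that a common reference point falls at day zero and can then be compared against a reference exponential with characteristic time τ. the threshold-crossing rule used below is an assumption made for illustration (the paper chose its relative shifts empirically), and the series are placeholders rather than csse data.

```python
import numpy as np
import pandas as pd

def synchronize(curves, threshold=100):
    """shift each country's cumulative-case series so that day 0 is the first day
    the count reaches `threshold`; returns a dict of series indexed by shifted day."""
    aligned = {}
    for country, series in curves.items():
        above = series[series >= threshold]
        if above.empty:
            continue
        t0 = above.index[0]
        shifted = series.copy()
        shifted.index = series.index - t0      # integer day index relative to t0
        aligned[country] = shifted
    return aligned

# placeholder curves (integer day index, cumulative registered cases)
curves = {
    "italy":   pd.Series([20, 60, 150, 320, 650, 1300, 2500, 4600, 8000]),
    "germany": pd.Series([5, 12, 30, 70, 160, 350, 750, 1500, 3000]),
}
aligned = synchronize(curves)

tau = 2.0                                      # reference "phase #1" characteristic time, in days
reference = aligned["italy"].loc[0] * np.exp(np.arange(0, 6) / tau)
for country, s in aligned.items():
    print(country, s.loc[0:5].tolist())        # first days after reaching the threshold
print("reference exp(t/tau):", reference.round(0).tolist())
```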
the korean example clearly shows that early diagnosis of the first infected patients and timeliness in the response can largely reduce the size of the outbreak, ultimately limiting the final death toll. our data suggest that western governments have not shown the capability to anticipate their decisions based on the experience of countries hit earlier by the outbreak. our hope is that this work can contribute to triggering early and appropriate responses to the covid-19 pandemic. how many tests for covid-19 are being performed around the world? our world in data coronavirus cases: statistics and charts -worldometer quanti test per il coronavirus abbiamo fatto key: cord-025337-lkv75bgf authors: vakkuri, ville; kemell, kai-kristian; jantunen, marianna; abrahamsson, pekka title: “this is just a prototype”: how ethics are ignored in software startup-like environments date: 2020-05-06 journal: agile processes in software engineering and extreme programming doi: 10.1007/978-3-030-49392-9_13 sha: doc_id: 25337 cord_uid: lkv75bgf artificial intelligence (ai) solutions are becoming increasingly common in software development endeavors, and consequently exert a growing societal influence as well. due to their unique nature, ai based systems influence a wide range of stakeholders with or without their consent, and thus the development of these systems necessitates a higher degree of ethical consideration than is currently carried out in most cases. various practical examples of ai failures have also highlighted this need. however, there is only limited research on methods and tools for implementing ai ethics in software development, and we currently have little knowledge of the state of practice. in this study, we explore the state of the art in startup-like environments where majority of the ai software today gets developed. based on a multiple case study, we discuss the current state of practice and highlight issues. the cases underline the complete ignorance of ethical consideration in ai endeavors. we also outline existing good practices that can already support the implementation of ai ethics, such as documentation and error handling. ai systems have become increasingly common in software engineering projects [1] . while much of the media attention is on flashier systems such as autonomous vehicles, less high-profile ai systems such as decision-making support systems have become increasingly widespread in various organizations. ai systems often operate under the surface in the form of e.g. recommendation algorithms, making the high-profile systems in the middle of the media hype only the tip of the iceberg. over the last two decades, progress on ai has been accelerating rapidly. ai systems are now widely used in various areas and for various purposes. examples include medical systems [2] , law enforcement [3] , and manufacturing industries and industry 4.0 [4] , among numerous others. as the field progresses, the already impressive potential of ai systems becomes even larger, including applications such as general ai systems, the likes of which are already being developed by the technology giants such as alphabet. it is exactly because of this impressive potential and impact of these systems, especially in the future, that their potential negative impacts should also discussed more. ai systems are ultimately still software. they are affected by largely the same requirements as any other software system. 
ai development projects are still for the most part conventional software engineering, with machine learning related tasks only comprising a small portion of these projects [5] . however, ai systems are unique in terms of their effects on various stakeholders to the point where they can even exert society-wide influence. moreover, these stakeholders often have little power in opting out of using these systems. e.g. it is difficult to avoid having a firm filter your job application using ai or trying to avoid being monitored using ai-based surveillance systems if such systems are in place in the area. various system failures have already highlighted some of the potential issues these systems can have in practice. past incidents that have received global media coverage, even smaller incidents can be costly for the affected organization(s). for example, the national finnish broadcasting company, yle 1 , utilized ai for moderation purposes in its services. having already changed its processes to suit the automation of the moderation, the organization ultimately ran into problems with the ai moderator system. though the software was working fine on the technical level, the socio-ethical issues forced the organization to revert back to human moderators. many of these issues are ultimately rooted in ethics. ai ethics has thus become a new non-functional requirement to address; an -ility among the likes of quality, maintainability, and scalability. existing methods have focused on tackling these functional and non-functional requirements. however, no such methods currently exist for ai ethics [6] , with the existing tools and methods largely being technical and limited to narrow contexts in ml as opposed to being project-level methods. in the absence of methods, how are ethics currently implemented? much of the current literature in the area has been theoretical, and our understanding of the state of practice in ai ethics is currently lacking. [6] ai ethics literature discusses various aspects of ai ethics that should be taken into account, but bridging the gap between research and practice in the area remains an on-going challenge [7, 8] . guidelines for implementing ai ethics exist, but their effect on the start of practice remains unknown. thus, to begin bridging this gap in the area, we conduct an empirical study to help us understand the current state of practice. we do so by means of a multiple case study of three projects focusing on healthcare systems. the goal of this study is two-fold: (1) to help us understand the current state of practice in ai ethics; and (2) to discover existing good practices that might help in implementing ai ethics. out of these two goals, the first is a theoretical contribution while the second one is a practical one. the specific research question of the paper is as follows: rq: how are ai ethics taken into consideration in software engineering projects when they are not formally considered? ethics in software development and interactive systems design in general has a history of over 30 years. for example, bynum [9] introduced the idea of adapting human values in design before the rise of human computer interaction and other human-centric paradigms. theoretically grounded approaches such as value sensitive design (vsd) and its variants have provided tools to design technology that takes into account human values in the design process [10, 11] . 
as more progress is made in the field of ai systems, old theoretical scenarios in ai ethics are slowly becoming reality. this calls for new methods to manage the ethical issues arising from these new systems [7, 12] . indeed, vallach and allen [12] argue that ai and ai-based systems produce new requirements to consider. specifically, they propose that designers implicitly embed values in the technologies they create [12] . ai and other complex systems force designers to consider what kind of values are embedded in the technologies and also how the practical implementation of these values could be carried out and how these systems could be governed [13] . yet, little is currently known about software development practices and methods in the context of ai ethics, as empirical studies in the area are scarce. our results from an existing study suggest that ai ethics are seldom formally implemented in se projects, [14] . similarly, there are currently no project-level methods that could aid in implementing ai ethics [6] . on the other hand, various tools that can support specific elements of ai ethics do exist, such as tools for managing machine learning [6] . however, they do not help developers implement ai ethics in general. in this light, it can be said that ai ethics has hardly been incorporated into mainstream se literature yet. the reason why ai ethics has received little attention in the prior engineering literature is three-fold: 1) prior research has been predominantly philosophical, 2) the field has not sensed the need to address ethical concerns and 3) thus it has not been part of the education system. though some practice-focused research does exist (e.g. [15] ), most of the research on ai ethics has been conceptual and theoretical in nature. these studies have e.g. focused on defining ai ethics in a practical manner through various constructs in the form of values. for the time being, this discussion on defining ai ethics has come to center around four values: transparency [16, 17] , accountability [8, 16] , responsibility [16] , and fairness (e.g. [18] ). not all four of these values are universally agreed to form the core of ai ethics, however, as we discuss in the following section while presenting our research framework. following various real-life incidents out on the field (e.g. amazon's biased recruitment ai 2 ), ai ethics has also begun to spawn public discussion. this has led to governments, standardization institutions, and practitioner organizations reacting by producing their own demands and guidelines for involving ethics into ai development, with many standards and regulations in the works. countries such as france [19] and germany [20] have emphasized the role of ethics in ai, and on an international level the eu began to draft its own ai ethics guidelines which were presented in april 2019 [21] . moreover, iso has founded its own ethical, trustworthy ai in iso/iec jtc 1/sc 42 artificial intelligence subcommittee [22] . finally, some larger practitioner organizations have also presented their own guidelines concerning ethics in ai (e.g. google [23] and microsoft [24] guidelines). thus far, these various attempts to bring this on-going academic discussion out on the field have been primarily made in the form of guidelines and principles. 
out of these guidelines, perhaps the most prominent ones up until now have been the ieee guidelines for ethically aligned design (ead), born from the ieee global initiative on ethics of autonomous and intelligent systems alongside its ieee p7000™ standards working groups, which were branded under the concept of ead [8] . existing literature has shown us that guidelines and principles in the field of ict ethics do not seem to be effective. mittelstadt [25] argue that ai developers lack the professional norms and methods to translate principles into practice in successful way. to this end, mcnamara et al. [26] also argue based on empirical data that the acm ethical guidelines 3 had ultimately had very little impact on developers, who had not changed their ways of working at all. in this light, this is likely to be the case with the aforementioned ai ethics guidelines as well, as mittelstadt suggest [25] . this notion is further supported by morley et al. [6] who argue that developers focused on practicality are unlikely to adopt them when the competitive advantage of ead is unclear. to assist in the data collection and analysis in this study, we devised a research framework based on prominent literature in the area. this research framework and the justifications behind it are further discussed in an existing paper [27] (fig. 1 ). as the basis of the framework, we utilized the art principles of dignum [16] , which consist of accountability, responsibility, and transparency. these have been central constructs in the area, having also been featured in the ead guidelines of ieee. transparency is required for accountability and responsibility (line 1.c), as we must understand why the system acts in a certain fashion, as well as who made what decisions during development in order to establish accountability [17] . whereas accountability can be considered to be externally motivated, closely related but separate construct responsibility is internally motivated. the concept of accountability holds a key role in aiming to prevent misuse of ai and in supporting wellbeing through ai [8] . accountability refers to determining who is accountable or liable for the decisions made by the ai. dignum [16] in their recent works defines accountability to be the explanation and justification of one's decisions and one's actions to the relevant stakeholders. in the context of this research framework, accountability is used not only in the context of systems, but also in a more general sense. we consider, e.g., how various accountability issues (legal, social) were considered during development. dignum [16] defines responsibility as a chain of responsibility that links the actions of the systems to all the decisions made by the stakeholders. we consider it to be the least accurately defined part of the art model, and thus have taken a more comprehensive approach to it in our research framework. according to the ead guidelines, responsibility can be considered to be an attitude or a moral obligation for acting responsibly [8] a simplified way of approaching responsibility would be for a developer to ask oneself e.g. "would i be fine with using my own system?". in addition to the art principles, we utilized the three ai ethics categories presented by dignum [28] to make these constructs more practical. dignum suggests that ai ethics can be divided into: • ethics by design (integration of ethical reasoning capabilities as a part of the behaviour of artificial autonomous system, e.g. 
ethical robots); • ethics in design (the regulatory and engineering methods supporting ethical implications of ai systems); and • ethics for design: (codes of conduct, standards, and certification processes that ensure the integrity of developers and users) [28] . in this paper, we focus on the ethically aligned development process, and therefore the last two categories were included into the research framework. finally, aspects of commitment were utilized in the framework to aid data analysis. specifically, we utilized the commitment net model of abrahamsson [29] to approach the implementation of ethics into practice and have an explaining theoretical framework to examine ethics role to developers. from this model, we focused on concerns and actions. concerns were analyzed to understand what ethical issues were of interest to the developers. actions were then studied to understand how these concerns were actually tackled, or whether they were tackled at all. in commitment net model, actions are connected to concerns because when actions are taken, they are always driven from concerns [29] . on the other hand, however, concerns can exist without any actions taken to address them. the dynamic between actions and concerns was considered a tangible way to approach the focus of this study: practices for implementing ai ethics. developers actions could be likened to practices that were taking during the development. on the other hand, analyzing the concerns that developers have opens a view to understanding e.g. whether the developers perhaps wanted to implement ethics but were unable to do so. this section is split into three subsections. first, we discuss the cases of the case study. in the second and third ones we discuss the data collection and analysis, respectively. we conducted a multiple case study featuring three case projects. in all of the case projects, ai systems were being developed for the healthcare sector. these cases are outlined in the table below (table 1) . we chose to utilize a qualitative case study approach due to the exploratory nature of the topic, as the research area is novel as far as empirical studies are concerned. healthcare cases were selected due to the assumption that ethical consideration would be more common in healthcare-related projects due to the nature of the area in closely dealing with human well-being (e.g. the tradition of bio and medical ethics). indeed, healthcare systems can, for example, influence the decisions made by doctors or their patients related to the health of the patients. moreover, due to the emphasis on taxfunded public healthcare in finland, where the cases were from, the area is particularly regulated. these regulations impose some ethical requirements on software systems as well, especially in relation to handling patient data, which is considered particularly sensitive data from a legal point of view. in the paper title, we characterize these case projects as being startup-like because the projects shared various characteristics typically associated with software startups. first, agile methods were commonly utilized in the projects. secondly, the projects were all characterized by notable time pressure. thirdly, the projects operated with scarce resources. fourthly, the cases were centered around the development of functional prototypes, which were intended to as proof-of-concept type artifacts. however, the prototypes were being developed with real customers and tested in practice. 
finally, the projects exhibited exploratory approaches that focused on experimentation. currently, much of the on-going ai development is happening in startups [1] , even if the multinational organizations receive much media coverage in relation to ai. in characterizing them as startup-like, we consider them to be representative of the current ai development projects. data from the cases were collected using semi-structured interviews [30] . this interview strategy enabled the interviews to be conducted in a way that allowed for flexibility from the interview questions, but without steering too far from the topic. the interview instrument used in the interviews can be found externally as a reference 4 . all interviews were conducted as f2f interviews and the audio was recorded for transcription. the analysis was conducted using the transcripts. the interviews were conducted in finnish. this was done so that the respondents would not give shorter responses due to being uncomfortable with communicating in english, especially while being on record. the respondents from the cases were either developers or managers. as we wanted to focus on development practices and project issues, we focused on the personnel directly involved with the practical development issues in the projects. the respondents are outlined in the table in the previous subsection. in terms of experience, respondents 4, 5, 7, and 8 were junior developers. respondents 3 and 6, on the other hand, were senior developers. respondent 1 was a junior data scientist. we analyzed the data in two phases. first, we utilized a grounded theory (heath [31] ) inspired approach to code the transcripts quote by quote for each interview. this process was carried out iteratively as the list of codes was updated during the process. this approach was chosen due to the lack of existing studies on the current state of practice in the area. in the second phase, we utilized the commitment net model of abrahamsson [29] to then further analyze and categorize the coded content. we utilized the model by focusing on the concerns and actions of the developers. the concerns and actions of each respondent were compared across cases in search of recurring concerns and actions between cases and respondents. by evaluating the relationships between the actions taken in development the development process and the concerns of the developers, we could better understand the motivation behind the actions. similarly, we could also see which concerns did not lead to any actions, pointing to a lack of commitment towards tackling those concerns. the data were then compared with the research framework again to evaluate how ai ethics were implemented in each project. actions were the emphasis here, as the focus of this study was on tangible implementation of ai ethics and how it was carried out in terms of tools, practices, or methods. however, we also highlighted interesting findings in relation to the mere concerns related to ai ethics. this section is split into four subsections. the first three feature the analysis split between the accountability, responsibility and transparency constructs. the final subsection summarizes the analysis. we highlight our findings as primary empirical conclusions (pecs). during the analysis, we use quotes from the interviews to elaborate on the topic at hand. however, it should be noted that the conclusions are not drawn merely based on these individual citations. 
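to make the second analysis phase more concrete, the following toy sketch (not the authors' actual analysis scripts; all rows, codes, and column names are made up for illustration) shows one way coded interview quotes could be tabulated as concerns versus actions per case, so that concerns that never map to any action become visible.

```python
# toy illustration of cross-tabulating coded quotes as concerns vs. actions per case
import pandas as pd

coded = pd.DataFrame([
    # case, respondent, code, kind  -- made-up example rows
    ("A", "r3", "transparency of algorithm", "action"),
    ("B", "r5", "risks of decision support", "concern"),
    ("B", "r5", "error handling in code",    "action"),
    ("C", "r7", "unclear responsibility",    "concern"),
], columns=["case", "respondent", "code", "kind"])

# counts of coded concerns and actions per case
per_case = coded.pivot_table(index="case", columns="kind", values="code",
                             aggfunc="count", fill_value=0)

# concerns with no matching action anywhere point to a lack of commitment
codes_by_kind = coded.groupby("kind")["code"].apply(set)
unaddressed = codes_by_kind.get("concern", set()) - codes_by_kind.get("action", set())

print(per_case)
print(sorted(unaddressed))
```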
the concerns of the developers related to responsibility were varied, but ultimately detached from practice as far as concerns related to ai ethics were considered. the concerns the developers discussed in relation to responsibility were simply very practical concerns related to internal project matters or delivering a high quality product: "responsibility on reporting and keeping the project on schedule" (r6) pec1. developers feel most responsibility towards tackling problems related to software development, such as finding bugs, meeting project goals. on the other hand, as the interviews progressed, the developers did also express some concerns towards various ethical issues. however, these concerns were detached from their current work. they did not affect the way they worked, and the developers felt that these types of concerns were not relevant during development. the presence of concerns in the absence of actions to address those concerns pointed towards a lack of commitment on this front. "it is just a prototype" (r8) "i do my best" (r5) "but this is a prototype, an experiment, just to show people that you can do this type of thing. this doesn't really have any responsibility issues in it." (r1) pec2. on a personal level, developers are concerned about the ethical aspects of product development. however, little is done to tackle these concerns. furthermore, it was evident that in none of the cases had the hypothetical effects of the system on the stakeholders been discussed. to give a practical example, a system potentially affecting memory illness diagnoses clearly has various effects on its potential users, especially when the test can be taken without supervision. yet, the developers of this particular tool also felt that their users would not be curious about the workings of the system. they considered it sufficient if the responsibility was outsourced to the user and it was underlined that the system does not make the diagnosis but simply advises doctors. the developers did not consider the potential harm of the system past the tangible, physical harm potential of the systems. for example, stress or other negative effects on users and other stakeholders were not considered. in all three cases, the respondents did not consider the system to have had any potential of causing physical harm, and thus did not consider the system to have any notable harm potential at all. "nobody wants to listen to ethics-related technical stuff. no five hour lectures about it. it's not relevant to the users" (r5) "i don't really understand what it [responsibility] has to do with product development. we developers are all responsible." (r7) "what could it affect… the distribution of funds in a region, or it could result in a school taking useless action… it does have its own risks, but no one is going to die because of it" (r1) pec3. responsibility of developers is unclear. case a highlighted the potential importance of mathematical expertise. the team had internal mathematical capabilities that allowed them to develop their own algorithms, as well as to better understand third party components, in order to have achieve a higher standard of transparency. they utilized algorithms they were familiar with and which they understood on an in-depth level. thus, the team considered themselves to be able to understand why the system made certain decisions in certain situations. this underlines the importance of mathematical skills in preventing the birth of black boxes in ai development. 
"in that sense it's not really a black box as we can understand what's going on in there just fine, and we can show the nodes and what affects them. it's a very transparent algorithm." (r3) the other two cases utilized existing ai solutions. they did not have an in-depth understanding of the technologies they were utilizing, which resulted in their systems being (partially) black boxes. they understood any components created by the team but did not have a full understanding of the third party components they had used as a base. this presents problems for feature traceability. even though transparency of algorithms and data was not present in two of the cases, the developers in case b nonetheless acknowledged its potential importance however, as it was not considered a formal requirement in the projects, the managers did not devote resources towards pursuing it. even in case a, transparency was not produced as a result of ethical goals but out of business reasons. "we have talked about the risks of decision-making support systems but it doesn't really affect what we do" (r5) pec5. developers recognize transparency as a goal, but it is not formally pursued. on the other hand, in relation to transparency of systems development, all three cases displayed transparency. by having formal decision-making strategies, they were able to keep track of higher-level decisions related to the system. through proper documentation, they were able to keep track of decisions made on the code level. version control also assisted in this regard, making it clear who made what changes and when in retrospect. there were thus various existing practices that produced transparency of systems development. two of the cases also acknowledged the effects of team size on transparency of systems development. they noted that, in addition to documentation practices, the small team size itself made it easy to keep track of the actions of individual developers even in an ad hoc manner. established se practices, such as code documentation and code review, support transparency of systems development. some aspects of accountability were clear points of focus in the projects, namely ones related to security in terms of general information security as well as data management. the respondents were aware of being in possession of personal data, given that they developed healthcare solutions, and were concerned with keeping it secure. they mentioned taking measures to keep the data secure from potentially malicious actors, and they were aware that they would have to take measures to act in accordance with laws and regulations in the area. however, in some cases they had not done so yet. "it's really important how you handle any kind of data, that you preserve it correctly, among researchers, and don't hand it out to any government actors. for example, many of the data packages have kind of interesting data and it can't get into the wrong hands. i personally can't see any way to harm anyone with the data we have though" (r2). "we haven't really paid much attention to the [data] safety aspects yet… it hasn't really been a main focus. there's probably a lot of things we have to take into account [eventually]" (r5). the ethical concerns they had in relation to accountability were in general largely related to existing areas of focus in software development. for example, error handling was one aspect of accountability the respondents were particularly concerned with. 
this was tied with their goal of making quality software, which they considered their responsibility as professionals. the respondents could, to this end, discuss what tangible practices they utilized to deal with error handling. pec7. developers feel accountable for error handling and have the means to deal with it. however, error handling was largely considered from the point of view of writing code and testing it in a laboratory setting. i.e. the system was considered error free if there were no red lines in the code in the ide during development. only case company b discussed measures they had taken to monitor errors in use. furthermore, potential misuse (e.g. a prankster drawing a horizontal white line on the pavement to intentionally confuse autonomous vehicles) and error scenarios during the operational life of the system had not been actively considered in any of the case projects. "the calculations are made in the algorithms, so it doesn't really make mistakes" (r2) pec8. product misuse and error scenarios are only considered during development. they are not considered in terms of the future operational life of the system out on the field. due to the nature of machine learning, ai systems learn as they are taught with new data or as they collect it themselves while operating out on the field. from this arises the potential issue of unexpected behavior as a result of machine learning. none of the respondents had made plans to tackle potential unexpected behavior during the operational life of their system, should such behavior arise. in only one of the projects was the possibility directly acknowledged: "we just put it up for end-users to test and note that this is still being developed" (r7). pec9. developers do not have plans to deal with unexpected behavior of the system resulting from e.g. machine learning or the future expansion of the use context of the system. past the art constructs, we highlight some commonalities between the cases on a more general level while summarizing our findings. in none of the cases were ethics implemented by following a formal method or tool, nor were ethical issues considered directly as ethical issues. rather, any ethical issues tackled in the projects were tackled for practical reasons (e.g. error free software is beneficial from the point of view of customer relations). nonetheless, some of the ethical issues such as error handling and transparency of systems development were tackled in a systematic manner through existing software engineering practices such as code documentation and version control. on the other hand, though ethics were not taken into consideration on a project level, the respondents still exhibited some concern towards the potential socio-ethical issues in the systems. when prompted, they were able to come up with various negative effects the systems could have on different stakeholders. they considered these to be potential real issues, but did not have a way to address these concerns in the absence of tools, practices, and methods for doing so. moreover, they seemed to realize these potential issues only after being directly asked about them in the interviews. this also points to a lack of tools to aid in ethical analyses. in this section, we have collected all the primary empirical conclusions (pec) outlined in preceding analysis section into table 2 . we relate each of these findings to existing literature and discuss their implications in this section. 
we classify each of these pecs based on their contribution into either novel findings, findings that (empirically) validated existing literature, or findings that contradict existing literature. many of our findings underline a gap between research and practice in the area. whereas research on ai ethics alongside various guidelines devised by researchers [8] and practitioners [23, 24] alike has discussed various ethical goals for ai systems, these goals have not been widely adopted out on the field. in this sense, we consider some of our findings (pecs 4, 5, 8, and 9) to contradict existing literature. for example, extant literature has highlighted the importance of transparency of algorithms and data [15] [16] [17] . without understanding how the system works, it is impossible to establish why it malfunctioned in a certain situation, which may e.g. be pivotal in understanding the causes of an accident that resulted in material damage [15] . our findings point towards transparency being largely ignored as a goal (pec5). existing system components are utilized as black boxes, and developers do not see this as a notable problem (pec4). we consider pec5 to contradict existing literature in that existing literature has, on multiple occasions, highlighted the importance of transparency in ai systems. yet, out on the field, this importance does not seem to be recognized to the point where it would result in changing development practices. the situation is similar for tackling potential misuse of the systems, error handling during system operations, and handling unexpected system behavior (pec8-9). these goals are included into the ieee ead guidelines [8] . however, none of the case companies took any measures to address these potential issues. on a further note of transparency, however, the lack of emphasis placed on it is also curious in relation to feature traceability in se. for decades, understanding the inner workings of the system was considered key in any se endeavor. yet, in the context of ai systems, the long-standing goal of feature traceability seems to be waning. our findings point towards this being at least partially a result of a lack of mathematical understanding, as the one case company that considered their system to be fully transparent also noted that they fully understood the mathematics behind the algorithms they utilized. in using existing components in their systems, developers may not always understand the algorithms in these components. indeed, in this vein, [32] noted that simply seeing the code is not enough if the algorithm is not understood, or the system is not understood as a whole. though we discovered various examples of ethics not being implemented, we also discovered that various existing and established se practices can be used to implement ai ethics. documentation, version control, and project management practices such as meeting transcripts produce transparency of systems development by tracking actions and decision-making (pec6). similarly, software quality practices help in error handling also in the context of ai ethics (pec7), although they do not specifically account for the errors autonomous systems may face while operating out on the field. while discussing responsibility with the respondents, we also discovered that most of their responsibility was related to producing quality software and meeting project requirements. this validates existing literature in the area of spi (e.g. unterkalmsteiner, [33] ). 
notably, we also discovered that the developers had ethical concerns towards their systems, which is a novel finding in this context (pec2). little is currently known about the state of practice out on the field, although a recent version of the ead guidelines speculated about a gap in the area, which our findings support in relation to most aspects of ai ethics. despite ai ethics largely not being implemented, our findings point towards it partially being a result of a lack of formal methods and tools to implement it. in our data, the reason given by multiple respondents for not actively considering ethical issues was that they were developing a prototype. however, prototypes do influence the final product or service developed based by them, as shown by existing studies [34] . ai ethical issues should be tackled during earlier stages of development as well, seeing as many of them are higher-level design decisions (such as how to carry out machine learning in the system [15] ), which can be difficult to undo later. following this study, as well as a past case study [14] , we suggest that future research seek to tackle the lack of methods and tooling in the area. though developers may be concerned about ethical issues, they lack the means to address these concerns. on the other hand, methods can also raise the awareness of developers in relation to ai ethics, creating concerns where there now are none. in creating these methods, we suggest exploring existing practices that can be used as is or tailored to implement ai ethics, as we have discussed here. given the amount of activity in ai ethics currently, with many governmental actors drafting their own ai ethics guidelines, likely followed by regulations, methods and tools will likely have practical demand in the future. thus, even if one barrier to implementing ai ethics is currently the fact that it is seldom considered a requirement on a project level, regulations and laws can force organizations to take ethics into account. this would inevitably result in a demand for methods in this area, as well as the birth of various in-house ones. finally, in terms of limitations, the most notable limitations of the study stem from the data and the research approach. the qualitative multiple case study approach always poses problems for the generalizability of the data. we acknowledge this as a limitation, although we also refer to eisenhardt [35] in arguing in favor of qualitative case studies, especially in the case of novel research areas. ai ethics, as far as empirical data goes, is a novel area of research. moreover, the multiple case study approach adds some further validity to the data, as we do not base our arguments on a single case. nonetheless, another limitation in the data is also that all the cases were based on finland. for example, the implementation of ai ethics can be more of a focus in us-based companies, as much of the current discussion on ai ethics also originates from the us. one other limitation in the data is that the interviews were conducted in finnish. the constructs such as transparency may not carry the same connotations in finnish as they do in english. this is especially the case with accountability and responsibility, which may not translate in a straightforward manner. however, during the interviews, we sought to clear any misunderstandings related to the constructs with the respondents. the research framework can also be argued to be a limitation. 
as ai ethics is a currently active field in terms of theoretical discussion, the constructs in the area are constantly evolving. the art principles and ead chosen as a basis of the framework were, at the time of writing, some of the most prominent works in the area. the framework ultimately presents but one way of perceiving ai ethics. this paper furthers our understanding of the current state of practice in the field of ai ethics. by means of a multiple case study, we studied the way ai ethics is currently implemented in practice, if it is implemented at all, when it is not formally or systematically implemented in software engineering projects. our findings can be summarized through the following two key takeaways: • even when ethics are not particularly considered, some currently commonly used software development practices, such as documentation, support ead. this is also the case with focusing on information security. • while the developers speculate potential socioethical impacts of the resulting system, they do not have means to address them. thus, from the point of view of software engineering methods and practices, this highlights a gap in the area. while some of the existing common practices support the implementation of some aspects of ai ethics, there are no methods or practices that help implement it on a project-level. further studies on the topic should seek to assist in the practical implementation of ai ethics. singular practices and especially project-level methods are needed to bridge the gap between research and practice in the area. this lack of higher-level methods was also highlighted in a review of tools and methods in the area [6] . open access this chapter is licensed under the terms of the creative commons attribution 4.0 international license (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons license and indicate if changes were made. the images or other third party material in this chapter are included in the chapter's creative commons license, unless indicated otherwise in a credit line to the material. if material is not included in the chapter's creative commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. ai 50: america's most promising artificial intelligence companies artificial intelligence in medicine artificial intelligence for law enforcement: challenges and opportunities industrial artificial intelligence for industry 4.0-based manufacturing systems hidden technical debt in machine learning systems from what to how: an initial review of publicly available ai ethics tools, methods and research to translate principles in-to practices towards moral autonomous systems ethically aligned design: a vision for prioritizing human well-being with autonomous and intelligent systems, first edition flourishing ethics value-sensitive design value sensitive design: applications, adaptations, and critiques. in: van den hoven why machine ethics? 
incorporating ethics into artificial intelligence ethically aligned design of autonomous systems: industry viewpoint and an empirical study stop explaining black box machine learning models for high stakes decisions and use interpretable models instead the ethics of information transparency false positives, false negatives, and false analyses: a rejoinder to "machine bias: there's software used across the country to predict future criminals, and it's biased against blacks for a meaningful artificial intelligence: towards a french and european strategy german federal ministry of transport and digital infrastructure: automated and con-nected driving ethics guidelines for trustworthy ai iso/iec jtc 1/sc 42 artificial intelligence responsible bots: 10 guidelines for developers of conversational ai principles alone cannot guarantee ethical ai does acm's code of ethics change ethical decision making in software development? ai ethics in industry: a research framework ethics in artificial intelligence: introduction to the special issue commitment nets in software process improvement mastering the semi-structured interview and beyond: from research design to analysis and publication developing a grounded theory approach: a comparison of glaser and strauss seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability evaluation and measurement of software process improvementa systematic literature review minimum viable product or multiple facet product? the role of mvp in software startups building theories from case study research. acad key: cord-025886-259357pg authors: mehrotra, sanjay; rahimian, hamed; barah, masoud; luo, fengqiao; schantz, karolina title: a model of supply‐chain decisions for resource sharing with an application to ventilator allocation to combat covid‐19 date: 2020-05-02 journal: nan doi: 10.1002/nav.21905 sha: doc_id: 25886 cord_uid: 259357pg we present a stochastic optimization model for allocating and sharing a critical resource in the case of a pandemic. the demand for different entities peaks at different times, and an initial inventory for a central agency are to be allocated. the entities (states) may share the critical resource with a different state under a risk‐averse condition. the model is applied to study the allocation of ventilator inventory in the covid‐19 pandemic by fema to different u.s. states. findings suggest that if less than 60% of the ventilator inventory is available for non‐covid‐19 patients, fema's stockpile of 20 000 ventilators (as of march 23, 2020) would be nearly adequate to meet the projected needs in slightly above average demand scenarios. however, when more than 75% of the available ventilator inventory must be reserved for non‐covid‐19 patients, various degrees of shortfall are expected. in a severe case, where the demand is concentrated in the top‐most quartile of the forecast confidence interval and states are not willing to share their stockpile of ventilators, the total shortfall over the planning horizon (until may 31, 2020) is about 232 000 ventilator days, with a peak shortfall of 17 200 ventilators on april 19, 2020. results are also reported for a worst‐case where the demand is at the upper limit of the 95% confidence interval. an important finding of this study is that a central agency (fema) can act as a coordinator for sharing critical resources that are in short supply over time to add efficiency in the system. 
moreover, through properly managing risk‐aversion of different entities (states) additional efficiency can be gained. an additional implication is that ramping up production early in the planning cycle allows to reduce shortfall significantly. an optimal timing of this production ramp‐up consideration can be based on a cost‐benefit analysis. while approximately 80% of covid-19 cases are mild, the most severe cases of covid-19 can result in respiratory failure, with approximately 5% of patients requiring treatment in an intensive care unit (icu) with mechanical ventilation (wu & mcgoogan, 2020) . mechanical ventilation is used to save the lives of patients whose lungs are so damaged that they can no longer pump enough oxygen into the blood to sustain organ function. it provides more oxygen than can be delivered through a nasal cannula or face mask, allowing the patient's lungs time to recover and fight off the infection. physicians in italy have indicated that critical covid-19 patients often need to be intubated for a prolonged period of time (15-20 days) (rosenbaum, 2020) , further exacerbating ventilator scarcity. limiting the death toll within the united states depends on the ability to allocate sufficient numbers of ventilators to hard hit areas of the country before infections peak and ensuring that the inventory does not run out. harder hit states (such as new york, michigan, and louisiana) are desperately trying to acquire additional ventilators in anticipation of significant shortages in the near future. yet in the absence of a coordinated federal response, reports have emerged of states finding themselves forced to compete with each other in order to obtain ventilators from manufacturers (state health systems, 2020) . according to new york's governor cuomo, the state has ordered 17 000 ventilators at the cost of $25 000/ventilator, but is expected to receive only 2500 over the next 2 weeks (ny governor, 2020). as of march 31, 2020, according to the u.s. presidential news briefing, more than 8100 ventilators have been allocated by fema around the nation. of these, 400 ventilators have been allocated to michigan, 300 to new jersey, 150 to louisiana, 50 to connecticut, and 450 to illinois, in addition to the 4400 given to new york (march 31 white house briefing, 2020) . going forward, the federal response to the covid-19 pandemic will require centralized decision-making around how to equitably allocate, and reallocate, limited supplies of ventilators to states in need. projections from the institute for health metrics and evaluation at the university of washington, which assume that all states will institute strict social distancing practices and maintain them until after infections peak, show states will hit their peak demand at different time points throughout the months of april and may. many states are predicted to experience a significant gap in icu capacity, and similar, if not greater, gaps in ventilator capacity, with the time point at which needs will begin to exceed current capacity varying by state (ihme, 2020). in response to the above problem, this paper presents a model for allocation and possible reallocation of ventilators that are available in the national stockpile. importantly, computational results from the model also provide estimates of the shortfall of ventilators in each state under different future demand scenarios. 
this modeling framework can be used to develop master plans that will allocate part of the ventilator inventory here-and-now, while allocating and reallocating the available ventilators in the future. the modeling framework incorporates conditions under which part of the historically available ventilator inventory is used for non-covid-19 patients, who also present themselves for treatment along with covid-19 patients. thus, only a fraction of the historical ventilator inventory is available to treat covid-19 patients. the remaining demand needs are met by allocation and re-allocation of available ventilators from fema and availability of additional ventilators through planned production. fema is assumed as the central agency that coordinates state-to-state ventilator sharing. the availability of inventory from a state for re-allocation incorporates a certain risk-aversion parameter. we present results while performing a what-if analysis under realistically generated demand scenarios using available ventilator demand data and ventilator availability data for different u.s. states. an online planning tool is also developed and made available for use at https://covid-19.iems.northwestern.edu (covid-19 planning tool, 2020). this paper is organized as follows. a review of the related literature is provided in section 2. we present our resource allocation planning model, and its re-formulation in section 3. section 4 presents our computational results under different mechanical ventilator demand scenarios for the covid-19 pandemic in the united states. in section 5, we introduce our companion online covid-19 ventilator allocation and sharing planning tool. we end the paper with some discussion and concluding remarks. a medical resource allocation problem in a disaster is considered in xiang and zhang (2016) . victims' deteriorating health conditions are modeled as a markov chain, and the resources are allocated to optimize the total expected health recovery rate and reduce the total waiting time. certain illustrative examples in a queuing network setting are also given in xiang and zhang (2016) . the problem of scarce medical resource allocation after a natural disaster using a discrete event simulation approach is investigated in cao and huang (2012) . specifically, the authors in cao and huang (2012) investigate four resource-rationing principles: first come-first served, random, most serious first, and least serious first. it is found that without ethical constraints, the least serious first principle exhibits the highest efficiency. however, a random selection provides a relatively fairer allocation of services and a better trade-off with ethical considerations. resource allocation in an emergency department in a multiobjective and simulation-optimization framework is studied in feng, wu, and chen (2017) . simulation and queuing models for bed allocation are studied in vasilakis and ei-darzi (2001) and gorunescu, mcclean, and millard (2002) . the problem of determining the levels of contact tracing to control spread of infectious disease using a simulation approach to a social network model is considered in armbruster and brandeau (2007) . a linear programming model is used in investigating the allocation of hiv prevention funds across states (earnshaw, hicks, richter, & honeycut, 2007) . this paper suggests that in the optimal allocation, the funds are not distributed in an equitable manner. 
a linear programming model to derive an optimal allocation of healthcare resources in developing countries is studied in flessa (2000) . differential equation-based systems modeling approach is used in araz, galvani, and meyers (2012) to find a geographic and demographic dependent way of distributing pandemic influenza vaccines based on a case study of a/h1n1 pandemic. in a more recent covid-19-related study, kaplan (2020) proposes a probability model to estimate the effectiveness of quarantine and isolation on controlling the spread of covid-19. in the context of ventilator allocation, a conceptual framework for allocating ventilators in a public emergency is proposed in zaza et al. (2016) . the problem of estimating mechanical ventilator demand in the united states during an influenza pandemic was considered in meltzer, patel, ajao, nystrom, and koonin (2015) . in a high severity pandemic scenario, a need of 35 000-60 500 additional ventilators to avert 178 000-308 000 deaths was estimated. robust models for emergency staff deployment in the event of a flu pandemic were studied in bienstock and zenteno (2015) . specifically, the authors focused on managing critical staff levels during such an event, with the goal of minimizing the impact of the pandemic. effectiveness of the approach was demonstrated through experiments using realistic data. a method for optimizing stockpiles of mechanical ventilators, which are critical for treating hospitalized influenza patients in respiratory failure, is introduced in huang et al. (2017) . in a case-study, mild, moderate, and severe pandemic conditions are considered for the state of texas. optimal allocations prioritize local over central storage, even though the latter can be deployed adaptively, on the basis of real-time needs. similar to this paper, the model in huang et al. (2017) uses an expected shortfall of ventilators in the objective function, while also considering a second criteria of total cost of ventilator stockpiling. however, the model in huang et al. (2017) does not consider distribution of ventilators over time. in the case of covid-19, the ventilator demand is expected to peak at different times in different states, as the demand for each state has different trajectories. only forecasts are available on how the demand might evolve in the future. in this paper, we assume that the planning horizon is finite, and for simplicity we assume that reallocation decisions will be made at discrete times (days). under certain demand conditions, the ventilators may be in short supply to be able to meet the demand. our model is formulated as a stochastic program, and for the purpose of this paper, we reformulate and solve the developed model in its extensive form. we refer the reader to birge and louveaux (2011) and shapiro, dentcheva, and ruszczyński (2014) for a general description of this topic. in this section, we present a multiperiod planning model to allocate ventilators to different regions, based on their needs, for the treatment of critical patients. we assume that the demand for ventilators at each planning period is stochastic. we further assume that there is a central agency that coordinates the ventilator (re)location decisions. the ventilators' (re)location is executed at the beginning of each time period. once these decisions are made and executed, the states can use their inventory to treat patients. 
both the federal agency and the states have to decide whether to reserve their inventory in anticipation of future demand or share it with other entities. before presenting the formulation, we list the sets, parameters, and decision variables that are used in the model.
• sets
- N: states (regions), indexed by n ∈ N ≔ {1, …, |N|},
- T: planning periods, indexed by t ∈ T ≔ {1, …, |T|},
- Ω: ventilators' demand scenarios, indexed by ω ∈ Ω ≔ {1, …, |Ω|}.
• deterministic parameters
- y_n: the initial inventory of ventilators in region n ∈ N at time period t = 0,
- I: the initial inventory of ventilators in the central agency at the beginning of time period t = 0,
- Q_t: the number of ventilators produced during the time period t − 1 that become available at the beginning of time period t ∈ T, for t ≥ 1,
- γ_n: the percentage of the initial inventory of ventilators in region n ∈ N that cannot be used to care for the patients at the critical level,
- τ_n: the percentage of the initial inventory of ventilators in region n ∈ N that the region is willing to share with other regions, among those that can be used to care for patients at the critical level,
- ρ_n: the risk-aversion of region n ∈ N to send their idle ventilators to the central agency to be shared with other regions.
• stochastic parameters
- d^ω_n,t: the number of patients in region n ∈ N at the critical level that need a ventilator at the beginning of time period t ∈ T under scenario ω ∈ Ω,
- p_ω: probability of scenario ω ∈ Ω.
• decision variables
- x_n,t: the number of ventilators reallocated to region n ∈ N by the central agency at the beginning of time period t ∈ T,
- z^ω_n,t: the number of ventilators reallocated to the central agency by region n ∈ N at the beginning of time period t ∈ T under scenario ω ∈ Ω,
- y^ω_n,t: the number of ventilators at region n ∈ N that can be used to care for the patients at the critical level at the end of time period t ∈ {0} ∪ T under scenario ω ∈ Ω,
- s^ω_t: the number of ventilators at the central agency at the end of time period t ∈ {0} ∪ T under scenario ω ∈ Ω.
for notational convenience, we also define the corresponding vectors of these quantities; we may drop the superscript ω ∈ Ω from this notation and use the same symbol with a tilde to denote that these parameters are stochastic. for example, we may write d̃ for the random demand; the decision vectors are defined similarly. in this section, we assume that there is no lead time between sending a ventilator by an entity (a region or the central agency) and delivery by another entity. with this assumption, the planning model to minimize the expected shortage of ventilators in order to treat patients at the critical level is formulated as the two-stage stochastic program (1)-(2). we now explain the model in detail. in the first stage, the central agency makes the "here-and-now" decisions x before the stochastic parameters d̃ are realized. as captured in (1a), the goal of the central agency is to minimize the expected total shortage of ventilators over all time periods t ∈ T and all regions n ∈ N. the objective also includes a unit cost of allocating a ventilator by the central agency to a state at a given time. this cost can be set to zero, or set to a small value; in our computations we set it to 0.01. in the second stage, once the stochastic parameters d̃ are realized, the "wait-and-see" decisions z^ω_n,t, y^ω_n,t, s^ω_t, n ∈ N, and t ∈ T, are made. these decisions are scenario-specific, and are indicated by the superscript ω ∈ Ω in the extensive formulation given in (3).
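as an illustration of how the extensive form of this model can be set up in practice, the following is a minimal sketch in python with gurobipy (the stack used for the paper's experiments, per section 4); it is not the authors' code, the data are made-up placeholders, and the big-M sharing restriction (2d) is omitted for brevity.

```python
# minimal sketch of the extensive-form allocation model; placeholder data, simplified
import gurobipy as gp
from gurobipy import GRB

states = ["NY", "CA"]                 # N (placeholder)
periods = range(1, 4)                 # T = {1, 2, 3}
scens = [0, 1]                        # Omega
p = {0: 0.5, 1: 0.5}                  # scenario probabilities p_w
d = {(n, t, w): 10 + 5 * t for n in states for t in periods for w in scens}  # demand d^w_{n,t}
y0 = {"NY": 8, "CA": 6}               # usable initial state inventory y_{n,0}
I0 = 20                               # central agency initial stock I
Q = {t: 5 for t in periods}           # production arriving at period t
theta = 0.01                          # small per-ventilator allocation cost

m = gp.Model("ventilator_allocation")
x = m.addVars(states, periods, vtype=GRB.INTEGER, name="x")          # agency -> state
z = m.addVars(states, periods, scens, vtype=GRB.INTEGER, name="z")   # state -> agency
y = m.addVars(states, periods, scens, vtype=GRB.INTEGER, name="y")   # state inventory
s = m.addVars(periods, scens, vtype=GRB.INTEGER, name="s")           # agency inventory
e = m.addVars(states, periods, scens, vtype=GRB.INTEGER, name="e")   # shortage (d - y)^+

for w in scens:
    for t in periods:
        for n in states:
            y_prev = y0[n] if t == 1 else y[n, t - 1, w]
            # region inventory balance, cf. constraint (2b)
            m.addConstr(y_prev + x[n, t] - z[n, t, w] == y[n, t, w])
            # linearized shortage, e >= d - y
            m.addConstr(e[n, t, w] >= d[n, t, w] - y[n, t, w])
        s_prev = I0 if t == 1 else s[t - 1, w]
        # central agency inventory balance, cf. constraint (2c)
        m.addConstr(s_prev + Q[t] + z.sum("*", t, w) - x.sum("*", t) == s[t, w])

# expected total shortage plus a small allocation cost, cf. objective (2a)
m.setObjective(
    gp.quicksum(p[w] * e[n, t, w] for n in states for t in periods for w in scens)
    + theta * x.sum(),
    GRB.MINIMIZE,
)
m.optimize()
```

here x is a first-stage ("here-and-now") decision shared by all scenarios, while z, y, s, and e carry a scenario index, mirroring the "wait-and-see" structure described above.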
constraints (2b) and (2c) ensure the conservation of ventilators for the regions and the central agency at each time period, respectively. constraint (2d) enforces that at each time period, a region is not sending out any ventilator to the central agency if its in-hand inventory is lower than its safety stock, where the safety stock is determined as ρ_n d̃_n,t, for t ∈ T and n ∈ N. constraint (2e) ensures that at each time period, the total number of outgoing ventilators from the central agency to the regions cannot be larger than the available inventory, after incorporating the newly produced ventilators and the incoming ones from other regions. constraints (2f) and (2g) set the initial inventory at the regions and central agency, respectively. the remaining constraints ensure the nonnegativity of decision variables. note that the objective function (2a) and constraints (2d) are not linear. by introducing an additional variable e_n,t, the term (d̃_n,t − y_n,t)^+ in the objective function, for n ∈ N, t ∈ T, and ω ∈ Ω, can be linearized as e_n,t ≥ d̃_n,t − y_n,t, e_n,t ≥ 0. furthermore, for each region n ∈ N and time period t ∈ T, constraint (2d) can be linearized by introducing a binary variable g_n,t as z_n,t ≤ M g_n,t and z_n,t ≤ y_n,t − (1 − τ_n) y_n,0 − ρ_n d̃_n,t + M(1 − g_n,t), where M is a big number. by incorporating the finiteness of the support of d̃, a linearized reformulation of model (1) can be written as a mixed-binary linear program in the extensive form (3), where d^ω_n,t denotes the number of patients at the critical level in region n ∈ N that need a ventilator at the beginning of time period t ∈ T under scenario ω ∈ Ω. note that all second-stage variables z^ω_n,t, y^ω_n,t, and s^ω_t, n ∈ N, and t ∈ T, in model (3) have the superscript ω to indicate their dependence on scenario ω ∈ Ω. it is worth noting that (1) (and (3) as well) considers multiperiod decisions. in the model, the central agency will make decisions for the entire planning horizon using the information that is available at the beginning of planning. for our numerical experiments in section 4, we used a commercial mixed-integer programming solver to obtain the results. furthermore, we used I + γ_n y_n,0 + ∑_{t′≤t} Q_t′ as a big-M for n ∈ N and t ∈ T. in this section, we assume that there is a lead time of l time periods between sending a ventilator by an entity (a region or the central agency) and delivery by another entity. with this assumption, (1) can be generalized to model (4), with constraints (5). note that model (4) is obtained by revisiting constraints (2b), (2c), and (2e) to incorporate lead time in the planning. constraints (5b) and (5c) require the conservation of ventilators for the regions at each time period, where a ventilator sent by the federal agency to a region at time period t − l, t > l, will become available for the region at time period t. constraints (5d) and (5e) ensure the conservation of ventilators for the central agency, respectively, where a ventilator sent by a region to the federal agency at time period t − l, t > l, will become available for the central agency at time period t. constraints (5g) and (5h) enforce that the total number of outgoing ventilators from the central agency to the regions cannot be larger than the available inventory, after incorporating the newly produced ventilators and the incoming ones from other regions. similar to (2b) and (2c), constraint (2e) is also divided into sets for t ≤ l and t > l in (5g) and (5h).
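the two modeling devices just discussed — the big-M linearization of the sharing condition and the lead-time shift — can be sketched as follows for a single state and day; variable and parameter names follow the notation above, the numbers are illustrative only, and this is not claimed to be the exact constraint set of models (3) or (6).

```python
# sketch of the big-M linearization of the sharing condition and of the lead-time shift
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("devices")
rho_n, d_nt = 1.5, 100       # illustrative risk-aversion and demand for one state/day
tau_n, y_n0 = 0.1, 500       # illustrative sharing willingness and initial inventory
bigM = 10_000                # any valid upper bound on the quantities involved

y_nt = m.addVar(vtype=GRB.INTEGER, name="y_nt")   # state inventory
z_nt = m.addVar(vtype=GRB.INTEGER, name="z_nt")   # ventilators sent back to the agency
g_nt = m.addVar(vtype=GRB.BINARY,  name="g_nt")   # 1 if the state sends anything back

# if g = 0 the state sends nothing; if g = 1 it may send only the excess above
# the retained initial inventory and the safety stock rho_n * d_nt
m.addConstr(z_nt <= bigM * g_nt)
m.addConstr(z_nt <= y_nt - (1 - tau_n) * y_n0 - rho_n * d_nt + bigM * (1 - g_nt))

# the on/off part of this logic could also be stated with a gurobi indicator constraint
m.addGenConstrIndicator(g_nt, True, y_nt >= rho_n * d_nt)

# lead-time shift: with a lead time of L days, a shipment decided in period t enters a
# region's balance only in period t + L, i.e. the balance uses x[n, t - L] instead of x[n, t]
```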
similar to (3), model (4) can be written as a mixed-binary linear program in the extensive form (6) by incorporating the finiteness of the support of d̃. for t > l, this extensive form includes, among others, the region balance constraints y^ω_n,t−1 + x_n,t−l − z^ω_n,t = y^ω_n,t, the linearized sharing constraints z^ω_n,t ≤ y^ω_n,t − (1 − τ_n) y_n,0 − ρ_n d^ω_n,t + M(1 − g^ω_n,t) and z^ω_n,t ≤ M g^ω_n,t, and the shortage constraints e^ω_n,t ≥ d^ω_n,t − y^ω_n,t, for all ω ∈ Ω, n ∈ N, and t ∈ T. the ventilator allocation model (3), described in section 3, was implemented in python 3.7. all computations were performed using gurobi 9.0.1, on a linux ubuntu environment on two machines. in the first machine, we used 14 cores, with 3.4 ghz processor and 128 gb of ram, and set the time limit to 2 hours. in the second machine, we used 64 cores, with 2.2 ghz processor and 128 gb of ram, and set the time limit to 3 hours. since projected ventilator need is a key input for the model, it is important to use accurate estimates of the demand forecasts. the forecasts of ventilator needs generated by ihme (2020) were used in our computational study. these forecasts were first made available on march 26, 2020, and used the most recent epidemiological data and advanced modeling techniques. the available information closely tracks the real-time data (ihme covid-19 projections, 2020). we considered a 70-day planning period, starting from march 23, 2020 and ending on may 31, 2020. we generated the random demands in ways that correspond to projected future demands under different mitigation effects. more precisely, we considered six different cases to generate random samples for the number of ventilators needed to care for covid-19 patients. these cases are listed below: case i. average-i: each of the demand scenarios has equal probability and the distribution is uniform over the range of the ci provided in ihme (2020). we further discuss the demand generation procedure. a demand scenario contains the demand data for all days and states. in all cases i-vi, we assumed that the forecast ci provided in ihme (2020), for each day and for each state, represents the support of the demand distribution. cases i and ii are generated to develop average demand scenario representations that use the information provided in the ci given in ihme (2020) in two different ways. in case i, it is assumed that the mean is the median of the demand distribution (ie, the right- and left-tail of the demand distribution have 0.5 probability). we randomly generated a number to indicate which tail to sample from, where both tails have the same 0.5 probability of being chosen. once the tail is determined, we divided the tail into 50 equally distanced partitions, and chose a random partition to uniformly sample from. we repeated this process for all days and states. in order to capture the spatiotemporal correlation between demand realizations, we sampled from the same tail and partition for all days and states, although the range from which we sample depends on the ci. in this case, all scenarios are equally likely. in case ii, we randomly generated a number to indicate which tail to sample from, where the top 25% of the ci (ie, the right tail) has a 0.25 probability and the bottom 75% (ie, the left tail) has a 0.75 probability of being chosen. if the right tail is chosen, we set the weight of the scenario to 0.25, and we set it to 0.75 otherwise. the rest of the procedure is similar to case i.
in order to determine the probability of scenarios, we normalized the weights. demand scenarios in cases iii-v are generated in the same fashion as in case ii, where the only difference is in the probability of which tail to choose from, which is determined by the sampling scheme described in the definition of the case. cases i and ii are intended to parameterize the model to capture the average case. these two cases were considered because our data analysis showed that the confidence intervals on the forecasts provided by ihme covid-19 projections (2020) were not symmetric. we attempted a log-transformation of the confidence intervals, but found that the log-transforms also provided asymmetric confidence intervals. hence, it was considered more appropriate to generate the demand scenarios using two different sampling schemes. for cases i-vi, we generated 24 scenarios, while in case vi, there is only one scenario which happens at the upper limit of ci. note that in each case, different quantities for the random demandd n,t , t ∈  , n ∈  , and ∈ ω, might be generated. an illustration of the trajectory of demand scenarios over time is given in figures 1-3 for the us and the states of new york and california. the y-axis in these figures represents the demand realization in each sampled scenario. another key input to the planning model is the initial ventilator inventory. as of march 23, 2020, before the rapid rise of covid-19 cases in the state of new york, fema had about 20 000 ventilators in reserve, that is, i = 20 000. we used this for our model which suggests ventilator allocation decisions from march 23, 2020. estimates for the initial inventory of ventilators at different states were obtained from mapping us health system (2020). these estimates are based on a hospital survey (rubinson et al., 2010; united states resource, 2020) . the estimates for new ventilator production were obtained based on information provided at the us presidential briefings on march 27, 2020 (coronavirus outbreak, 2020 . these estimates suggest that the normal yearly ventilator production capacity is about 30 000 ventilators/year. however, under the u.s. defense production act, with the participation of additional companies, production of approximately 10 000 ventilators/month could be possible (coronavirus outbreak, 2020) . using this information, for the baseline case we assumed that the current daily ventilator production rate is q t = 80 ventilators/day; and it will be increased to q t = 320 ventilators/day starting on april 15, 2020. we refer to this case, as "baseline production," and analyzed in section 4.5.1. we also analyze the case that the ramp-up in production happens on april 1, 2020 or april 7, 2020 in section 4.5.2. recall that in the model, parameter is used to indicate the fraction of ventilators used to care for non-covid-19 patients. additionally, a parameter is used in the model to estimate a state's willingness to share the fraction of their initial covid-19-use ventilators. similarly, the parameter is used to control the state's risk-aversion to sending their idle ventilators to fema for use in a different state. we suppose that for all states n, n ∈  , we have n = , n = , and n = . in order to systematically study the ventilator allocations and shortfalls, we used the following parameters: ∈ {50%, 60%, 75%}, ∈ {1.25, 1.5, 3}, and ∈ {0%, 10%, 25%}. 
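the planning horizon, the baseline production ramp-up and the parameter sweep described above are easy to enumerate. in the sketch below the association of the three value sets with their roles (fraction of ventilators kept for non-covid care, risk-aversion multiplier, share of initial inventory offered) is inferred from the results discussion, since the symbols themselves do not survive in this text; the descriptive names are placeholders, not the paper's notation.

```python
# illustrative sketch of the baseline production schedule and the parameter grid.
from datetime import date, timedelta
from itertools import product

start, end, ramp_up = date(2020, 3, 23), date(2020, 5, 31), date(2020, 4, 15)

# q_t: 80 ventilators/day before the ramp-up date, 320/day from then on
horizon = [start + timedelta(days=k) for k in range((end - start).days + 1)]
q = {day: (80 if day < ramp_up else 320) for day in horizon}

# the three parameter settings swept in the experiments (roles inferred from the text)
fraction_non_covid = [0.50, 0.60, 0.75]   # share of ventilators kept for non-covid care
risk_aversion      = [1.25, 1.5, 3.0]     # safety-stock multiplier
initial_sharing    = [0.00, 0.10, 0.25]   # share of initial covid-use inventory offered

settings = list(product(fraction_non_covid, risk_aversion, initial_sharing))
print(len(horizon), "days,", len(settings), "parameter settings")  # 70 days, 27 settings
```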
in this section, we present and discuss the numerical results for the case that there is no lead-time, that is, l = 0 or there is a lead-time of 1 day. for most instances, we observed that even obtaining an integer feasible solution to (3) and (6) in the time limit was not possible. therefore, we replaced these models with their expected value problem, where the stochastic demand is substituted with the expected demand. then, we solved the resulting model. this heuristic yields an integer feasible solution to model (3) and (6) for all instances we tested in the time limit, and we report those results here. in section 4.5.1, we provide the results on ventilator shortage and inflow/outflow from/to fema for the case that there is no lead-time. we also analyze the effect of early ramp-up in production and lead-time on ventilators' shortage in sections 4.5.2 and 4.5.3, respectively. in this section, for each setting ( , , ), we solved the expected value problem of model (3) under cases i-vi. a summary of ventilators' shortage results is reported in tables 1-3. we briefly describe the columns in these tables. column "total" denotes the expected total shortage, and is calculated as where w n,t ≔ (d t,n − i t,n ) + , d t,n ≔ ∑ ∈ω p d n,t , i t,n ≔ y 0,n + fema t,n , and fema t,n ≔ min { ∑ t ′ ≤t x t,n , (d t,n − y 0,n ) + } . quantity "worst day" in column "worst day (t)" denotes the expected shortage in the worst day, and is calculated as where t denotes a day that the worst expected shortage happens, that is, t ∈ arg max t∈ ∑ n∈ w n,t . moreover, quantity "worst day-state" in column "worst day-state (t)" denotes the expected shortage in the worst day and state, and is calculated as where (t, n) ∈ arg max t∈ arg max n∈ w n,t . the results in tables 1-3 suggest that when up to 60% of a state's ventilator inventory is used for non-covid-19 patients, fema's current stockpile of 20 000 ventilators is nearly sufficient to meet the demand imposed by covid-19 patients in mild cases (ie, cases i-iii). the ventilator availability situation gets worse in the case where 75% (or greater %) of the available ventilators must be used for non-covid-19 patients and states' risk-aversion parameter to send the idle ventilators to fema to be used in a different state is 3. in this case, if states are willing to share up to 50% of their excess inventory with other states, then 12 700 number of ventilators will be required beyond fema's current stockpile to meet demand in cases i-iv. however, if no such sharing is considered, then the need for ventilators would increase to 14 200. this situation gets even worse for cases v and vi, where the inventory shortfall on the worst day (april 19, 2020) is between 17 200 and 30 600. this shortfall decreases moderately to 15 900-28 000 if states are willing to share part of their initial ventilator inventory. if parameter goes down to 1.25, the inventory shortfall on the worst day (april 19, 2020) is between 13 800 and 22 800. this shortfall decreases moderately to 12 800-21 300 if states are willing to share part of their initial ventilator inventory. we also analyzed the ventilators' reallocation to/from different states for the setting ( , , ) = (0.75, 3, 0), which is the most dramatic case we considered from the inventory and stockpile perspectives. we report a summary of results in table 4 under the two worst demand situations, cases v (severe) and vi (extreme). 
column "total inflow" in this table denotes the total incoming ventilators to a state n ∈  from fema, and is calculated as similarly, column "total outflow" denotes the expected total outgoing ventilators from a state n ∈  to fema in order to be shared with other states to be used to treat covid-19 patients, and is calculated as also, column "net flow" represents the difference between "total inflow" and "total outflow." the results in table 4 indicate that in cases vi (severe) and v (extreme), the state of new york requires between 11 100 and 17 500 additional ventilators for covid-19 patients during its peak demand. however, between 400 and 17 000 of these ventilators can be given to a different state after the peak demand in the state of new york has subsided. the insights about other states can also be obtained from this table. the effect of early ramp-up in production on ventilators' shortage in this section, we consider the cases that the ramp-up in production happens on april 1, 2020 or april 7, 2020, as opposed to the baseline production, where the ramp-up in production happens on april 15, 2020. a summary of ventilators' shortage is given in table 5 for the parameter setting ( , , ) = (0.75, 3.00, 0), under the two worst demand situations, cases v (severe) and vi (extreme). as it is evident from table 5 , early ramp-up in production could save up more than 80 000 and 100 000 lives in case v (severe) and case vi (extreme), respectively. in this section, we analyze ventilators' shortage for the case that there is a lead-time of 1 day. a summary of results under case vi is presented in table 6 . it can be seen from this table that, as expected, the inventory shortfall increases with an increase in the lead-time (approximately up to 500 on the worst day). a companion online planning tool is developed in order to view the outputs on the number of ventilators needed and the shortage that might happen under various conditions (covid-19 planning tool, 2020) . this website is available at https://covid-19.iems.northwestern.edu. the users can choose the demand scenario (cases i-vi) and choose different options for parameter , the fraction of ventilators used to care for non-covid patients, parameter , state's willingness to share the fraction of their initial covid-19-use ventilators, parameter , the state's risk-aversion to sending their idle ventilators to fema for use in a different state, and parameter l for lead-time. the results on the website are shown in interactive graphical and tabular formats. a snippet of this online planning tool is given in figure 4 . interested readers can refer to this online companion for more details and analysis beyond what is presented in this paper. the results on covid-19 planning tool (2020) will be updated as additional computations are conducted and new forecast confidence intervals become available. we have presented a model for procuring and sharing life-saving resources whose demand is stochastic. the demand arising from different entities (states) peaks at different times, and it is important to meet as much of this demand as possible to save lives. each participating state is risk averse to sharing their excess inventory at any given time, and this risk-aversion is captured by using a safety threshold parameter. specifically, the developed model is applicable to the current covid-19 pandemic, where many u.s. states are in dire need of mechanical ventilators to provide life-support to severely and critically ill patients. 
computations were performed using realistic ventilator need forecasts and availability under a wide combination of parameter settings. our findings suggest that the fraction of currently available ventilators that are to be used for non-covid-19 patients strongly impacts state and national ability to meet demand arising from covid-19 patients. when more than 40% of the existing inventory is available for covid-19 patients, the national stockpile is nearly sufficient to meet the demand in mild cases. however, if less than 25% of the existing inventory is available for covid-19 patients, the current national stockpile and the anticipated production may not be sufficient under extreme demand scenarios. as expected, the magnitude of this shortfall increases when one considers more and more extreme demand scenarios. overall, the model developed in this paper can be used as a planning tool/framework by state and federal agencies in acquiring and allocating ventilators to meet national demand. the results reported in this paper can also provide a guide to states in planning for their ventilator needs. we, however, emphasize that these results are based on certain modeling assumptions. this includes the process of demand forecast scenario generation, estimates of initial ventilator inventory, and future production quantities. each one of these, as well as other model parameters, can be changed in the model input to obtain more refined results. nevertheless, an important finding is that a state's willingness to share its idle inventory can help address overall shortfall. while this paper has focused on ventilator needs in the united states, such a model can also be adapted for use in international supply-chain coordination of equipment such as ventilators across countries. covid-19 is expected to have different peak dates and demand cycles in other countries, and one or two additional disease spread cycles are likely until an effective vaccine becomes available. in conclusion, we point out that the model developed in this paper has a one-time planning decision, that is, there are no "wait-and-see" decisions in the model over time. one can also formulate the ventilator allocation problem as a time-dynamic multistage stochastic program, where the decision maker can make recourse decisions as time evolves based on the information available so far on the stochastic demands and past decisions. we are currently working on such an extension. in addition to the model being a one-time planning decision model, the model and its output have some additional limitations. first, the model may have multiple optimal solutions. in a resource constrained environment, alternative solutions may allocate the same number of ventilators differently. the solutions reported in the tables are only one such solution. moreover, these solutions were obtained from solving the models approximately with a prespecified time limit. the solutions from the optimization model presented in this paper depend on the accuracy of ventilator need forecasts. these forecasts are being revised regularly, as additional data based on state-specific mitigation efforts is becoming available. second, the objective function in the model treats the shortfall in large and small states equally. state-specific consideration may allow further refinements to the model. 
specifically, instead of formulating the objective as an expected value minimization model, we can formulate the objective as a minmax objective of shortfall for each state-thus minimizing the maximum shortfall to any state. such a model is expected to yield a more equitable solution. third, the model in section 4.5.3 assumes a constant lead time. we can modify this model to allow for state-specific lead times. such a modification will allow one to systematically study the effect of state-specific lead time on the overall allocation efficiency. moreover, if shipment times are of concern and a secondary coordination to a stocking depot is required, the model can be adapted to allow for creation of transshipment depots (warehouses) that serve a cluster of states. in this case, the central agency will first ship the inventory to the warehouse, who will further distribute it to the states in need. in all of these aspects the modeling framework presented in this paper should be considered as a first step in the direction of developing planning models that allow for critical resource sharing over time. nevertheless, the overall conclusions based on the model remain valid: in a resource-constrained environment where the demand of different entities peaks at different time points, it is possible to achieve improved efficiency in resource utilization through supply-demand matching over time. risk aversion to sharing excess supply in anticipation of future demand reduces the efficiency resulting from such sharing. this work has been partially supported by the national science foundation through grant cmmi-1763035. the authors would like to thank ebru bish and nan liu for the constructive comments and suggestions. the authors thank ming hu for getting an expedited review of this paper, and making several suggestions that helped improve an earlier draft of this paper. specifically, the lead time model, and the sensitivity analysis for the production function was added in response to his suggestions. orcid sanjay mehrotra https://orcid.org/00000003-1106-1901 geographic prioritization of distributing pandemic influenza vaccines contact tracing to control infectious disease: when enough is enough. health care management science models for managing the impact of an epidemic introduction to stochastic programming principles of scarce medical resource allocation in natural disaster relief: a simulation approach coronavirus outbreak: trump invokes defense production act (dpa) covid-19 ventilator allocation and sharing planning tool a linear programming model for allocating hiv prevention funds with state agencies: a pilot study stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm where efficiency saves lives: a linear program for the optimal allocation of health care resources in developing countries using a queueing model to help plan bed allocation in a department of geriatric medicine stockpiling ventilators for influenza pandemics the continuing 2019-ncov epidemic threat of novel coronaviruses to global health -the latest 2019 novel coronavirus outbreak in wuhan ihme covid-19 health service utilization forecasting team murray cj. forecasting covid-19 impact on hospital bed-days icu-days ventilator days and deaths by us state in the next 4 months ihme covid-19 projections johns hopkins university covid-19 resource center containing 2019-ncov (wuhan) coronavirus. 
health care management science mapping us health system capacity needs to care for covid-19 patients march 31 white house briefing from coronavirus task force estimates of the demand for mechanical ventilation in the united states during an influenza pandemic ny governor andrew cuomo holds coronavirus briefing-nbc news facing covid-19 in italy -ethics logistics and therapeutics on the epidemic's front line mechanical ventilators in us acute care hospitals lectures on stochastic programming: modeling and theory state health systems strained as coronavirus outbreak spreads united states resource availability for covid-19 a simulation study of the winter bed crisis characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china chinese center for disease control and prevention a medical resource allocation model for serving emergency victims with deteriorating health conditions a conceptual framework for allocation of federally stockpiled ventilators during large-scale public health emergencies a model of supply-chain decisions for resource sharing with an application to ventilator allocation to combat covid-19 key: cord-008686-9ybxuy00 authors: everett, tom; douglas, jenny; may, shoshanna; horne, simon; marquis, peter; cunningham, richard; tang, julian w title: poor transmission of seasonal cold viruses in a british antarctic survey base date: 2019-03-14 journal: j infect doi: 10.1016/j.jinf.2019.03.007 sha: doc_id: 8686 cord_uid: 9ybxuy00 nan recently, it is reported in journal of infection that h5n6 1 and h7n9 2 subtype avian influenza virus may have an increased pathogenicity to humans.the h1n1 subtype influenza virus has emerged in china. not only the h1n1 (95%) subtype but also several h3n2 (5%) subtype influenza viruses have been detected in samples. according to chinese national influenza data ( http: //ivdc.chinacdc.cn/ ), the h1n1 subtype influenza virus was prevalent from the end of 2018 to the beginning of 2019. the h1n1 and h3n2 subtypes of influenza virus are resistant to adamantanes (amantadine and rimantadine), and a small number of h1n1 strains have been found to be less sensitive to na inhibitors (nais; oseltamivir, zanamivir, and peramivir). in this study, we briefly evaluated the evolution patterns of the h1n1 influenza virus and the h3n2 influenza virus. we collected non-repeat 674 h1n1 and h3n2 subtype influenza virus sequences isolated in china over the course of nearly 5 years from the global initiative on sharing avian influenza data (gisaid) database ( www.gisaid.org ) and national center for biotechnology information (ncbi) ( www.ncbi.nlm.nih.gov/genomes/flu ). the haplotype network map shows that the 2014 h1n1 strain has a node in common with the 2016/17 h3n2 strain ( fig. 1 ) , and h1n1 and h3n2 often co-infect the same patients. when multiple strains of influenza infect the same host, they may undergo recombination and reassortment of the gene fragment, which greatly changes the pathogenicity and epidemiological characteristics of the virus. currently, the h1n1 subtype influenza virus is widespread in the population, and the number of children with neurological symptoms increased significantly this year. 3 the influenza virus has shown some variation, and whether this variation occurs along h1n1 and h3n2 lines remains to be seen. when multiple viruses co-infection occurs, it becomes possible for the viruses to undergo genetic communication, which may change the direction of viral evolution and so deserves our attention. 
we calculated the average gene evolution rate (nucleotide substitution rate) of the h1n1 and h3n2 influenza viruses between different years from 2013 to 2019. the genetic evolution rate of the h1n1 influenza virus and the h3n2 influenza virus within the same year is 2.91e−5 to 4.03e−4 and 2.72e−5 to 1.05e−4, respectively ( table 1 ), with up-and-down fluctuations that may be related to the subtypes that were prevalent in a given year. additionally, we calculated the evolution rate of the h1n1 influenza virus from the end of 2018 to the beginning of 2019 (1.13e−3). a large increase in the evolution rate indicates a rapid change in the virus in the short term, triggering changes in the replication, resistance, and transmission of the virus. type a influenza viruses emerge periodically every year; influenza undergoes continuous evolution, and different lineages have appeared. 4 at the same time, under the selective pressure of vaccines and drugs, the antigenicity and antigenic sites of the virus change and resistance develops. 5 however, medical science has also improved. it is necessary to determine the frequency of gene exchange between different subtypes and to be alert to possible variation arising from genetic communication between subtypes. pu et al. 6 and shi et al. 7 reported that recombination between duck-derived h7nx viruses and h7n9 can produce a new h7n2 that can cause fatal disease in waterfowl. this recombination event may have occurred in 2013 or earlier, but the recombinant virus has a distinct evolutionary advantage given the use of a vaccine. exchange between h1n1 and h3n2 may allow them to give each other different viral characteristics; hence, ongoing surveillance of the h1n1 and h3n2 subtypes of influenza is warranted. the authors declare no conflict of interest. recent studies in this journal revealed that some h7n9 viruses reassorted with duck aivs and then attained the ability to efficiently infect ducks. 1,2 h7n9 aivs have been endemic in chickens since their emergence in china in february 2013. 3 after their emergence, h7n9 viruses have evolved substantially and have frequently reassorted, acquiring internal genes from other chicken h9n2 viruses and increasing the genetic diversity of h7n9 viruses. 4 this raises the concern of whether h7n9 can attain internal genes from other aivs. thus, we collected all available h7n9 sequences to detect potential novel reassortments of the h7n9 aivs, and found evidence that three human-isolated h7n9 isolates attained internal genes from duck and human aivs. all available sequences of h7n9 aivs were downloaded from the ncbi ( https://www.ncbi.nlm.nih.gov ), gisaid ( https://www.gisaid.org ) and fludb ( https://www.fludb.org ) public databases. then, phylogenetic trees for the ha, mp, np, ns, pa, pb1, pb2 and na genes were reconstructed using raxml v.8.0.24 with the gtrgamma model and 1000 bootstrap replicates. phylogenetic analyses revealed that five genes (np, ns, pa, pb1, and pb2) of a/fujian/33845/2017(h7n9), the mp gene of a/gd-66/2014/h7n9/2014-01-29, and two genes (pb1 and pb2) of a/zhejiang/9/2014(h7n9) did not cluster with chicken h7n9 aivs ( fig. 1 and supplementary fig. 1 ). further, blastn ( https://blast.ncbi.nlm.nih.gov/blast.cgi ) was used to search for sequences homologous to these three abnormal strains ( 5 while domestic ducks act as an interface between the natural gene pool and terrestrial poultry in the influenza virus ecosystem.
the 2013 h7n9 viruses cannot replicate efficiently in ducks in the first four waves. however, studies have indicates that the highly pathogenic h7n9 virus has extended its host range by acquiring genes from duck influenza viruses and has now adapted to ducks. 1, 2, 6 the reassortments between 2013 h7n9 and duck aivs would further raise the diversity and spread of the h7n9. 7 in addition to our finding of the reassortments between duck aivs and the human-isolated h7n9 viruses suggest that surveillance and control of duck aivs is critical for the control of h7n9 viruses, which was almost ignored previously. it's surprising that the pb1 and pb2 genes of a/zhejiang/9/2014 (h7n9) showed the most closed relationship with human h3n2 viruses (89% and 88% identity respectively, table 1 ). reassortments between human and avian aivs can make the reassortant viruses replicate efficiently in mammalian hosts. 8 it's very possible that the reassortment between h7n9 and human h3n2, may make the reassortant virus more adaptive to human, even attain the ability to efficiently transmit between humans, and thus raise great thread to the public health. historically, several pandemic human influenza viruses were derived from reassortant virus between avian and human influenza viruses. for example, the h1n1/pdm2009 virus, which was a swine reassortant virus that attained the pb1 from human h3n2, was rapidly transmitted between humans and then, globally circulates as a seasonal virus, posing a substantial risk to human. 9 h7n9 aivs have the genetic makeup associated with human infections, 10 in addition to our finding that the reassortant human-isolated h7n9 virus attained the pb1 and pb2 from human h3n2, possibility like the h1n1/pdm2009 virus, the reassortant virus would become more invasive to humans, and thus pose serious pandemic threat to humans. h7n9 is a novel reassortant aiv subtype, which has surpassed h5n1 in laboratory-confirmed human infections despite its limited dissemination outside of china. thus, whether this subtype could acquire the ability to efficiently human-to-human transmission, and become a new influenza pandemic raise great attention. although a h5/h7 vaccination in chicken has successfully decreased the prevalence of the h7n9 viruses in chicken, our findings that h7n9 viruses attained internal genes from human and duck aivs raise concerns about the potential ability of the viruses to increase their diversity and spread, and especially the possibility to develop better ability to infect human, and eventually attain efficient human-human infections. we note with interest these previous studies into household and hospital influenza outbreaks. 1,2 such community and hospitalbased respiratory virus transmission and outbreak investigations often suffer from the potential confounding arising from possible exposures to undiagnosed index cases outside of the outbreak cohort, leading to an overestimate of virus transmissibility, and potentially unnecessary costly and restrictive infection control interventions. to avoid such confounding, we performed a small pilot study in a closed population of adult research scientists ( n = 43 out of a possible 48). all participants signed informed consent forms, following ethical approval from plymouth university ethics committee. these scientists were confined to a british antarctic survey base for 1 month (march 2017), during which no personnel entered nor left the base. 
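the identity figures quoted above come from blastn searches and the evolution rates in the preceding letter from model-based phylogenetic analyses; the toy sketch below only illustrates, under strong simplifying assumptions, how a pairwise nucleotide identity and a crude per-site, per-year divergence could be computed from two already-aligned sequences. the sequences and sampling years are invented, and this simple divergence/time ratio is not a substitute for the proper rate estimates cited in the text.

```python
# toy sketch: pairwise identity and a crude substitution rate for aligned sequences.
def identity_and_rate(seq_a, seq_b, year_a, year_b):
    """both sequences are assumed to be aligned and of equal length."""
    assert len(seq_a) == len(seq_b)
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    identity = matches / len(seq_a)
    subs_per_site = 1.0 - identity
    years = abs(year_a - year_b) or 1          # avoid dividing by zero for same-year pairs
    return identity, subs_per_site / years      # (fraction identical, subs/site/year)

a = "ATGGCAAGCTTAGGCATT"   # invented example sequences
b = "ATGGCGAGCTTAGGTATT"
print(identity_and_rate(a, b, 2014, 2017))
```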
therefore any detectable human respiratory viruses could only have been brought into the base by personnel at the beginning of this 'closed period'. participants were given anonymous codes to maintain confidentiality. each agreed to give nasal swabs (collected in virus transport medium, virocult, medical wire and equipment ltd, corsham, wiltshire, england) upon entry (day 0, 14/3/17), then at days 4 (18/3/17), 10 (24/3/17) and 17 (31/3/17) post-entry. all viral swabs were stored at −80 °c until they could be shipped back to the uk and tested at the leicester royal infirmary. this was performed using a respiratory multiplex pcr assay (16-well, ausdiagnostics uk ltd., chesham, uk) that could detect any of: influenza a, b, respiratory syncytial virus (rsv), parainfluenza (piv) types 1-4, human metapneumo- (hmpv), entero-/rhino-, corona- (229e, oc43, nl63, hku1) and adeno-viruses. no specific instructions about infection control were given to the participants. they were left to behave as they normally would throughout the period of the study. any participant who developed any of nine self-assessed influenza-like symptoms (fever, cough, stuffy nose and/or sinuses, headache, sore throat, myalgia, fatigue, shortness of breath, nausea or vomiting) would complete a tick-box questionnaire (on a scale of 1-'very mild' to 5-'very severe') to describe the relative severity of their symptoms. this same questionnaire also requested the contact intensity (i.e. number, nature and frequency) of their daily contacts with other participants as a self-assessed, linearly graded score (from 1-'sharing just one meal together' to 5-'spending the majority of the day and evening with the other person'), depending on the frequency of contact whilst working, eating meals and socialising together. the daily location of all personnel in any of the four station zones at 0830, 1100, 1400, 1630 and 2000 h was also recorded routinely for safety and security, using a 'tagboard' system. out of the 43 participants who consented, 3 later declined to have any viral swabs taken, and of the resulting 160 (i.e. 40 × 4 swabbing time-points) possible swabs, 153 were successfully collected and stored for testing. the incubation period of human coronaviruses is around 2-5 days, 3 which can be used to link symptomatic cases together, epidemiologically, 4 with viral shedding being reported for up to 6 days post-symptom onset. 5 so the symptoms and positive nl63 and oc43 results for participants 21 and 74, respectively, could have been acquired from participants 40 and 47, who may have been the original index cases (sources) for these viruses. although no respiratory virus was detected in their samples, participants 40, 47 and 107 all reported similar symptoms to those of 21 and 74 during the study period, which were typical common cold symptoms. note that the participants' self-reported contact intensities were not entirely robust, e.g. both participants 40 and 47 list participant 21 as a contact, so either could have been an index case for him/her. however, 21 did not list either 40 or 47 as a contact (such contacts should be reciprocal). this may have just been a simple oversight, but it makes the contact link less reliable. similarly, participant 74 could have served as the index case for participant 107, but neither lists the other as a contact.
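the reciprocity check described above is easy to automate. the sketch below is illustrative only: the contact lists encode just the one-way reports mentioned in the text (40 and 47 each listing 21, and 21 listing neither), not the full study data.

```python
# small sketch of the reciprocity check: self-reported contacts should be mutual,
# so one-way reports are flagged for review.
reported_contacts = {
    21:  [],        # participant 21 listed neither 40 nor 47
    40:  [21],      # 40 reported contact with 21
    47:  [21],      # 47 reported contact with 21
    74:  [],
    107: [],
}

def non_reciprocal(contacts):
    flagged = []
    for person, listed in contacts.items():
        for other in listed:
            if person not in contacts.get(other, []):
                flagged.append((person, other))
    return flagged

print(non_reciprocal(reported_contacts))   # [(40, 21), (47, 21)]
```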
regardless of the contact intensities reported in the questionnaires, there were no secondary cases of either nl63 or oc43 coronaviruses detected in any of the other study participants' weekly swabs. one possible explanation for this may have been an insufficient sensitivity of the assay to detect low levels of these respiratory viruses. the limit of detection (lod) for the ausdiagnostics assay varies significantly with each virus (as given in the kit insert, all in copies/ml): influenza a (1900-2375), b (525), rsv (50-2125), piv types 1-4 (50-2500), hmpv (100-625), entero-/rhino- (75-1025), corona- (229e, oc43, nl63, hku1) (1350-4175) and adeno- (1075) viruses. however, in the acute infection stage respiratory viruses are generally present in relatively high copy numbers, with median values of mostly 4-8 log10 (i.e. 10,000-100,000,000 copies/ml) for adeno-, corona-, hmpv, influenza, piv and rsv, as reported in one comprehensive paediatric study. 6 although children generally shed higher viral loads than adults, it is likely that the coronavirus loads in acutely infected adults would still be mostly detectable on this assay, which is approved (i.e. ce-marked) for routine diagnostic testing. yet it is still possible for viruses infecting individuals at the lowest loads within these ranges to escape detection by this assay. given the results that are currently available from this study, one of the key questions is: from where did the nl63 and oc43 coronaviruses arise? in addition, the lack of any secondary nl63 or oc43 coronavirus cases (symptomatic or asymptomatic) arising from the known positive sources (participants 21 and 74) suggests that the transmissibility of these common cold viruses may be limited. this seems unexpected, given the potential stress on the body's immune system whilst living and working in such an extreme environment. however, such relatively poor transmission of respiratory viruses has been previously described in antarctic base personnel for rhinovirus and adenovirus, 7, 8 for reasons that are still unclear. another potential confounding factor is the unknown status of the 5 individuals who were also present at the base (mostly base personnel) but who declined to participate in the study. it is possible that one or more of these non-participants could have been the original sources (i.e. index cases) of the nl63 and oc43 coronaviruses at the start of the study. whilst there are some limitations to this study, there are plans to repeat it on a larger scale, over a longer 'closed' period, with the use of real-time, point-of-care testing (poct) to detect such respiratory viruses. although previous respiratory virus outbreaks have been described in remote research bases, 7,8 these were unable to utilise the greater sensitivity and spectrum of respiratory viral targets provided by modern, molecular, diagnostic tools. 9, 10 thus, some positive cases in these earlier studies may have been missed, leading to an underestimate of the transmissibility of these respiratory viruses in these populations. respiratory infections in research personnel can impact significantly on their productivity, an important consideration when their time at such remote research bases is limited. this and future studies will enable medical teams to enhance the healthcare of research base personnel to optimise their precious research time spent there. none of the authors have any conflicts of interest to declare.
we thank the following for their support of this study: uk clinical virology network (cvn), for general funding support; medical wire & equipment ltd., for donating some of the sampling swabs; ausdiagnostics uk ltd., for donating the respiratory multiplex pcr tests. none of these companies were involved in the writing of this article. we read with interest the article by poller et al. 1 in this journal, entitled "a unified personal protective equipment….". we understand the importance of proper personal protective equipment (ppe) as an integral component of healthcare worker (hcw) protection in outbreak situations involving infections of possible high consequence. but at times such an outbreak occurs in an unsuspected region, when initial cases present in the early course of illness before the development of ominous clinical features. medical staff, particularly in busy rural set-ups of resource-poor developing countries, may discover that they have been exposed to a high-consequence infectious disease only after the event of exposure, particularly if these centres are unaware, reluctant or unequipped regarding routine use of ppe. a similar situation occurred in a rural healthcare setting of kerala, india, during the may-2018 outbreak of nipah virus (niv), which killed 21 of the 23 reported cases. 2 a 26-year-old male ( index case of the outbreak report 2 ) from kerala's perambra town died undiagnosed with fever, encephalitis and respiratory distress in government medical college kozhikode (gmck), after being transferred from taluk hospital, perambra (thp). another 47-year-old male patient ( case-10 ) 2 was admitted to thp for an acute febrile illness and recovered while the unsuspected index case was being treated there in the adjacent bed. two weeks later, case-10 presented to taluk hospital, balussery (thb) (the setting of our intervention), with complaints of fever, headache and vomiting of 4 days' duration. he was treated there as an inpatient for about 24 h, after which he was referred to gmck owing to clinical deterioration. the next day he developed altered sensorium and respiratory distress, and expired. until then, there was no suspicion about the niv outbreak situation, as kerala is at least 2500 km away from the last known outbreak in the indian subcontinent in 2012 3, 4 . in the meanwhile, the brother, father and aunt of the index case, and a nurse who cared for him at thp, developed similar clinical features of acute encephalitis with respiratory distress and were admitted. all of their samples, along with that of case-10, tested positive for niv at the reference laboratory. the state public health authorities swiftly declared the outbreak and ensured containment and protective measures. however, by this time, 17 out of 18 confirmed niv cases were already infected and fell ill, being epidemiologically related as contacts of the index case in the family, during transit to healthcare facilities or in hospital 2 . here, we report our experience with eight hcw, including two doctors (authors aps and mb of this correspondence) and six nurses working in thb, who had unsuspected, inadvertent, yet significant exposure to case-10 when he was admitted there, without any ppe. both doctors had closely clinically examined case-10, and the six nursing staff had repeated bare-hand, unmasked contacts with him ( table 1 ). all of them were extremely panicked once the outbreak notification was out and beseeched aps and mb for an immediate solution.
aps and mb contacted the other authors for advice regarding any possible post-exposure prophylaxis (pep). the possibility of human-to-human (airborne/contact) transmission of niv, 3, 4 the in-vitro 5 and in-vivo 6 effects of ribavirin on niv, the evidence of safety and efficacy of short-course high-dose ribavirin pep (rpep) used for lassa fever, 7 and the unavailability of, and inexperience with, any other alternatives (favipiravir or monoclonal antibody m102.4) were discussed by vkmn, sb, ms, nw and ab, and a consensus opinion of rpep as the only available and reasonably safe option was placed before aps and mb. the importance of psychological factors 8 in the hcws was also considered seriously. the suggested dose was 1000 mg thrice daily for 14 days (cumulative 42,000 mg), in line with the lassa fever recommendation. 7 all the contacts started rpep within 72 h of exposure. the mean cumulative dose of rpep taken by the contacts was 28,750 mg (17,600-40,000 mg) and the mean duration was 12.5 days (11-14 days, table 1). their clinical and laboratory parameters were monitored for the next 6 months. the mean age of the 8 hcws was 35.4 years (30-43 years). two were male and the rest female; none were pregnant ( table 1 ) . most of them experienced minor side effects such as fatigue, headache, nausea, dry mouth and palpitations. there was a mean drop of 2.82 g/dl in haemoglobin, predominantly between days 17 and 21 after starting rpep, which started rising in all within a week of stopping rpep. bilirubin levels rose by a mean of 1.65 (range: 0.0-3.6) mg/dl in 7 of the 8 hcw ( fig. 1 ) . none of them ultimately contracted niv disease. interestingly, one 25-year-old male patient ( case-23 of the outbreak report 2 ) was admitted with dysentery to an adjacent bed in thb while case-10 was an inpatient there. he was present within a distance of 1 metre for more than 5 h. there was apparently no direct contact between them, except that the same hcw cared for both of them, sharing non-critical medical devices. after recovery, case-23 was discharged from thb, only to return a week later to gmck with high-grade fever; he developed encephalopathy and respiratory distress, was diagnosed with niv infection and succumbed to it. in the current outbreak, 3 family members, one staff nurse, one trainee nurse, one radiology assistant who took care of the index case and 13 hospital contacts contracted the infection, proving human-to-human transmission. respiratory aerosols and fomites cause human-to-human spread. 3 the mortality rate in all recent niv outbreaks in the indian subcontinent has been 75-100% with the bangladeshi strain 3 (niv-b) and its close relative in the recent kerala outbreak, 2 consistently almost double that of ebola. furthermore, the fact that case-23 was infected by case-10 on the same premises of thb makes our case for rpep stronger. the 2 survivors out of 23 infected patients in this outbreak also received ribavirin. 9 given the recent findings of widespread presence of niv among pteropus bats in india, 10 another outbreak might be just a matter of time. our field notes from emergency, voluntary, off-label rpep among hcw provide evidence, albeit of low quality, of its safety and probable efficacy, strongly suggesting that a pre-planned trial of pep should be started immediately once such an explosive outbreak of niv is notified. none. we note the previous report describing a decreasing incidence of eosinophilia in returning travellers by barrett and colleagues.
1 in contrast, another type of hazard reported by returning travellers -monkey bites -appears to be increasing. this makes it necessary for our frontline medical staff to be aware of the potential risks from rabies and simian herpes b virus (shbv or cercopithecine herpesvirus 1 -cehv-1) associated with this type of exposure. 2 although infections are rare as a consequence of bites, 3,4 both viruses can result in very high (80-100%) mortality if the appropriate post-exposure prophylaxis is not initiated promptly. in view of these serious consequences, post-exposure protocols have been developed to reduce likelihood of infection. 2 , 5-7 while this is agreed for rabies, 5, 8 post-exposure prophylaxis is not uniformly recommended for shbv, as cases have only been reported with captive monkeys, 3,9 despite numerous monkey bite exposures in regions where animals are thought to be infected. assessing risks versus benefits obviously needs to be done judiciously in each case, and national public health specialists can be consulted to support decision making. 8, 9 the main risk is primarily from monkey bites from macaque monkeys (genus macaca ), which are now encountered relatively frequently in various tourist areas in southeast asia (e.g. philippines, indonesia, malaysia, cambodia, vietnam, thailand). after a monkey bite, the patient should perform immediate wound cleansing: irrigation with soap and water, or other skincleansing detergent, or sterile water alone, for at least 15 min. later, when the patient presents to the emergency department (ed), all medical staff need to be aware of both the rabies and shbv post-exposure protocols (peps) associated with such bites. whilst most ed teams will likely know of the rabies pep protocol, 5 fewer will be aware of the guidelines for shbv pep. 6 along with the wound cleansing and post-exposure rabies immunoglobulin (rig) and vaccination, any risk of shbv requires that high dose acyclovir (preferably valaciclovir 1 g tds po; or acyclovir 800 mg 5 times daily po, for adults) pep for at least 14 days should be considered. immediate pcr and later serological testing for signs of shbv infection are possible. however, recommendations for such testing are somewhat variable, with some advising testing in symptomatic cases only, whilst others will test all potentially exposed cases, regardless of symptoms. 6, 7 symptoms of possible shbv disease include vesicular lesions, pain and itching near the bite site, local lymphadenopathy, flulike illness (fever, headache, myalgia, fatigue), and any focal or progressive neurological symptoms, including dyspnoea. outcomes are generally fatal (80% mortality without any treatment), once there is central nervous system involvement. 2, 7 however, with antiviral prophylaxis and treatment, such fatal outcomes are rarer. bacterial infections (e.g. staphylococcus and streptococcus spp.) can also arise from the bite itself, especially in children, for which systemic antibiotics can be given, 10 and tetanus vaccination. to highlight this issue, we present three cases of returning travellers with monkey bites. case 1: a 24-year old male was admitted with headache, lethargy and myalgia following a trip to indonesia (monkey forest, ubud, bali), where he sustained a penetrating bite to his right shoulder from a macaque monkey. there were no immediate post-bite complications. however, 16 days later, he developed paresthesia and neuropathic pain in his right thigh. 
after seeing a local physician, oral aciclovir 800 mg 5 times daily for 14 days was prescribed, as prophylaxis for possible shbv. one day later he developed a vesicular rash on his right thigh, which subsequently resolved on the antiviral therapy. on return to the uk, given this history, he was admitted and started rabies post-exposure immunisation, without rabies immunoglobulin (rig). he was then extensively investigated for possible shbv infection. he had five days of intravenous aciclovir and a further nine days of oral valaciclovir, whilst awaiting investigation results. diagnostic testing performed at the department of viroscience, erasmus medical centre (m/c), rotterdam, the netherlands on cerebrospinal fluid (csf), blood, lesion swab and saliva by shbv pcr showed no evidence of infection at that time. at outpatient review, two months later, there had been no further history of any rash or neurological symptoms, though the patient did mention several recurring episodes of genital herpes, for which he was given a 10-day course of oral valaciclovir 500 mg bd po. by this time, the risk of latent shbv was considered negligible and he was discharged from clinic. case 2: a 28-year old male was seen in clinic who gave a history of receiving an unprovoked, penetrating bite on his right upper arm from a vervet monkey ( chlorocebus pygerythrus , previously classified as cercopithecus aethiops ), one week earlier whilst on holiday in barbados. after immediate wound care, he was seen in a local clinic and received tetanus vaccine and oral antibiotics. no aciclovir shbv pep was commenced at this time. the bite wounds healed without complication. after returning to the uk a week later, an outpatient review revealed that he was still asymptomatic for any clinical features of shbv, rabies or other travel-associated illnesses. however, as a precaution, valaciclovir 1 g tds po, for 21 days was prescribed as shbv prophylaxis, due to the possibility of an incubating shbv infection. 3 blood and saliva samples were sent to viroscience for pcr testing to check for any residual shbv, dengue, chikungunya or zika virus infections. all tests were negative. the patient continued to remain asymptomatic, so was eventually discharged from outpatient follow-up. case 3: a 31-year old female was seen in clinic upon return from southeast asia, with a history of receiving a penetrating bite to her right upper arm from a macaque monkey, whilst visiting monkey island, vietnam, 18 days previously. she received her first dose of rabies vaccination (without rig) at a local clinic within four hours of the bite. a week later, whilst still in vietnam, she received a second dose of rabies vaccine from a different clinic, which also started her on acyclovir post-exposure prophylaxis for shbv. the bite wounds healed without sequelae. at her 18-day clinic review once back in the uk, she was still asymptomatic. baseline saliva and blood samples were taken and stored but not tested for shbv. she continued both the shbv and rabies pep whilst continuing her travels a week later, and remained asymptomatic six weeks post-exposure. this small case series demonstrates a diversity of presentations and follow-up management for these patients, notably: case 1 likely presented with genital herpes; case 2 sustained a bite from a vervet, not a macque, monkey; case 3 did not have any shbv testing. to our knowledge, all of these cases remain well. 
as there are no consensus guidelines available for managing such monkey bites, we suggest a precautionary approach: to be safe, assume that both rabies and shbv are potential risks, regardless of monkey species. therefore, in the event of a monkey bite, where, after discussion with the patient (based on their individual clinical assessment), a decision is made to give prophylaxis: (i) immediately cleanse the wound for 15 mins with clean water +/− soap or detergent. consider appropriate antibiotic therapy to prevent skin infection (e.g. co-amoxiclav or amoxicillin), and tetanus vaccination. (ii) seek competent clinical help and obtain the first dose (day 0) of rabies vaccine +/− rig, depending on the risk assessment (in the uk, public health england tel: 0208 327 6204, 9-5 pm mon-fri). 8 complete post-exposure rabies vaccination with further doses on days 3, 7 and 21 post-exposure. 5 (iii) start acyclovir (1 g valacyclovir tds po or 800 mg acyclovir 5 times daily po, depending on local availability, for adults; adjust the dose as appropriate for children) as soon as possible after the bite, for at least 14 days, as post-exposure prophylaxis against shbv. (iv) if symptoms compatible with shbv develop within the next 2-4 weeks (e.g. vesicular lesions, pain, itching around the bite, local lymphadenopathy, flu-like illness, focal or progressive neurological symptoms), continue the acyclovir and seek further expert advice. (v) baseline samples (serum, saliva, wound swabs) can be taken and stored for comparison. if acute illness develops, repeat samples (including cerebrospinal fluid, csf) should be taken for diagnostic testing (by pcr) to check for shbv dna, and serology for shbv antibodies, if such testing is available. a recent article in this journal reported that dengue patients had a higher risk of autoimmune diseases than non-dengue patients 1 . dengue has been a serious public health problem in the world, and the number of dengue epidemics has been on the rise worldwide. before 1970, only nine countries had experienced severe dengue epidemics; however, currently more than 100 countries have been severely affected by dengue 2 . after the first dengue-fever epidemic in china, which occurred in may 1978 in foshan, guangdong province, there have been regional outbreaks of dengue every year and the number of cases has increased. guangdong province is the area most seriously affected by dengue in china. the number of cases in guangdong accounts for more than 90% of the total number of cases in china 3,4 . however, the epidemic characteristics of dengue fever in guangdong province have not been reported since 2010 5 . a dengue epidemic is closely related to various factors, such as environmental conditions, imported cases, and migration; thus, its epidemic characteristics and patterns change rapidly. especially after the outbreak of dengue in guangdong province in 2014, the epidemic patterns have changed greatly. therefore, exploring the changing patterns of dengue outbreaks and epidemics in guangdong is of great significance to the prevention and control of dengue. this study aimed to investigate the changing patterns of the epidemic characteristics of dengue in guangdong province and to propose prevention and control measures. we collected dengue cases from 2008 to 2017 from the web-based disease reporting information system of the chinese national center for disease control and prevention. a total of 52,792 cases were reported for this period.
population data were obtained from the statistics bureau of guangdong province ( http://www.gdstats.gov.cn/ ). a chi-square test was used to assess spatial differences in case counts among cities. this study was approved by the review board of the center for disease control and prevention of the pla. in terms of temporal distribution ( fig. 1 ) , dengue cases showed an increasing trend from 2008 to 2014 in guangdong province, and a decreasing trend from 2014 to 2017 (93 cases in 2008, 45,170 cases in 2014, and 1662 cases in 2017). the dengue outbreak in 2014, with 45,170 cases and 6 deaths, was the largest dengue outbreak in china in the past 20 years. before the outbreak in 2014, an average of 617 people were infected with dengue annually. following the outbreak, an average of 1305 people were infected with dengue annually, nearly twice the number of cases per year compared to that before 2014. dengue fever showed characteristic patterns in its population distribution. adults aged 20-65 years accounted for 76.76% of patients, and children aged 0-10 years accounted for only 4.16% of patients. in terms of distribution across occupations, housekeepers and the unemployed (23.1%), retired individuals (12.82%), those involved in commercial services (11.52%), industrial workers (9.56%), and students (7.49%) formed the majority of patients with dengue fever. these five groups accounted for 64.48% of the total number of cases. the number of male patients was comparable to that of female patients (male:female = 1:1.01), and the mortality rate of dengue was low at only 0.11‰. monitoring data of the past 10 years showed that although the number of cases increased greatly after the outbreak in 2014, the population distribution did not change significantly. the key population for the prevention and control of dengue fever was adults over the age of 20 years. although cases have been reported in various regions, there are significant differences ( p < 0.001) in the distribution of dengue among the different cities in guangdong province ( fig. 2 ) . incident cases were mainly concentrated in guangzhou city and foshan city. guangzhou city had a total of 40,170 cases during 2008-2017, which accounted for 76.09% of cases among the different cities. foshan city had a total of 4660 cases, which accounted for 8.83% of cases. however, according to annual data trends, 2014 was the turning point in spatial distribution patterns for dengue fever. from 2008 to 2013, the incidence of dengue was concentrated in foshan city, guangzhou city, jiangmen city, and zhongshan city (93.14% of cases). however, by 2014, there were case reports from 20 cities in the province, all except meizhou city. among them, there were 13 cities with more than 200 cases each. after 2014, dengue fever was prevalent throughout the province. cases appeared even in zhanjiang city, zhaoqing city and jieyang city, which are distant from guangzhou city, and the number of cases in remote areas increased significantly. this paper is the first to analyze the characteristics of the change in dengue epidemics in guangdong province in the past 10 years. in general, the incidence of dengue fever has typical spatial differences. the cases were mainly concentrated in urban areas with large populations and developed economies, such as guangzhou city and foshan city. the influx of many migrants, especially those from southeast asia, could have caused the higher incidence of dengue fever in these areas.
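a minimal sketch of the kind of chi-square comparison described above is given below, assuming scipy is available. only the guangzhou and foshan totals (and the remainder implied by the 52,792 overall total) come from the text; the population shares are invented placeholders, so the printed p-value is purely illustrative.

```python
# illustrative chi-square test: observed dengue case counts per city versus the
# counts expected from each city's (hypothetical) population share.
import numpy as np
from scipy.stats import chisquare

cases      = np.array([40170, 4660, 7962])   # guangzhou, foshan, all other cities combined
population = np.array([14.0, 7.9, 89.0])     # millions (illustrative placeholders)

expected = cases.sum() * population / population.sum()
stat, p = chisquare(f_obs=cases, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.3g}")     # a tiny p-value indicates an uneven spatial distribution
```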
studies have shown that dengue epidemics in china were initiated by imported cases and then became prevalent in the local areas 6 . however, attention should be given to the large number of dengue epidemics in remote areas, which appeared after 2014. a possible reason for this could be global climate warming, which has caused the natural environments in some remote areas to become suitable for mosquito breeding. moreover, rapid development in tourism and trade has also led to an increase in local and imported cases. in summary, after 2014, the epidemic characteristics of dengue fever in guangdong province have undergone major changes. the key areas for prevention and control are no longer confined to traditional epidemic areas, such as guangzhou city and foshan city. instead, dengue prevention and control should be conducted throughout the province. in addition, according to the national dengue surveillance in 2018, dengue fever has shown a trend of "moving up north" in the country 7 . therefore, the changing characteristics of this epidemic warrant high attention of relevant departments. the authors declare that they have no competing interests. this work was supported by national key r&d program of china ( 2016yfc120 070 0 ), beijing nova program ( z17110 0 0 01117102 ) and military medical innovation project ( 16cxz046 ) and pla youth training project for medical science ( 16qnp127 ). the funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. pathogenicity and transmissibility of three avian influenza a (h 5 n 6 ) viruses isolated from wild birds evolving ha and pb 2 genes of influenza a (h 7 n 9 ) viruses in the fifth wave -increasing threat to both birds and humans? genetic and antigenic characterization of a(h 1 n 1 )pdm09 in yantai, china, during the 2009-2017 influenza season neurologic manifestations of influenza a(h 3 n 2 ) infection in children during the distinct molecular evolution of influenza h 3 n 2 strains in the 2016/17 season and its implications for vaccine effectiveness rapid evolving h 7 n 9 avian influenza a viruses pose new challenge rapid evolution of h 7 n 9 highly pathogenic viruses that emerged in china in 2017 hemagglutinin characteristics, changes in pathogenicity, and antigenic variation of highly pathogenic h7n9 avian influenza viruses in china two novel reassortant high pathogenic h7n9 viruses isolated in southern china in fifth wave shows internal genomic diversity and high virulence in chickens and ducks human infection with a novel avian-origin influenza a (h7n9) virus evolutionary dynamics of avian influenza a h7n9 virus across five waves in mainland china evolution and ecology of influenza a viruses rapid evolution of h7n9 highly pathogenic viruses that emerged in china in 2017 deciphering the sharp decrease in h7n9 human infections h5n1 hybrid viruses bearing 2009/h1n1 virus genes transmit in guinea pigs by respiratory droplet origins and evolutionary genomics of the 2009 swine-origin h1n1 influenza a epidemic potential pandemic of h7n9 avian influenza a virus in human case-ascertained study of household transmission of seasonal influenza -south africa haemagglutinin and neuraminidase sequencing delineate nosocomial influenza outbreaks with accuracy equivalent to whole genome sequencing incubation periods of acute respiratory viral infections: a systematic review identifying the probable timing and setting of respiratory virus infections human coronaviruses and other respiratory 
increased risk of autoimmune diseases in dengue patients: a population-based cohort study
developing a time series predictive model for dengue in zhongshan, china based on weather and guangzhou dengue surveillance data
the changing epidemiology of dengue in china, 1990-2014: a descriptive analysis of 25 years of nationwide surveillance data
spatial and temporal patterns of dengue in guangdong province of china
clinical and epidemiological features of the 2014 large-scale dengue outbreak in guangzhou city

xinying du, center for disease control and prevention of chinese people's liberation army, 20 dongdajie street, beijing 100071, china. hongbin song (corresponding author), center for disease control and prevention of chinese people's liberation army, 20 dongdajie street, beijing 100071, china.
supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.jinf.2019.03.008 .
key: cord-249166-0w0t631x authors: booss-bavnbek, bernhelm; krickeberg, klaus title: dynamics and control of covid-19: comments by two mathematicians date: 2020-08-17 journal: nan doi: nan sha: doc_id: 249166 cord_uid: 0w0t631x we are asking: why are the dynamics and control of covid-19 most interesting for mathematicians and why are mathematicians urgently needed for controlling the pandemic? first we present our comments in a bottom-up approach, i.e., following the events from their beginning as they evolved through time. they happened differently in different countries, and the main objective of the first part is to compare these evolutions in a few selected countries with each other. the second part of the article is not "country-oriented" but "problem-oriented". from a given problem we go top-down to its solutions and their applications in concrete situations. we have organized this part by the mathematical methods that play a role in their solution. we give an overview of the main branches of mathematics that play a role and sketch the most frequent applications, emphasising mathematical pattern analysis in laboratory work and statistical-mathematical models in judging the quality of tests; demographic methods in the collection of data; different ways to model the evolution of the pandemic mathematically; and clinical epidemiology in attempts to develop a vaccine. since the covid-19 pandemic is not over it may appear to be premature to draw conclusions. however, it may equally well be just the right time to recapitulate some lessons we as mathematicians should have learned and are urged to apply now. thus we are asking: why are the dynamics and control of covid-19 most interesting for mathematicians and why are mathematicians urgently needed for controlling the pandemic? we shall first present our comments in a bottom-up approach, i.e., following the events from their beginning as they evolved through time. they happened differently in different countries, and the main objective of this first part is to compare these evolutions in a few selected countries with each other. still, there are some general features, which we present separately, as we are accustomed to doing in mathematics. they include the history of certain epidemics which have influenced the reactions of people in many countries, and some basic mathematical tools. in addition there is a common factor, which one of the present authors (kk) defined on the 12th march 2020 in an e-mail to a german health office: "the extension and evolution of covid-19 in various countries and regions reflects the state of their health systems. this was for instance already very obvious in the case of ebola." it is in fact the public health component of the health system that plays a crucial role. the second part of the article is not "country-oriented" but "problem-oriented". from a given problem we go "top-down" to its solutions and their applications in concrete situations. we have organized this part by the mathematical methods that play a role in their solution. here is an example where especially much mathematics is needed: to develop a vaccine and the strategy for applying it without losing sight of basic ethical principles. in the following the gentle reader may, when necessary, consult the book [kpp] for the basic concepts of epidemiology. demography as a mathematical subject area was already developed centuries ago well beyond its elementary beginnings.
for a long time it remained the only mathematical tool in the study of the evolution of infectious diseases. here is a famous early example. in china, india and europe one tried to confer immunity against smallpox by infecting individuals slightly so they would contract a mild form of the disease and be immune afterwards. some of them died as a result of this procedure, but in 1766 the swiss mathematician daniel bernoulli showed by a demographical approach that the procedure would increase life expectancy if applied to everybody [di1] . nowadays evaluating the cost-effectiveness of a public health measure is being done widely; it is based on methods of mathematical economy. the 19th century saw the discovery of microorganisms as pathogens of many diseases and their study by mainly microbiological methods. the mathematical tools for following up an epidemic remained essentially demographical well into the 20th century. a few physicians suggested that every epidemic ends because there are finally not enough people left to be infected, which is a naïve predecessor to the mathematical-epidemiologic concept of herd immunity (see sect. 8). nevertheless even the abundant literature on the influenza pandemic of 1918-19, wrongly called spanish flu, discusses only two possible ways for its ending: better clinical treatment and mutations of the pathogen. seen from a virological viewpoint the spanish flu was an extreme form of the so-called seasonal influenza. the virus which causes it can be one of a large variety, its genus being denoted by a, b, c or d, where some of these genera include several species. a is the most serious one; it has subtypes a(hxny), x = 1,...,18 and y = 1,...,11, where x and y represent proteins on the surface of the virus. the strategy for controlling the "normal" seasonal influenza epidemic is widely known even among laymen: identify the strain of the virus in autumn, develop a vaccine as fast as possible, and vaccinate people thought to be at risk. nevertheless the number of infections and of deaths by a seasonal influenza can be as high as those by some of the pandemics to be described now. the spanish flu was due to a(h1n1). pictures from that time show people wearing masks that resembled those used now. in the years 1957-58 another "digression" from seasonal influenza occurred, called the asian flu and caused by a(h2n2). it started in china and then became a pandemic, passing from neighbouring states through the uk and the usa. estimates of the number of cases vary around 500 million and of the number of deaths around 3 million. its beginnings looked much like those of the spanish flu but towards the end a vaccine became available, a predecessor to the ones now used routinely against the seasonal flu. the hong kong influenza of 1968-69, generated by the virus a(h3n2), had similar characteristics and will not be described further. parallel to the arrival of these and other epidemics, and partly motivated by them, basically new mathematical tools of public health emerged in the first part of the 20th century, preceded by a few studies in the late 19th. they were twofold. the first tool was called a "statistical-mathematical model". its aim is the study of the influence of factors, also called determinants, on the health of people. such factors may for instance be a lack of hygiene or a polluted environment.
a factor can also be a preventive or curative treatment by an immunization or a drug, respectively; in that case the main objective of a study is to estimate the efficacy of the treatment. sampling plans are statistical-mathematical models of a different but related kind. they form the basis of sample surveys, which are being done in profusion about covid-19, too, and are not always very illuminating. the second tool is called "mathematical modelling of the evolution of an epidemic", or briefly "mathematical modelling". there are two kinds of it. first, one may aim at the epidemic curve, which is the cumulated number of cases up to a moment t as a function of t. in that case mathematical modelling serves to estimate or predict this curve under various assumptions on the infectivity of infected subjects. early predecessors are presented in [fin] , see figure 1 ; the question whether the infectivity remains constant or decreases already played a role. refined versions are still being used, in particular for covid-19 (sect. 7).

fig. 1: early numerical simulation of an epidemic curve by j. brownlee, 1907, discussed in [fin] .

second, one may build so-called compartmental models (sect. 8). the first one, for measles, was published in 1889 by p.d. en'ko; see [di2] . around the year 1900 compartmental models for malaria appeared. then in the 1920s new models for the evolution of measles in closed populations were defined and intensively studied. they became very influential because they already displayed many basic features that reappeared later in mathematical models of epidemics in other and more complex settings. such tools found many applications. dealing with large epidemics mathematically was no longer a matter of demography alone, although that continued to be the main tool for estimating numbers of cases and deaths. statistical-mathematical models were employed to estimate the efficacy of antiviral drugs, for instance against hiv-infections, and the efficacy of various immunizations including those against forms of influenza. mathematical modelling of epidemics was used in planning strategies to eradicate smallpox, poliomyelitis, measles and perhaps others. the first articles on modelling influenza epidemics appeared in the scientific literature. planning a vaccination strategy involves both statistical-mathematical and mathematical models [hal] . these roads to progress may have produced a general feeling of success in dealing with epidemics. then in the period from 2002 to 2019 a few events occurred that evoked memories of previous pandemics and undermined such beliefs. finally another zoonotic influenza appeared, popularly called bird flu and in scientific language highly pathogenic avian influenza (hpai). the main pathogen was an a(h5n1) influenza virus. it had been known long ago but reached a peak in the years 2013-2017. whether there existed airborne transmission from poultry to humans was a hotly debated question with obvious economic consequences. the bird flu spread widely over the whole world but the number of known human cases remained small, just over 70. in addition to various forms of influenza and the epidemics generated by the corona viruses sars-cov-1, sars-cov-2 or mers-cov, other epidemics occurred. it is instructive to compare them with those just mentioned, applying in addition mathematical yardsticks. we shall restrict ourselves to ebola epidemics.
their most widespread outbreak was the western african ebola virus epidemic from 2013 to 2016, which caused 28,646 cases and 11,323 deaths. there is a fundamental difference between the evolution of a case of influenza or sars-cov-1 or sars-cov-2 on the one hand and of an ebola case on the other, which leads to a basic difference in their mathematical modelling (sects. 4 and 8). a carrier of an influenza or corona virus can transmit it to other persons well before the first symptoms appear, that is well before the end of the incubation period. a subject infected by ebola will become infective only around the end of the incubation period. he (if it is a man) could then be immediately isolated together with his latest contacts in order to avoid further extension of the infection, provided that there is a health service nearby to do it. therefore ebola did not spread to countries that have a sufficiently dense primary health care network, but it caused much suffering in countries that do not have one. who's strategy to control the epidemic was wrong: it insisted on drugs and the search for a vaccine (which became available only in december 2019) but neglected primary health care. for the present purpose it would even have been most useful to rapidly train village health workers and "barefoot doctors" as had been done decades ago. only very few countries profited from the experiences of these premonitory 18 years to prepare much in advance for a possible, and probable, new outbreak of an epidemic. some others took appropriate measures only at the first signs of covid-19, and many started planning when the epidemic had almost reached its zenith. we shall sketch some examples. for simplicity we shall always describe the result of the strategy of a country by indicating its cumulated numbers of confirmed cases and deaths around the 1st june 2020. regarding the reliability of these data see sects. 5 and 6. we begin with those that had planned early. in taiwan, the year after the sars epidemic outbreak, the government established the national health command center (nhcc), which was to prepare the country for a possible new epidemic. from 2017 on it was headed by the popular minister of health, chen shih-chung, who had studied dentistry at the taipei medical college. the vice-president of taiwan from 2016 to 2020, chen chien-jen, had been minister of health from 2003 to 2005 after having studied human genetics, public health, and epidemiology at the national taiwan university and the johns hopkins university in the usa, followed by research. thus decisions about the control of covid-19 were taken by politicians competent in matters of health including public health. taiwan counts 23 million inhabitants and many of them travelled from and to china. from the 31st december 2019 on, when who was notified of the epidemic in wuhan, all incoming flights from there were checked, followed by controls of passengers arriving from anywhere. an "action table" was produced in the period 20th january to 24th february 2020, which listed 124 measures to be taken. the public obtained daily revised, clear information by all existing means. "contact tracing", which means repeated follow-up of symptomatic persons, of confirmed cases and of all of their contacts, was rapidly established on the basis of the electronic health insurance card that everybody has. the virological pcr-tests used (sect. 4) were already available and quarantines well organised.
in late january rules about the wearing of masks were issued; a sufficient supply existed already. as a result 442 confirmed cases had been found and 7 deaths recorded up to the 1st june. the vietnamese strategy resembles the taiwanese one in almost all aspects, with the exception of contact tracing. a steering committee to deal with new epidemics existed within the ministry of health. it put its plan into effect right after the 23rd january, when the first infected persons arrived at vietnamese airports, among them a vietnamese returning from the uk. all schools were closed on the 25th january, and since the 1st february everybody entering vietnam must spend two weeks in quarantine. other measures were imposed or relieved in accordance with the evolution of the epidemic, for instance a limited confinement or the wearing of masks. the ministry of health issued regular, precise and clear information for the entire population by all available means including smartphones. in addition there is a personalized information system based on so-called "survival guides" given to everybody. every survival guide defines three categories of persons: f0: a confirmed case; f1: suspected to be infected or having had contact with an infected person; f2: having had contact with a person in f1. each person is expected to find the category to which he or she belongs. the survival guide then provides printed information about what she or he must do as a function of her or his category, for example to submit to a test. only pcr-tests are being used. in contrast to taiwan, contact tracing does not use electronic tools. it is being done by the population itself, aided by the survival guides, together with a large number of well-trained members of the health services, for example university lecturers. at the end of 2019 vietnam had 98,257,747 inhabitants. on the 1st of june there had been 328 confirmed cases and 0 deaths. these data are based on a strong demographic section of the "general statistical office" and on several health information systems [kkr] and can hardly be contested. the preceding sketch of control measures in taiwan and vietnam has shown us the three main components of their epidemiologic side: contact tracing; lock-down, that is physical, or social, distancing in the wide sense including quarantine and border controls; and the wearing of masks. we may call this the "surveillance-containment strategy". in addition there is the medical-clinical side, from primary health care such as general practitioners up to large hospitals. its state is crucial to the number of deaths caused by the virus sars-cov-2. in contrast to taiwan and vietnam it seems that all other countries of the world were unprepared at the end of december 2019. a few of them took fairly systematic and strict measures that covered the entire population as soon as the first cases had declared themselves. for a quick overview see figure 2 . this was for example true for china at the end of january 2020, for slovakia and greece on the 27th and 28th february, for austria on the 10th march and for denmark on the 12th march. an alternative danish strategy, based on rigorous contact tracing and quarantine but not implemented until now, was argued for in [sia] . regarding the results, the turbulent evolution in china is well known. in denmark, with a population of 5.806 million, about 12,000 cases had been confirmed and 593 deaths recorded, and the corresponding figures for austria are 8.86 million people, 16,979 cases and 672 deaths.
the comparison of slovakia, a country of around 5.5 million inhabitants, with greece, which counts 10.72 million people, is particularly striking because it makes visible the role of their physicians and hospitals. in slovakia there were 1,528 confirmed cases and 28 deaths. the corresponding data for greece are 3,058 and 183. the relatively much higher number of fatalities in greece, in spite of an equally early reaction and almost the same number of cases per number of inhabitants, is no doubt due to the catastrophic state of its medical-clinical system, caused mainly by the debt crisis from 2010 on. next we pass to a group of countries that reacted late and not systematically, applying the various measures in a haphazard way and only to part of the population. here are some of them, with their numbers of inhabitants in millions, cumulated numbers of confirmed cases and numbers of fatalities. the relatively low number of deaths in germany reflects mainly a sufficient medical-clinical system that could readily adapt itself to the epidemic. the opposite was true in france. there, about 100,000 hospital beds had been eliminated in the period between 1993 and 2018. an arbitrary, strict "confinement" not determined by epidemiologic reasoning was imposed on the 17th march. finally there are countries that decided to do nothing, at least for a long while. their motivation, or pretext, was above all a belief in herd immunity (see sect. 8). this overview of strategies confirms that, as said in the introduction, the results depend indeed heavily on the state of public health. note that nowadays in every language of the world the concept "public health" is designated by a literal translation or a slight modification of this expression. for instance in danish it is "folkesundhed", that is, "health of the people". in this second part we shall sketch the scientific and in particular mathematical principles involved in the study of successive stages of the pandemic. in short:
sect. 4: discovery of the new virus, basic properties, testing for its presence in a person.
sects. 5 and 6: data on the evolution of covid-19 in a population.
sect. 7: attempts at analysing mathematically and predicting such an evolution by representing it by an epidemic curve.
sect. 8: the analogue for a representation by a compartmental model.
sect. 9: trying to stop the epidemic by a vaccine.
sect. 10: what to learn and what to do?
after the often-depicted outbreak in late december 2019 of cases of pneumonia of unknown aetiology around wuhan, in the course of january 2020 chinese scientists identified a new virus as the pathogen. they followed the usual procedures, i.e., they determined the load of 26 common respiratory pathogens in the patients. they found none of them in abundance. they suspected sars-cov but could not find it either. then they investigated all kinds of viral load that had a slight similarity (coincidence in a number of genomes) with sars-cov and detected a novel virus which displayed abundant virions in respiratory specimens from patients. electron microscopy and mathematical pattern analysis [mum, pev] showed that it belongs to the same genus as sars-cov-1 and mers-cov (sect. 2); hence the name sars-cov-2. starting with this work in china a large number of publications about the peculiar properties of the pathogen and the ways it is acting have appeared. on the virological side its genetic sequence was determined. the new virus is believed to have zoonotic origins but human-to-human infection was rapidly established.
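the "mathematical pattern analysis" mentioned above ultimately rests on comparing genetic sequences. purely to convey the flavour of such comparisons, and not the actual methods of [mum, pev], here is a toy python sketch that scores the similarity of two short invented sequences by the jaccard overlap of their k-mer sets; the sequences are made up and are not real viral genomes.

```python
# toy illustration of sequence comparison by k-mer overlap.
# the "genomes" below are invented strings, not real viral sequences.
def kmers(seq: str, k: int = 5) -> set:
    """return the set of all substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: str, b: str, k: int = 5) -> float:
    """jaccard similarity of the k-mer sets of two sequences (0 = no shared k-mers, 1 = identical sets)."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

seq_x = "atgctagctagcttacgatcgtagctagcatgcgt"    # made-up
seq_y = "atgctagctagcttacgatcgtagctagcatgcat"    # made-up, close to seq_x
seq_z = "ttttggggccccaaaattttggggccccaaaatttt"   # made-up, unrelated

print(jaccard(seq_x, seq_y), jaccard(seq_x, seq_z))
```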
the combination of sars and influenza features, that is intensive respiratory inhibition of patients and rapid transmission, makes covid-19, the disease caused by sars-cov-2, particularly dangerous. for further work see [and] . in the clinical context, several periods in the evolution of a case were determined (see their definition in [kpp, sect. 5.2]): the median incubation period is 5.2 days; the mean latency period is 4.6 days, i.e., in general the infectious period starts indeed before the prodromal phase. we have discussed the implications in sect. 2 by comparison with ebola. the mean length of the infectious period is 6 days for mild and asymptomatic cases; for severe and critical cases this period lasts on the average 22 days and ends only by recovery or death. the manifold applications of both these virological and these clinical characteristics to the control of the pandemic will appear in sects. 7, 8, 9 and 10. their study is still active and may even reverse former results; this happened recently, for example, with so-called cross-immunities. however, in this article we shall only treat applications to the basic element of well-designed control strategies, namely testing for infections. the first step of a test programme is to define the target population. who will be tested? subjects who had contact with infected people? or those who complain about symptoms? or everybody coming from a region where cases exist? see the example of vietnam in sect. 3. next, what will be the objective? to discover the presence of the virus or that of some kind of antibodies? depending on the objective there exist virological and serological tests. the usual virological test is called the pcr (polymerase chain reaction) test. dozens of serologic tests of varying quality have been and are still being developed and even offered in some countries to the general public. recall that the characterization of a test with a given target population and a given objective is a classical subject of clinical epidemiology [kpp, sect. 19.2] . coming back to the fundamental role of testing in control strategies we only remark that in poor countries, or in rich countries with inattentive public health officials, the target population was often determined by the shortage of test kits and by the influence of institutions that required them for themselves. counting cases and deaths is classical medical statistics, which gives for a specific disease the number of cases and deaths together with the when and where and a few additional data such as sex, age and sometimes profession of the subjects. in principle the methods for finding the number of confirmed cases and of fatalities by covid-19 are the same as for any other disease. in practice they fluctuate widely between countries. both the diagnosis of a case of a disease and the description of the cause of a death may be relatively correct or most unreliable. in particular finding a correct diagnosis for somebody who complains about acute health problems depends very much on the local contact tracing methods and on the state of the clinical-medical system. an additional difficulty arises from the existence of asymptomatic forms of the disease, that is, subjects infected by sars-cov-2 who display no symptoms. in sect. 3 we have mentioned vietnam, which uses its normal demographic and health information systems [kkr] . it includes in its statistics asymptomatic cases found by contact tracing. other countries obtain their morbidity and mortality data from a "health reporting system".
such a system is partly based on sampling methods from various sources, for example hospitals and local health offices. in germany the robert koch-institute, a central institute mainly devoted to infectious diseases, reports on the results for covid-19. in the usa the johns hopkins university plays a similar role. still other countries use data from health insurance offices. however, many countries have neither a health information system nor a health reporting system, or they do not use it for covid-19. a host of alternative methods is being employed. for example france counts only hospitalized confirmed cases and only deaths which happen in a hospital or in a retirement home that is connected with a medical structure. summing up we may say that the morbidity data, and to a lesser degree the mortality data, for covid-19 that one finds in various periodic publications are fairly unreliable, with very few exceptions. the sources are not always clearly indicated. an important alternative idea is to compare the present situation with that in years past. speaking again naïvely we assume that the present higher case frequencies and death tolls, and only these, are the result of covid-19. given the diagnostic difficulties mentioned above this idea is mainly applied to fatalities and hardly to nonlethal cases. thus in the method of "excess mortality" we only measure how many more deaths by any cause happened this year than in the corresponding period in the past. for the uk we have for instance quoted in sect. 3 the figure of 41,128 deaths up to the 1st june as supplied by the national health service. by contrast the national statistical office advanced about 62,000 deaths as excess mortality!

fig. 2: estimated number of infections on lock-down day and excess mortality for 16 selected countries. reproduced from [fit] with permission.

finally, here is an interesting idea based on the most classical form of a statistical-mathematical model. a graphic in the paper [fit] (see figure 2) shows for every one of 16 selected countries the point in the plane whose coordinates are, respectively, the estimated number of infections per million inhabitants on lockdown day and the excess mortality. a short glance convinces us that they are positively correlated. a simple regression analysis based on this graphic would allow us to estimate one of these values by the other one for any other country, too. health statistics nowadays goes in several directions beyond its classical form, all of them relevant to covid-19, too. firstly, sample surveys are conducted instead of using the data from the entire "target population". they have for example been used to study the influence of social factors on the evolution of various aspects of the disease. in particular the factor "to be an immigrant or to be descended from immigrants" was thoroughly investigated in some countries. secondly, more types of data about cases and deaths are collected, for example about morbidity and mortality by age groups. thirdly, data sets are not only being registered and perhaps published but also transformed and interpreted in various ways. here, standardization is the best-known procedure. a fictitious example would be the number of fatalities by covid-19 in denmark if denmark had the same age structure as vietnam and in each age group it had the same covid-19 mortality as in the same age group in vietnam.
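the standardization procedure just mentioned is easy to make concrete. the following minimal python sketch applies one country's age-specific covid-19 death rates to another country's age structure, which is the classical direct standardization; the age groups, rates and population counts are invented for the example and are not real danish or vietnamese data.

```python
# direct age standardization with invented numbers (three broad age groups).
# rates are deaths per person; populations are person counts. not real data.
age_groups = ["0-39", "40-69", "70+"]

death_rate_a = {"0-39": 0.00001, "40-69": 0.0004, "70+": 0.005}               # country a, per person
population_b = {"0-39": 50_000_000, "40-69": 35_000_000, "70+": 10_000_000}   # country b

# expected deaths in country b if it had country a's age-specific mortality
expected_deaths = sum(death_rate_a[g] * population_b[g] for g in age_groups)
print(f"age-standardized expected deaths: {expected_deaths:,.0f}")
```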
in sect. 9 we shall meet statistical-mathematical models as a basic mathematical tool in developing a preventive treatment of covid-19. with their help one studies in a clinical trial the influence of various factors on some outcome variable e of interest. here the idea of "controlling" for the influence of another factor, which might be a "confounder" in the study of the action of e, plays a role. it looks as if most demographers on the one hand, and most clinical epidemiologists on the other, ignore that the mathematical procedure of standardizing is the same as that of controlling for a confounder [kpp, lesson 21] . a mathematician will not be astonished, though! we have mentioned this classical concept, the epidemic curve, in sect. 1; see [kpp, sect. 4.6] . let c be an epidemic, v a geographical region, t0 a moment of time which may be that of the first case of c in v, and f(t) for t ≥ t0 the number of observed and reported cases of c that had declared themselves in v before or at the instant t. then f is called the epidemic curve of c in v. in particular it needs to be said whether unconfirmed cases are included or not. measuring f(t) as the time t goes along is the task of the relevant demographic services (sects. 5 and 6). this process is therefore subject to all the deficiencies listed there. to get some knowledge about f for various regions v is of course one of the main concerns of the population of a country invaded by c. such knowledge is equally vital for health authorities who attempt to control c. however, much more knowledge is desirable. what can we learn about the mechanism of c by observing f(t)? this was already the subject of the papers described in [fin] ; see sect. 1. in particular, is there a way to predict aspects of the future evolution of f, having observed the values f(t) for a while? answers to these questions are generally given by modelling f, that is by making certain assumptions about its shape and by estimating certain parameters in it. a very large number of papers has been published about this issue. some of them use extrapolation methods known from mathematical economy. a recent survey on basic ideas and techniques can be found in [krm] where a model is described in terms of an integro-differential equation. we shall restrict ourselves to a discussion of one application, namely the so-called basic reproduction number r0. it appears constantly in popular publications. to define it let us look at a subject s that is infected at a time t* ≥ t0. let µ(s, t*) be the number of all subjects infected by s after t* in the form of secondary, tertiary etc. infections. then r0 is the average of µ(s, t*) over all s. thus it depends on t*. it is precisely this dependence in which people are interested: a value less than 1 is looked upon as a predictor of the extinction of c after t*. in the case c = covid-19, values as high as 5.7 had been estimated in the beginning, that is, for t* close to the time of the first outbreak of c. the article [sia] presents an interesting factorisation of r0 in order to compare different approaches to control its size.
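as a minimal illustration of "modelling f by assuming a shape and estimating its parameters", the following python sketch fits a logistic curve to a short invented series of cumulated case counts. the case numbers and the chosen functional form are assumptions made purely for the example; this is not a recommended forecasting method.

```python
# fit a logistic curve f(t) = K / (1 + exp(-r*(t - t0))) to a cumulated case series.
# the case counts below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 30)   # days since the first case
cases = np.array([1, 2, 3, 5, 8, 13, 20, 31, 48, 72,
                  105, 150, 208, 280, 362, 450, 537, 618, 688, 745,
                  789, 822, 846, 863, 875, 883, 889, 893, 896, 898])

params, _ = curve_fit(logistic, t, cases, p0=[1000, 0.3, 15])
K, r, t0 = params
print(f"estimated final size K = {K:.0f}, growth rate r = {r:.2f}, midpoint t0 = {t0:.1f}")
```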
we have sketched the historical origin of compartmental models in sect. 1. we distinguished there between two ways of mathematically modelling the evolution of an epidemic. models of the first kind (sect. 7) represent the temporal evolution of the number of subjects in a certain state, for instance the state "to be infected". by contrast, compartmental models also represent changes of this state at some moments in the form of transitions of a subject from one compartment to another one. the sir-model, which we designated in sect. 1 as "intensively studied in the 1920s", is particularly simple and has served as a paragon for many others, in particular for those applied to covid-19. it involves three compartments: s are the susceptible, not yet infected subjects, i the infected ones, and r consists of subjects removed by recovery with immunity or death. the transitions between compartments are described by differential equations for the numbers s(t), i(t) and r(t) of subjects in the compartments as a function of time t. they involve certain parameters such as transition probabilities from one compartment to another one. under various assumptions the resulting system of differential equations for s, i and r can be solved explicitly or numerically. a first important application is to estimate the basic reproduction number r0 defined in sect. 7: it can be expressed by the basic parameters. secondly, it turns out that the limit s∞ of s(t) for t → ∞ is strictly positive, which means that a certain part of the population will never be infected. this led to the concept of herd immunity, which, however, gave rise to much confusion among people who thought they had something to say about the matter. after the outbreak of covid-19 many more involved compartmental models were defined and analysed. their parameters represented among other features the underlying control strategy to be used. there was for instance the "do nothing" strategy and also the "mitigation" strategy, which consisted of the less stringent components of the "surveillance-containment strategy" defined in sect. 3. in the much-discussed paper [fer] neil ferguson and collaborators described the shape of the function i, that is the number of infected subjects, for the "do nothing" strategy. from the value 0 on it increases, reaches a maximum, decreases and finally reaches 0 at a certain moment thappy. this had apparently motivated the countries uk, usa, sweden and brazil to adopt this strategy for too long, ignoring that ferguson predicted (see figure 3) about 500,000 deaths caused by the epidemic in the uk and 2.2 million in the usa before extinction at the moment thappy.

fig. 3: expected deaths caused by the epidemic for the do-nothing strategy. reproduced from [fer] with permission of the school of public health, imperial college london.

at present compartmental models play hardly any practical role, mainly because they contain too many unknown parameters. some parameters such as infectivity are estimated with the help of a model of the epidemic curve, which seems to be a not very successful detour.
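before turning to vaccines, here is a minimal numerical sketch of the sir model just described, written in one common parametrization; the transmission and removal rates below are invented round numbers for illustration, not estimates for covid-19.

```python
# numerical solution of the sir model in one common parametrization:
#   ds/dt = -lam*s*i/N,  di/dt = lam*s*i/N - beta*i,  dr/dt = beta*i
# parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000          # population size
lam, beta = 0.4, 0.1   # transmission and removal rate per day (invented)
r0 = lam / beta        # basic reproduction number expressed by the basic parameters

def sir(t, y):
    s, i, r = y
    new_infections = lam * s * i / N
    return [-new_infections, new_infections - beta * i, beta * i]

sol = solve_ivp(sir, t_span=(0, 300), y0=[N - 1, 1, 0])
s_inf = sol.y[0, -1]   # susceptibles remaining at the end: strictly positive
print(f"r0 = {r0:.1f}, never-infected fraction = {s_inf / N:.2%}")
```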
it will hardly surprise that several pharmacological companies have started a race to develop curative and preventive treatments of the various ailments which sars-cov-2 may inflict on a person. up to now no curative treatment has been found. there are only the well-known methods to be used in the treatment of non-specific aspects of a case, such as reducing pain, facilitating breathing or shortening the time to recovery by an antiviral drug. we shall therefore restrict ourselves to preventive treatments, that is, to immunizations. the objective of an immunization by a vaccine against a covid-19-connected health deficiency needs to be defined in the same way as for any other infectious disease. first the target population needs to be determined: whom do we intend to protect? next, what are the health deficiencies we want to prevent? for how long is the preventive effect to last? this is a particularly important aspect of the vaccine but is usually suppressed when a new one is announced. for instance the measles vaccination remains active for life in most subjects. for covid-19 the company which tries to develop the vaccine may be satisfied with a few months, hoping that sars-cov-2 will have disappeared after that. finally the efficacy needs to be found, which represents the part of the target population actually protected. it may also be defined in epidemiologic terms by regarding as "exposed" all subjects that have not obtained the treatment. then the efficacy is the "aetiological fraction among the exposed subjects". nowadays there is general agreement that the process of developing a vaccine against an infectious disease needs to run along a well-defined common line [kpp, lesson 18, and hal] . this ought to hold for covid-19, too, and we shall therefore recall it here. first, one or several substances are selected which, for whatever reasons, usually virological ones, look like possible candidates for a vaccine. each of them needs to be submitted to a "clinical trial" in order to explore its most important properties. such a clinical trial consists of three "phases" i, ii and iii. phase i deals with various, mainly pharmacologic, aspects such as side effects for various possible dosages. statistical-mathematical models are the essential tools of phases ii and iii. phase ii aims at providing a first idea of the efficacy of the selected vaccine. thus a relatively small target population is built artificially. here two basic problems arise. the first is the definition of the outcome variable of interest. often only the "immunogenicity" is being studied, which means the formation of antibodies, but not protection against the disease. it is a particularly complex and manifold problem in the case of covid-19. secondly, the target population needs to include among the vaccinated subjects a sufficient number of people who would contract the disease when not vaccinated. since covid-19 morbidity in the entire population of a country is small, such a group must be constructed by "challenge", that is, by infecting its members artificially. they are usually volunteers and their risk of dying is small except in the age groups where the lethality of the disease is high, that is, in the case of covid-19, for old people. faced with this ethical problem the usa used, for various previous infectious diseases, prison inmates whose terms were shortened as a reward. there was a time when vietnam, while developing a certain vaccine, sent its samples for the phase ii trial to the usa to be tested in this way because vietnamese ethical standards forbade all kinds of challenge. there are usually several phase ii trials in order to select the potential vaccine to be finally studied in a phase iii trial. this is a field trial in the sense that a sample of subjects is drawn from the entire population of interest, for instance from among all inhabitants of a country within a certain age group. the outcome variable is not immunogenicity but protection against the disease in the sense of the desired efficacy. the size of the sample is determined beforehand by the precision of the intended estimate of the efficacy. as noted above the decision about the duration of the trial is a crucial element. if high efficacy during the first two weeks after vaccination is considered sufficient, the trial may be stopped after two weeks; this philosophy underlies the vaccinations against the seasonal influenza. if we are interested in its efficacy during the first ten years after vaccination, it must last ten years. this has, in addition to other problems, caused the long delay in developing an ebola vaccine (end of sect. 2). we hope that it will not be glossed over by those who are trying to sell a covid-19 vaccine very soon.
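the efficacy just discussed, understood as the aetiological fraction among the unvaccinated ("exposed") subjects, reduces in the simplest two-arm trial to one minus the ratio of attack rates. the python sketch below computes it from invented trial counts, not data from any actual covid-19 trial, together with a rough confidence interval.

```python
# vaccine efficacy from a two-arm trial: ve = 1 - (attack rate vaccinated / attack rate unvaccinated).
# all counts are invented for illustration.
import math

cases_vacc, n_vacc = 8, 15_000     # cases and subjects in the vaccinated arm
cases_ctrl, n_ctrl = 80, 15_000    # cases and subjects in the unvaccinated arm

rr = (cases_vacc / n_vacc) / (cases_ctrl / n_ctrl)   # relative risk
ve = 1.0 - rr

# rough 95% confidence interval via the usual log relative-risk approximation
se_log_rr = math.sqrt(1/cases_vacc - 1/n_vacc + 1/cases_ctrl - 1/n_ctrl)
lower = 1.0 - math.exp(math.log(rr) + 1.96 * se_log_rr)
upper = 1.0 - math.exp(math.log(rr) - 1.96 * se_log_rr)
print(f"estimated efficacy {ve:.0%} (95% ci {lower:.0%} to {upper:.0%})")
```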
the pandemic has functioned like a magnifying glass. in some places, it showed a basically well-functioning society. in other places it revealed scandals and intolerable social inequalities. in particular it reflected the state of a country's public health system. the present article aimed at describing the role of mathematics in the pandemic. as said above there are two parts to this "outlook". let us take up the first one, namely: what can be learnt from the epidemic? in sect. 1 we gave an overview of the main branches of mathematics that play a role. then sects. 4-9 sketched the most frequent applications; their titles and their order correspond vaguely to the branches of mathematics concerned. thus there were mathematical pattern analysis in laboratory work and statistical-mathematical models in judging the quality of tests; demographic methods in the collection of data; different ways to model the evolution of the pandemic mathematically; and clinical epidemiology in attempts to develop a vaccine. in this way the article aimed at clarifying the potential role of mathematics in making decisions. on the one hand it turned out that in practice the role of epidemic-curve or compartmental models is much more restricted than advertised in many publications. decisions based on them may even have disastrous consequences, for instance those based on the mathematical concept of herd immunity. thus blind trust in mathematical arguments is unjustified. on the other hand denying the existence of a valid mathematical-scientific foundation for a control strategy is just as detrimental. it was done in denmark with the "tracing and lock-down" strategy by a report of an "expert group" of health academics and officials, which reflected the interests of medical, industrial and governmental circles. this comment leads us to the second part of our "outlook", namely: what to do in the future? the authors of the present article started it in early may with the words "since the covid-19 pandemic is not over ...". while we are finally finishing our work in the middle of july, it is still not over! it is even very active but has taken a largely different form. hence it seems natural to analyse its present characteristics in the light of the facts we have described in sects. 4-9 above and to ask ourselves: which lessons can we draw regarding the control strategies to be applied now? covid-19 no longer surges from a single source. it reappears in small or large regions of many parts of the world, which may be of various forms and extensions: a single home for the elderly in france, two districts in germany, a large city like beijing, an entire province in spain, or a whole country like new zealand. we shall call them "nests" to distinguish them from "clusters", which denote certain discrete sets of people. a precise follow-up of the evolution of cases in these nests meets with the manifold difficulties explained in sects. 5 and 6 and will not be repeated here. a first natural question to ask is, then: why do "active" nests persist and reappear? sect. 3 presented three components of successful control strategies: contact tracing; lock-down; masks.
while contact tracing continues reluctantly, lock-down and the wearing of masks were widely abandoned, often as a result of governmental policies seeking popularity. next, what should be done? in sects. 7, 8 and 9 we have explained, using in particular mathematical arguments, to what extent the strategies of control treated there suffer from serious drawbacks. this leaves us with the combination of two measures: inside a nest, a rigorous lock-down such as social distancing and preventing larger assemblies of people; at its borders, closing them or allowing passage only when combined with quarantine. for example new zealand, regarded as a single nest, has taken such rigorous measures. as a result there are now no new cases, except two cases around the 14th july in "managed isolation facilities". other nests will act similarly, we hope.

[and] the proximal origin of sars-cov-2
[di2] the first epidemic model: a historical note on p.d. en'ko
[di1] daniel bernoulli's epidemiological model revisited
[fer] impact of nonpharmaceutical interventions (npis) to reduce covid-19 mortality and healthcare demand. mrc centre for global infectious disease analysis
[fin] john brownlee and the measurement of infectiousness: an historical study in epidemic theory
[fit] the risks of lifting lockdowns prematurely are very large
[hal] design and analysis of vaccine studies
[kkr] principles of health information systems in developing countries
[kpp] key to public health
[krm] a novel deterministic forecast model for covid-19 epidemic based on a single ordinary integro-differential equation
[mum] pattern theory. the stochastic analysis of real-world signals
[pev] an introduction to bioinformatics algorithms
[sia] alternative corona strategy: we can beat the infection down for the count with quarantine and tracing of infectors

the authors: bernhelm booß-bavnbek, born in 1941, studied mathematics from 1960 to 1965 at bonn university; research, teaching and practical work first in econometrics and operations research and then in geometric analysis and membrane processes of cell physiology. klaus krickeberg: professor at several universities in europe and outside; research, teaching and practical work first in mathematics and then in epidemiology and public health; much of this was done in developing countries.

key: cord-018917-7px75s3c authors: hopkins, richard s.; magnuson, j. a. title: informatics in disease prevention and epidemiology date: 2013-07-29 journal: public health informatics and information systems doi: 10.1007/978-1-4471-4237-9_14 sha: doc_id: 18917 cord_uid: 7px75s3c this chapter provides a description of the components of disease prevention and control programs, and then focuses on information systems designed to support public health surveillance, epidemiologic investigation of cases and outbreaks, and case management. for each such system, we describe sources used to acquire necessary data for use by public health agencies, and the technology used to clean, manage, organize, and display the information. we discuss challenges and successes in sharing information among these various systems, and opportunities presented by emerging technologies. systems to support public health surveillance may support traditional passive case-reporting, as enhanced by electronic laboratory reporting and (emerging) direct reporting from electronic health records, and also a wide variety of different surveillance systems.
we address syndromic surveillance and other novel approaches including registries for reporting and follow-up of cases of cancer, birth defects, lead poisoning, hepatitis b, etc., and population-based surveys (such as brfss or prams). systems to support epidemiologic investigation of outbreaks and clusters include generic tools such as excel, sas, spss, and r, and specialized tool-kits for epidemiologic analysis such as epi-info. in addition to supporting outbreak investigation, agencies also need systems to collect and manage summary information about outbreaks, investigations, and responses. systems to support case management, contact tracing, and case-based disease control interventions are often integrated to some degree with surveillance systems. we focus on opportunities and choices in the design and implementation of these systems.
public health programs to prevent disease typically have been designed and implemented one disease at a time. each disease has its own patterns of distribution in populations, risk factors, and optimal and practical intervention strategies that are effective in controlling, preventing, or even eliminating cases of the disease. for example, an important strategy to prevent measles is vaccination, the main strategy to prevent gonorrhea is antibiotic treatment of case contacts before they become ill themselves, an important strategy to prevent cervical cancer is screening with pap smears and treatment of preclinical disease, and the main strategy for prevention of neural tube defects is folic acid supplementation of selected foods. still, each disease prevention program's components are drawn from a relatively short list:
• planning and evaluation
• public health surveillance
• outbreak or cluster recognition and response
• policy and guidance development
• clinical services
  - screening
  - immunization
  - prophylaxis
  - treatment
• laboratory services
• case-contact identification and interventions
• education and training for clinicians
• public education
• regulation (for example, of food services, drinking water, child-care centers, hospitals, etc.)
• administration and financial management
ideally, program managers choose the most effective combination of these program components to prevent or control the disease or diseases they are charged with addressing. however, as this must be done within the constraints imposed by the available funds, cost-effectiveness is the usual criterion for choosing the preferred combination of program components. public health agencies typically are organized both by disease and by function. for example, each disease-specific program usually does not have its own laboratory, and a single public health clinical facility and its staff may provide varied services such as immunizations for well children, treatment of people with tuberculosis (tb) and their contacts, and pap smear services. to variable degrees, they may even combine activities in a single patient encounter, for example, testing women for gonorrhea and chlamydia trachomatis infections at the same visit where they get a pap smear, or offering hepatitis b vaccination during a visit for sexually transmitted diseases (std) treatment. as information technology has become more widely used in public health and replaced paper-based systems, it has typically been implemented program area by program area, as resources became available. this has led to the creation of information 'silos.'
as information technology has become more widely used in public health and replaced paper-based systems, it has typically been implemented program area by program area, as resources became available. this has led to the creation of information 'silos.' for example, laboratory information systems usually have developed in isolation from those to support clinical care or public health surveillance. information systems to support clinical operations of public health departments (for example, clinical services for stds, childhood immunizations, hiv/aids, tb, or family planning services) have characteristics similar to those of other electronic health record systems in ambulatory care. however, in some health departments, clinical information systems have been separated by disease or clinic. if one were to design information systems from scratch for a set of disease prevention programs, there would be potential savings and effi ciencies from identifying the ways that one program component depends on information from another, or can serve multiple programs, and then designing the system to provide that information seamlessly. one can identify potential effi ciencies from two perspectives: in reality, it is rare to have an opportunity to design such extensive information systems as a single project. one is dealing with numerous legacy systems that were designed to support program-specifi c workfl ows. so a key challenge for the public health informaticist is to help their agency make decisions about where information system 'integration' will yield substantial benefi ts and where it will not. for example, if it is desired to know (one time) how many people in the jurisdiction have been reported during a particular time interval with both syphilis and hepatitis b, one could do an ad hoc match of information in two independent surveillance information systems. this task might take an analyst a few days or weeks to accomplish -which is almost certainly inexpensive compared to the cost of building a new information system that could do this task almost immediately. for many purposes, it may be useful and suffi cient to be able to display multiple streams of surveillance or programmatic data in the same environment, on the same screen or even in the same chart. in florida, de-identifi ed reportable disease case information and death certifi cate information are imported into the essence analytic environment that was originally designed for syndromic surveillance [ 1 ] , so that trends for similar conditions by age, sex, and geographic area in the two data streams can be easily compared. on the other hand, if it is desired to have real-time information available to the std clinic staff about past diagnoses of hepatitis b, or about past receipt of hepatitis b vaccine, then information systems need to be designed to support this kind of look-up; the usual solution is a shared person index between the two systems. alternatively, a common data repository can be designed in which all information about each person is permanently linked. as mentioned earlier, there are a number of components common to disease control and prevention programs. 
as mentioned earlier, there are a number of components common to disease control and prevention programs. in this chapter, we will address information systems designed to support the following:
• public health surveillance
• outbreak or cluster recognition and response
• acquisition of laboratory information
• case-contact identification and intervention
cdc defines public health surveillance as "the ongoing, systematic collection, analysis, and interpretation of health data, essential to the planning, implementation, and evaluation of public health practice, closely integrated with the dissemination of these data to those who need to know and linked to prevention and control" [ 2 ] . each word of this definition is carefully chosen, and has implications for the design of surveillance information systems. a one-time data collection activity is not surveillance. data collection for research purposes is not surveillance. surveillance data are collected to support public health action, and analyses and recommendations based on these data must be shared with those who provided the data and with others who need to know. objectives of surveillance systems differ at the local, state, and federal levels [ 3 ] . at the local level, immediate response to individual cases is relatively more important, while at the federal level the analysis of larger-scale patterns is the most important function of surveillance. for state health departments, both uses of surveillance data may be important, depending on the disease and the size of the state. public health surveillance systems may be based on data capture from a variety of sources, including case reports, population-based surveys, sentinel providers, electronic health records (including laboratory information management systems for elr and emergency department records for syndromic surveillance), or administrative data (like hospital or physician claims for reimbursement). for some noninfectious diseases, surveillance is carried out through registries (see below). information systems to support reportable disease surveillance contain records representing case reports that currently are, for the most part, entered manually into an application by public health staff, based on information received from doctors, infection control practitioners, hospitals, and laboratories. increasingly, the laboratory information in these records comes from electronic records transmitted by the public health laboratory, hospital laboratories, and commercial laboratories, when there is a positive result meeting certain reporting criteria (like a positive igm antibody test for hepatitis a). these records typically contain a combination of clinical, laboratory, and epidemiologic information about each case. in future, increasing proportions of these case reports will be entered directly into a website by the practitioner creating the case report, or be transmitted electronically from the practitioner's electronic health record (ehr) system. currently almost half the states in the us use the cdc-provided nedss base system (nbs) as their platform for managing case reports. the remainder use either a system developed in-house or one of several commercially-available solutions [ 4 ] . in case-based surveillance practice, there is usually a relatively short list of required elements in the initial case report. for some diseases this is the only information received on all cases.
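a minimal sketch of how an incoming electronic laboratory result might be screened against a reporting criterion and turned into a short initial case report record is given below. the field names, the criterion, and the records are invented for illustration and do not reflect any particular jurisdiction's reporting rules or the nedss data model.

```python
# toy illustration: screen electronic laboratory results against a reporting criterion
# and emit a minimal initial case report. all field names and values are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabResult:
    patient_name: str
    dob: str
    county: str
    test_name: str
    result: str
    result_date: str

@dataclass
class InitialCaseReport:
    condition: str
    patient_name: str
    dob: str
    county: str
    report_date: str

def to_case_report(lab: LabResult) -> Optional[InitialCaseReport]:
    # example criterion: a positive hepatitis a igm antibody result is reportable
    if lab.test_name == "hepatitis a igm" and lab.result == "positive":
        return InitialCaseReport("hepatitis a", lab.patient_name, lab.dob, lab.county, lab.result_date)
    return None  # not reportable under this (illustrative) criterion

incoming = [
    LabResult("jane doe", "1985-02-11", "orange", "hepatitis a igm", "positive", "2013-04-02"),
    LabResult("john smith", "1990-07-30", "duval", "hepatitis a igm", "negative", "2013-04-02"),
]
reports = [r for lab in incoming if (r := to_case_report(lab)) is not None]
print(reports)
```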
for other diseases, usually of more importance and with lower case numbers, an additional data collection form is initiated by the receiving health department, which gathers information as appropriate from the ill person, the treating physician, and health records. the optimum amount of information to collect in the initial case report, as opposed to the disease-specifi c case report form, is a matter of judgment and may change as technology changes. in a largely manual system, health departments typically desire to minimize barriers to reporting of cases, so the incentive is to keep the initial case report form short. if much of the information desired for the disease-specifi c case report form can in fact be extracted from an electronic medical record with no additional effort by the person making an electronic case report, then the balance changes. careful decisions are needed: for which cases of which diseases are follow-up interviews necessary [ 5 ] ? until very recently, virtually all of the case-based surveillance information used at the federal level was collected initially at the local (or sometimes state) level, where it was used in the fi rst instance for local response. as the case report information passes from the local to the state to the federal level, it is subjected to validation and cleaning: cases not meeting the surveillance case defi nition have been removed from the data submitted to the federal level, missing data have been fi lled in to the extent possible, and cases have been classifi ed as to whether they are confi rmed, probable, or suspected using standard national surveillance case defi nitions (these case defi nitions are developed by the council of state and territorial epidemiologists in consultation with cdc) [ 6 ] . more recently, advances in technology have allowed case reports, and the information on which they are based, to move almost instantaneously from electronic health record systems, maintained by doctors, hospitals, and laboratories, to public health authorities. there are no technical barriers to these data being available at the federal level essentially as early as they are at the local and state levels. this ready availability of unfi ltered clinical information may allow more rapid awareness by public health offi cials at all levels of individual cases of high-priority diseases (like botulism or hemorrhagic fevers like ebola virus infection), and thus lead to more rapid detection and characterization of likely outbreaks. the simultaneous availability of raw data to multiple agencies at different levels of government also presents certain challenges. the user at the local level will have ready access to information from many sources about local conditions and events, and can use this information to interpret local observations. they will be in a position to understand when an apparent anomaly in their surveillance data is due to an artifact or to local conditions that are not a cause for alarm. they will also know whether a problem is already under investigation. a user at a state or federal level will be able to see patterns over a larger area, and thus may be able to identify multijurisdictional outbreaks, patterns, or trends that are not evident at a local level. the fact that several users may be examining the same raw data at the same time requires that these multiple users be in frequent communication about what they are seeing in their data and which apparent anomalies are already explained or need further investigation. 
there is a danger that users at a higher level may prematurely disseminate or act on information that, while based on facts, is incomplete or misleading. similarly, users at a local level may not realize that what they are seeing is part of a larger phenomenon. in the syndromic surveillance domain, the biosense 2.0 governance group [ 7 ] has adopted a set of etiquette principles which participating jurisdictions will be required to agree to, that spell out the mutual obligations of analysts at each level of the system (scott gordon , association of state and territorial health offi cials, 2013, personal communication). from an information management perspective, an important question is where to put human review of case reports in this information fl ow. for example, it is becoming technically possible for likely cases of reportable diseases to be recognized automatically in health care electronic record systems. some of these could be passed on to public health authorities without human review, in the same way that reportable laboratory results are already passed on in electronic laboratory reporting (elr). for which constellations of fi ndings in the electronic health record would this be appropriate? should some electronic case reports generated by electronic health record systems be passed to state or even federal public health offi cials before they are reviewed and validated at the local or state levels? if so, which ones? as always, there is a tension between the speed of information fl ow and its quality and completeness. there is a need for research to determine which constellations of fi ndings in electronic health records have adequate specifi city and sensitivity to warrant automated identifi cation of a person as being likely to have a case of a reportable disease. the acceptable sensitivity and specifi city will vary by disease. in 2001, cdc published the updated guidelines for evaluating public health surveillance systems [ 8 ] . this document identifi es a set of key attributes of surveillance systems to be assessed during a surveillance system evaluation, including simplicity, fl exibility, data quality, acceptability, sensitivity, predictive value positive, representativeness, timeliness, and stability. these are also useful attributes to consider when designing a surveillance information system [ 9 ] . the relative importance of these attributes will vary depending on the condition under surveillance and the main purposes for surveillance. for example, a surveillance system to detect cases of botulism for immediate public health response puts a high premium on timeliness, and its operators are likely to be willing to accept a modest number of false-positive reports (a lower positive predictive value ) in order to assure that reports are received very quickly. on the other hand, surveillance to support planning of cancer prevention programs and treatment services is less time-sensitive, given the quite long incubation periods for most cancers, and therefore more concerned with diagnostic accuracy of every case report than with speed of reporting. timeliness, positive predictive value, and sensitivity of a public health surveillance system are always in tension with each other; increasing two of these always compromises the third. 
in systems based on case-reporting from doctors, hospitals, and laboratories, and receipt of electronic health records from these same organizations, records for an individual can in principle be linked with records for that same individual in numerous public health information systems, including those supporting clinical service, immunization registries, case investigation, partner or contact identifi cation, partner or contact notifi cation, and provision of interventions to partners or contacts. sometimes this will be done best by automated messaging of structured data from one system to another, sometimes by supporting real-time look-up capabilities, and sometimes by development of a master person index to underlie some or all of these applications. one key decision is which application to consider as the hub for this information sharing, for example, the surveillance application itself or a clinical application. surveillance systems that are based on sample surveys (such as the behavioral risk factor surveillance system, brfss [ 10 ] ), on sentinel practices (such as ili-net for surveillance of infl uenza-like illness [ 11 ] ) or on syndromic surveillance do not have individual patient identifi ers, and so intrinsically cannot be linked at the individual level to information systems supporting other disease control program components. their data are typically managed in systems built on standard statistical software packages, or other independent systems. syndromic surveillance systems are based on rapid acquisition of unfi ltered, real-time, electronic records without individual identifi ers from hospital emergency rooms [ 12 ] and urgent care centers, and also, increasingly, from outpatient physicians' offi ces and from hospital admissions [ 13 ] . the primary purpose of these systems is to support detection and characterization of community disease outbreaks, as they are refl ected in care received at emergency departments, physicians' offi ces, or hospitals. each visit to an emergency department is assigned to a category or syndrome , based on words and strings contained in the patient's chief complaint and/or the triage nurse's notes. as the records received by the health department do not have individual identifi ers, they cannot be linked to records in other information systems. however, records received by the syndromic surveillance system should contain unique identifi ers that could allow the epidemiologist analyzing the data to work back through the sending facility to an identifi ed clinical record. this traceback might become necessary if the person appeared to have a case of a reportable disease or to be part of a signifi cant outbreak. adding outpatient visits and hospital admissions to the scope of syndromic surveillance is opening up additional uses for this technology, especially in the areas of real-time non-infectious disease surveillance. surveillance for cancers [ 14 ] , stroke [ 15 ] , birth defects [ 16 ] , and some other chronic diseases like amyotrophic lateral sclerosis (als) is carried out through registries. registries are usually established by specifi c legislation, and typically relate to a single topic -for example a registry of records for a disease, or of immunization records. registries may be restricted to a geographic region. 
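returning to the syndromic surveillance step described above, where each emergency department visit is assigned to a syndrome based on words and strings in the chief complaint, a minimal keyword-matching sketch might look like the following. the syndrome names and keyword lists are invented for illustration only; operational systems maintain far richer, curated term sets, handle negation and misspellings, and may use statistical classifiers.

```python
# illustrative keyword lists; real systems maintain much larger, curated term sets
SYNDROME_KEYWORDS = {
    "influenza-like illness": ["fever", "cough", "flu", "body ache", "chills"],
    "gastrointestinal":       ["vomit", "diarrhea", "nausea", "stomach"],
    "rash":                   ["rash", "hives"],
}

def classify_visit(chief_complaint: str) -> list[str]:
    """Return every syndrome whose keywords appear in the free-text chief complaint."""
    text = chief_complaint.lower()
    matches = [s for s, words in SYNDROME_KEYWORDS.items()
               if any(w in text for w in words)]
    return matches or ["unclassified"]

print(classify_visit("3 days of fever and dry cough"))   # ['influenza-like illness']
print(classify_visit("twisted ankle playing soccer"))     # ['unclassified']
```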
a distinctive feature of registries is that individual case reports are kept open for long periods of time, up to several or many years, allowing additional information about treatment, hospitalization, and death or other outcomes to be added. registries thus serve as systems to monitor type, duration, and outcome of treatment for these diseases, in addition to the occurrence of new cases of disease (disease incidence ). they may also support outreach efforts to patients or their families, as a way to document that appropriate steps have been taken to link patients to needed types and sources of care. most cases recorded in state-level cancer registries are acquired from hospitallevel registries, using an electronic case report in a standardized format [ 17 ] . some case abstracts are obtained directly by registry personnel or contractors, when hospitals do not have suitable registries of their own. case reports require extensive review and abstraction of medical records by trained workers. birth defect registries may also be built by active search for cases in hospital and other medical records, and abstraction of those records to make case reports. they also may be built by electronically linking records from vital statistics (birth and death records), centralized hospital discharge record systems, and clinical service providers for children with birth defects (such as state programs for children with special medical needs) [ 18 ] . the latter are much less expensive to develop but cannot be assumed to have captured all cases of the disease under surveillance, or captured them correctly [ 19 ] . a disease outbreak is defi ned as a number of cases greater than the number expected during a particular time interval in a geographic area or population. this term usually is used for events due to infectious diseases, and sometimes for those of toxic origin. a similar increase above expected numbers for a non-infectious disease, such as birth defects or cancer, is usually called a cluster . outbreaks and clusters may be due to diseases for which individual cases are reportable (like shigellosis or breast cancer), or diseases for which they are not (like food poisoning due to staphylococcal or clostridium perfringens toxins in most states, sars when it was new, or multiple sclerosis). surveillance systems are designed to facilitate recognition of outbreaks or clusters by frequent examination of the most current information available. the design of the user interface is particularly important. the interface should allow users to: fl exibly display line lists, bar charts by date of event (epidemic curves), and maps of location of cases; fl exibly select subsets of cases for display; apply appropriate statistical tests to detect improbable increases in case counts; and display multiple streams of data on the same chart. for example, users may want to display the epidemic curve of an infl uenza outbreak for several different regions of a state or for several different age groups, or to display counts of positive infl uenza tests and emergency department visits for infl uenza-like illness on the same graph with different scales for each. syndromic surveillance systems have been leaders in developing and evaluating statistical algorithms for automated detection of anomalies which may, on investigation, turn out to be outbreaks. such algorithms have less frequently been applied for automated detection of possible outbreaks or clusters in reportable disease data streams. 
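the statistical detection step mentioned above can be illustrated with a very simple baseline-comparison rule in the spirit of the ears c-series aberration-detection algorithms: compare the most recent day's count to the mean and standard deviation of a recent baseline window. the window length, guard band, and threshold below are arbitrary choices made only for illustration, not a recommended configuration.

```python
from statistics import mean, stdev

def flag_anomaly(daily_counts, baseline_days=7, guard_days=2, threshold=3.0):
    """Flag the most recent day if its count exceeds baseline mean + threshold * SD.

    daily_counts: list of daily case counts, oldest first, most recent last.
    """
    if len(daily_counts) < baseline_days + guard_days + 1:
        return False  # not enough history to form a baseline
    baseline = daily_counts[-(baseline_days + guard_days + 1):-(guard_days + 1)]
    mu, sd = mean(baseline), stdev(baseline)
    return daily_counts[-1] > mu + threshold * max(sd, 1.0)  # floor the SD to avoid a zero threshold

counts = [2, 3, 1, 2, 4, 2, 3, 2, 3, 12]   # sudden jump on the last day
print(flag_anomaly(counts))                 # True
```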
most outbreaks and clusters are in fact not recognized by examination of regularly-collected surveillance system data. instead, they are recognized by private citizens (such as the organizer of a social event, a teacher or school nurse, the manager of a child care center, the manager of a food service facility, an employer, or the ill people themselves) or by practicing doctors, and brought to public health attention via a phone call or e-mail or entry on a web site established for the purpose [ 20 ] . public health workers assess the information and make the decision whether or not to do a formal investigation of the outbreak. one part of such an assessment is to look at available streams of surveillance data and determine whether there is information supporting the occurrence of an outbreak. for example, a report of a possible infl uenza outbreak in a high school might prompt closer examination of syndromic surveillance data from nearby hospital emergency departments to determine whether there is a more general increase in visits for infl uenza-like illness. a report of a neighborhood cluster of brain cancers would prompt closer examination of available cancer registry information, which might or might not support an interim conclusion that such a cluster is real and statistically signifi cant. in order to be accountable for the effectiveness of their work, local and state health departments need to track the occurrence of outbreaks and the public health response to those outbreaks. since outbreaks can be due to reportable or nonreportable diseases, this cannot be done only by actions such as identifying some cases in the reportable disease data system as being part of an outbreak. systems to track the occurrence of outbreaks need to document the following: • time and date the fi rst and last cases occurred • total (estimated or counted) number of cases • population group most affected (by age, sex, location) • setting of the outbreak (school, workplace, restaurant, wedding, etc.) • suspected or confi rmed agent • most common clinical presentation • suspected or confi rmed source and mode of spread • methods used to investigate agent, source and mode of spread • control measures recommended • control measures implemented • lessons learned for prevention of future outbreaks and improved investigation and response in future events this information about outbreaks should be stored for ready retrieval, and to serve as a basis for quality improvement efforts. for quality improvement purposes, it is also helpful to document the content of the summary report written about each outbreak. when the outbreak is due to a reportable disease, individual cases in the reportable disease surveillance information system can be linked to the outbreak, for example by having an outbreak identifi er attached to their records. if preliminary information about outbreaks in a jurisdiction is entered into the outbreak information system in real time, as the investigation is proceeding, and if the outbreak database is readily searchable by all communicable disease investigators in the jurisdiction, then local investigators can use the outbreak database to help them with investigations of new illness or outbreak complaints [ 21 ] . for example, if they receive a complaint that illness has occurred in people who consumed a particular food product, they can look in the database and determine whether other recent or current complaints or outbreaks mention the same food product. 
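a minimal sketch of how the outbreak-tracking fields listed above might be represented as a record in code is shown below. the field names mirror the bulleted list but are otherwise invented, and a production system would of course sit on a shared, searchable database rather than in-memory objects.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class OutbreakRecord:
    outbreak_id: str
    first_case: date
    last_case: Optional[date]            # None while the outbreak is still open
    estimated_cases: int
    population_affected: str             # e.g. "children 5-10, county X"
    setting: str                         # school, workplace, restaurant, wedding, ...
    suspected_agent: Optional[str]
    clinical_presentation: str
    suspected_source_and_mode: Optional[str]
    investigation_methods: list[str] = field(default_factory=list)
    control_measures_recommended: list[str] = field(default_factory=list)
    control_measures_implemented: list[str] = field(default_factory=list)
    lessons_learned: str = ""

# a searchable in-memory "database", queried here by suspected food product, for illustration
outbreaks = [OutbreakRecord("OB-2013-017", date(2013, 6, 1), None, 14,
                            "wedding guests", "wedding reception",
                            "Salmonella (suspected)", "gastroenteritis",
                            "potato salad (suspected)")]
print([o.outbreak_id for o in outbreaks
       if "potato salad" in (o.suspected_source_and_mode or "")])
```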
if they receive a report about a gastroenteritis outbreak in a childcare center, they can determine what agents have been found to be responsible for recent or current similar outbreaks in nearby communities; this can help focus their laboratory testing and initial control strategies. some us states have had long-standing systems to document all outbreaks investigated by local or state personnel, but others have not. a major variable in the design of such systems is the state-local division of responsibilities in each state, including the degree of state oversight of 'routine' local outbreak investigations. the actual investigation of an outbreak or cluster may involve enhanced "active" case-fi nding, use of case-report forms, group surveys, and formal epidemiologic studies. active case-fi nding involves regular solicitation of case reports from doctors, hospitals, and laboratories. managing the reports of possible, probable, and confi rmed cases that are part of the outbreak is an important task. for a reportable disease, the jurisdiction's reportable disease surveillance system may be adequate to manage reported cases. it may be necessary, however, to create a continuouslyupdated line list of possible cases and their current status, which is outside the scope of the standard reportable disease application. outbreak investigation surveys will typically involve interviewing everyone with a possible exposure (like all attendees of a wedding reception), whether they were ill or not. formal studies may involve interviewing selected non-ill people, for example, as part of a case-control study. the investigation may also involve obtaining and sending to a laboratory a large number of specimens from ill persons, and sometimes from exposed non-ill persons and from environmental sources (food, water, air, soil, etc.). managing these disparate types of information is a challenge, especially in a large outbreak or one involving multiple jurisdictions. there is currently no one widely-accepted and satisfactory way to manage data in such settings. each investigation team typically uses the tools it is most familiar with, including some combination of data management tools like ms excel, ms access, or epiinfo [ 22 ] , and standard statistical packages. many health departments maintain libraries of standard questionnaires with associated empty data bases, for use during outbreak investigations. when cdc is involved in a multistate outbreak, the investigation team at the local or state level needs to be able to produce and transmit timely case report and other information in the format desired by cdc. the services of an experienced public health informaticist can be extremely helpful to the investigation team when outbreaks are large and multifocal. an ongoing challenge for cdc and the states is how to make the transition from specialized case reporting during an outbreak of a new disease, such as west nile virus encephalitis or sars, to routine case-based surveillance. if this transition is not well-managed, it is likely to result in the creation of a permanent stand-alone surveillance information system (or silo) for that disease. if the new disease is of national importance, cases should be made nationally notifi able and its surveillance should be incorporated into existing systems. laboratory information is a critical component of disease surveillance and prevention. laboratory data form the foundation of many surveillance systems. 
there are different types of laboratories involved in the public health data stream. laboratories providing data to public health fall into the general categories of commercial or private industry, hospital or clinical, and public health laboratories. public health laboratory information systems (lis) contain information about test results on specimens submitted for primary diagnosis, for confi rmation of a commercial or hospital laboratory's results, for identifi cation of unusual organisms, or for further characterization of organisms into subgroupings (like serotypes) that are of epidemiologic importance. in some states, all clinical laboratories must submit all isolates of certain organisms to the public health laboratory. many of the results obtained in a public health laboratory turn out to be for diseases that are not reportable and not targets of specifi c prevention programs. some of those results may, however, be for cases of non-reportable diseases that are historically rare in the jurisdiction but of great public health importance, or are new or newly-recognized. the main business of clinical laboratories (located both inside and outside hospitals) is to test specimens for pathogens or groups of pathogens specifi ed by the ordering physician, and return the results to the person who ordered the test. public health agencies have, since the early 1990s, asked or required such laboratories to also identify results meeting certain criteria (indicating the presence of a case of a reportable disease) and send a copy of the results to the public health agency for public health surveillance. initially, case reporting by laboratories was accomplished on paper forms, which were mailed or faxed to public health departments. some laboratories very soon moved to mailing printouts of relevant laboratory results, then to sending diskettes, then to transferring computerized fi les containing laboratory results by direct modem-to-modem transfer, and eventually to transferring such fi les via the internet using standard formats and vocabularies. in some states, public clinics (for example, std clinics) have used contract laboratories for their testing needs. in this situation, the outside laboratory supplies both positive and negative results to the public health agency, increasingly by transfer of electronic results in standard formats. laboratories provide data on reportable conditions to their local or state public health authority. reportable diseases are determined by each state; clinicians, hospitals, and/or laboratories must report to public health when these conditions are identifi ed. some reportable conditions are also nationally notifi able. deidentifi ed cases of these are voluntarily notifi ed by states and territories to cdc, which, in collaboration with the council of state and territorial epidemiologists, maintains a listing of nationally notifi able conditions that includes both infectious (e.g., rabies, tb) and non-infectious (e.g., blood lead, cancer) conditions [ 23 ] . the public health partnership with laboratories has led to the very successful and still increasing implementation of electronic laboratory reporting (elr) in the us. elr refers to the secure, electronic, standards-based reporting of laboratory data to public health. elr implementation has been steadily escalating since its inception around the year 2000, replacing previous reporting systems that relied on slower, more labor-intensive paper reporting. 
the elr national working group conducted annual surveys from 2004 to 2011 [ 24 ] which gathered data from all 50 states as well as from several territories and large metropolitan areas. these data were supplemented with data for years 2000-2004, retroactively gathered in the 2010 survey. the tracked growth of elr (fig. 14.1 ) illustrates its rapid rise in the us, from the start of early stage planning to fully operational elr [ 25 ] . the expected benefi ts of elr include more rapid reporting of reportable cases to public health departments, allowing faster recognition of priority cases and outbreaks for investigation and response, and thus more effective prevention and control [ 26 ] . elr also is expected to reduce the number of missed cases, as automated systems do not require laboratory staff to actively remember to make case reports, and to improve the item-level completeness and quality of case reports. although experience shows that the expected improvements in timeliness, sensitivity, completeness, and accuracy are generally being realized [ 27 ] , timeliness may not be improved substantially for those diseases where clinicians routinely report based on clinical suspicion without waiting for laboratory confi rmation (for example, meningococcal disease) [ 28 ] . in addition, laboratories (especially referral laboratories) often do not have access in their own information systems to home addresses for people whose specimens they are testing, and have struggled with providing complete demographic information to public health agencies. implementation of an operational elr system is not a trivial undertaking. laboratories must confi gure data into an acceptable message format, most commonly health level seven (hl7 ® ) [ 29 ] . laboratory tests and results should be reported with correlated vocabulary or content codes. two of the most common code systems used for laboratory tests and their associated results are logical observations identifi ers names and codes (loinc ® ) [ 30 ] and systematized nomenclature of medicine (snomed ct ® ) [ 31 ] . neither of these systems is suffi cient by itself to encode all the information needed for public health surveillance. public health jurisdictions have introduced elr to their partner laboratories using one or more of the following approaches: • the "charm" approach -relies on establishing goodwill and collaboration with laboratory partners. while this collegial approach is very appealing, it may be unable to overcome signifi cant barriers such as lack of laboratory funding or resources, and some facilities will supply data only in methods specifi cally required by law. • the incentive approach -involves offering either fi nancial or technical assistance to laboratory partners, assisting them in the startup process of elr. while this approach may be preferred by many laboratories, relatively few jurisdictions have the discretionary funds (or are able to receive federal assistance funds) to implement the approach. • the enforcement or legislative approach -requires reporting rules or legislation that requires laboratories to participate in elr. the most successful enforcement approach will include low-cost options for smaller laboratories, such as web data entry, so that they may benefi t from an elr -"lite" implementation [ 32 ] . the mainstreaming of elr systems in the us has pioneered a clear path forward for public health to begin maximizing its presence in the domain of electronic data interchange. 
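to give a feel for what elr processing involves, the sketch below parses a highly simplified, pipe-delimited message loosely modeled on hl7 v2 segments (msh/pid/obx) and checks the coded test against a jurisdiction's reportable-condition list. this is not a conformant elr message: the message content, the observation code "12345-6", and the reportable-condition table are placeholders invented for illustration, and real elr requires standards-based hl7 messaging with curated loinc and snomed ct value sets.

```python
# a deliberately simplified, NOT conformant, HL7-v2-style message for illustration only
RAW_MESSAGE = (
    "MSH|^~\\&|LAB|ACME HOSPITAL|ELR|STATE HEALTH DEPT|20130601||ORU^R01|123|P|2.5.1\r"
    "PID|1||MRN0042||DOE^JANE||19800402|F\r"
    "OBX|1|CWE|12345-6^Hepatitis A virus IgM Ab^LN||POSITIVE||||||F\r"
)

# placeholder reporting criteria: observation code -> condition name
REPORTABLE = {"12345-6": "hepatitis A (acute)"}

def reportable_results(raw: str):
    """Yield (condition, value) for OBX segments whose code is on the reportable list."""
    for segment in raw.strip().split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX":
            code = fields[3].split("^")[0]       # OBX-3, observation identifier
            value = fields[5]                    # OBX-5, observation value
            if code in REPORTABLE and value.upper().startswith("POS"):
                yield REPORTABLE[code], value

print(list(reportable_results(RAW_MESSAGE)))   # [('hepatitis A (acute)', 'POSITIVE')]
```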
at a local level, case reports for communicable diseases prompt action. although the specifi c action varies by disease, the general approach is the same. it starts with an interview of the ill person (or that person's parents or other surrogates) to determine who or what the person was in contact with in ways that facilitate transmission, both to determine a likely source of infection and to identify other people who may be at risk from exposure to this person. information systems to support contact tracing, partner notifi cation, and postexposure prophylaxis (for stds or tb, for example) contain records about all elicited contacts (exposed persons) for each reported case of the disease in question. these records contain information about each contact, such as whether they were located, whether they received post-exposure prophylaxis, and the results of any additional partner-elicitation interviews or clinical testing that were completed. information systems to support surveillance for other reportable diseases also increasingly contain information about what disease-appropriate action was taken in response to each case; such actions may include identifi cation of contacts, education of household members, vaccination or antibiotic prophylaxis of contacts, isolation of the case (including staying home from work or school), or quarantine of exposed people. std and tb information systems typically capture full locating information for contacts, and can be used both to support fi eld work and to generate statistics on effectiveness of partner notifi cation activities worker by worker and in the aggregate. systems for other reportable diseases may capture only the fact that various interventions were done, and the date that these were initiated. information about the timeliness of initiation of recommended control measures is now required as a performance measure for selected diseases by cdc's public health emergency preparedness cooperative agreement [ 33 ] . in the investigation of a case of meningococcal disease, contacts are people who had very close contact with the original person, for example a household member, boyfriend, or regular playmate. health department staff determines who the close contacts are. each will then be offered specifi c antibiotic treatment to prevent illness. for syphilis, contacts are people who have had sex with the original case. contacts will be examined by a clinician and assessed serologically to see if they are already infected, and offered appropriate prophylactic or curative antibiotic treatment. for measles, contacts may include anyone who spent even a few minutes in the same room as a case. contacts whose exposure was recent enough, and who are not fully immunized already, will receive a dose of measles-containing vaccine, and all contacts will be asked to self-isolate immediately if they develop symptoms of measles. in investigating a common-source outbreak of legionellosis, histoplasmosis, or anthrax, the local health department may want to locate everyone who had a specifi ed exposure to the apparent source of the infection. these exposed people may need antibiotic prophylaxis or may be advised to seek medical care promptly if they become ill. information systems to support this type of work typically have three purposes: 1. serve as a place for workers to record and look up information about people who are or may be contacts, and to track which contacts have and have not yet received needed interventions. 2. 
serve as a source of information for calculating indices of program or worker timeliness and performance, such as the average number of sexual contacts elicited per syphilis patient interviewed, or the percentage of measles contacts who were identifi ed in a timely way and who received post-exposure measles vaccine prophylaxis. 3. document the workload and effort put in by epidemiology and disease control fi eld staff it seems logical that the surveillance information system should serve as the basis for a system to support fi eld investigation, and this is often the case. the fact that the recommended interventions vary by disease makes designing a single system more complex. existing systems that track fi eld worker activities in detail are much more common for std and tb programs than for others. for general communicable disease fi eldwork, it is currently more common that the system simply documents which interventions were done and when, rather than use the application to track specifi c named contacts or exposed people. the public health informatics institute has published a detailed analysis [ 34 ] of the typical workfl ow involved in surveillance, investigation, and intervention for reportable diseases, and the corresponding information system requirements. the work group that phii convened had representatives of nine different state and local health departments, who were able to identify a large number of processes that were common to all nine jurisdictions, such as case-fi nding, case investigation, data analysis and visualization, monitoring and reporting, case/contact specifi c intervention, and others. these common processes can then serve as a basis for designing information systems to support case-reporting, surveillance, and case-based intervention work that are useable in multiple jurisdictions. consider existing or planned surveillance systems for multiple diseases and conditions. broadly, there are three functions in each of these systems -acquiring the raw data, cleaning and managing the data, and making the data available to users. each of these functions potentially can be integrated, to varying degrees. for example, multiple surveillance systems may benefi t from receiving electronic laboratory reports with a result indicating the presence of a case of a reportable disease. laboratories appreciate having a single set of instructions and a single destination for all their required reports, as this simplifi es their work. the laboratories then benefi t from the ability of the recipient health department to route the reports internally to the right surveillance information system. at the other end of the data pathway, users appreciate having a single interface with which to examine data about multiple conditions or diseases, using the same commands and defi nitions. the users do not have to understand how different surveillance information systems may internally code the same concept in different ways. they also appreciate being able to directly compare information that originally was submitted for the use of different program areas -for example, hepatitis b and gonorrhea in the same chart or table. in the short to medium term, it is not necessary to build a single integrated data repository or a master person index to achieve these goals, even if that is what one would have designed if one were starting from the beginning. 
however, if one wants to be able to see information about the same person that originates and is stored in multiple systems - for example, so that tb clinicians can see hiv data on their patients and vice versa - then an integrated data repository, or a master person index, or a query system that is extremely accurate in finding data on the right person, is needed. modifying existing systems to be able to carry out these functions is time consuming and expensive, so the business case and requirements need to be especially clear.
florida's essence system: from syndromic surveillance to routine epidemiologic analysis across syndromic and nonsyndromic data sources (abstract)
history of public health surveillance
blueprint for a national public health surveillance system for the 21st century
status of state electronic disease surveillance systems - united states
prioritizing investigations of reported cases of selected enteric infections. paper presented at council of state and territorial epidemiologists
nationally notifiable disease surveillance system case definitions
association of state and territorial health officers. biosense 2.0 governance
updated guidelines for evaluating public health surveillance systems, recommendations from the guidelines working group
design and operation of local and state infectious disease surveillance systems. oxford handbook of public health practice
behavioral risk factor surveillance system
overview of influenza surveillance in the united states
international society for disease surveillance meaningful use workgroup. final recommendation: core processes and ehr requirements for public health syndromic surveillance
electronic syndromic surveillance using hospital inpatient and ambulatory clinical care electronic health record data
national program of cancer registries
coverdell national acute stroke registry surveillance - four states
atlanta congenital defects program (macdp)
north american association of central cancer registries, inc. (naaccr). implementation guidelines and recommendations
report on birth defects in florida
a comparison of two surveillance strategies for selected birth defects in florida
online food and waterborne illness complaint form
biosurveillance plan for human health, version 2.0. atlanta
national notifiable diseases surveillance system (nndss), cdc. accessed at http://wwwn.cdc.gov/nndss/
available from www.coast2coastinformatics.com
national electronic laboratory reporting (elr) snapshot survey. available from www.coast2coastinformatics.com
statewide system of electronic notifiable disease reporting from clinical laboratories: comparing automated reporting with conventional methods
a comparison of the completeness and timeliness of automated electronic laboratory reporting and spontaneous reporting of notifiable conditions
potential effects of electronic laboratory reporting on improving timeliness of infectious disease notification - florida
health level seven (hl7®) homepage
logical observation identifiers names and codes (loinc®)
systematized nomenclature of medicine-clinical terms (snomed ct®)
see for example section 64d-3.031(5) of the florida administrative code: notification by laboratories
public health emergency preparedness cooperative agreement, budget period 1, performance measures specification and implementation guidance, at-a-glance summary
redesigning public health surveillance in an ehealth world
1. what are some of the methods for surveillance besides case-reporting?
2. how are registries different from other surveillance information systems?
3. what are the advantages and disadvantages of building a master person index across surveillance information systems for multiple diseases?
4. what are the expected benefits of electronic laboratory reporting as a method to enhance surveillance?
5. what are the advantages and disadvantages of building a system to manage information about case contacts as part of the surveillance information system?
6. who determines for which diseases cases are nationally notifiable?
key: cord-151030-5x3ztp1n authors: piasecki, tomasz; mucha, piotr b.; rosińska, magdalena title: a new seir type model including quarantine effects and its application to analysis of covid-19 pandemia in poland in march-april 2020 date: 2020-05-29 journal: nan doi: nan sha: doc_id: 151030 cord_uid: 5x3ztp1n
contact tracing and quarantine are well established non-pharmaceutical epidemic control tools. the paper aims to clarify the impact of these measures in the covid-19 epidemic. a new deterministic model is introduced (seirq: susceptible, exposed, infectious, removed, quarantined) with the q compartment capturing individuals and releasing them with delay. we obtain a simple rule defining the reproduction number $\mathcal{R}$ in terms of quarantine parameters, the ratio of diagnosed cases and transmission parameters. the model is applied to the epidemic in poland in march-april 2020, when social distancing measures were in place. we investigate 3 scenarios corresponding to different ratios of diagnosed cases. our results show that, depending on the scenario, contact tracing could have prevented from 50% to over 90% of cases. the effects of quarantine are limited by the fraction of undiagnosed cases. taking into account the transmission intensity in poland prior to the introduction of social restrictions, it is unlikely that control of the epidemic could be achieved without any social distancing measures.
the epidemic of sars-cov-2 infection triggered an unprecedented public health response. given the lack of effective vaccine and treatment in 2020, this response included a variety of travel restrictions and social distancing measures [2]. while these measures help to slow down the epidemic, they come at significant economic and societal cost [1]. as an alternative, an approach focusing on rapid diagnosis is increasingly recommended [21], and prior to lifting social distancing measures large-scale community testing should be in place [36]. testing efforts are complemented by identifying and quarantining contacts of the diagnosed cases. of note, by isolating the asymptomatic contacts from their social networks, this strategy takes into account the pre-symptomatic and asymptomatic spread of the infection [9, 10], believed to be one of the key drivers of the fast spread of covid-19. as an example, widespread testing in the general population followed by isolation of the infected helped to reduce covid-19 incidence by 90% in the italian village of vo' euganeo [14]. a modelling study in france offers similar conclusions, arguing that relaxing the social lockdown will only be feasible in case of extensive testing [22]. while there are already a number of studies estimating the effects of general social distancing measures [25, 2, 22, 26], less is known about the impact of quarantine. hellewell et al.
[27] investigated the potential of rapid isolation of cases and contact tracing to control the epidemic, finding that prohibitively high levels of timely contact tracing are necessary to achieve control. however, new technologies may offer a sufficiently fast alternative to traditional contact tracing, in which case the epidemic could still be controlled by contact tracing [28]. our aim is to develop a seir-type model which incorporates the effects of quarantine and validate it in a setting in which measures to reduce contacts are in place. we apply it to investigate the role of quarantine in poland. the first case of covid-19 in poland was diagnosed on march 4th. social distancing measures were rapidly introduced during the week of 9-13th march, including closure of schools and universities, cancellation of mass events and closure of recreation facilities such as bars, restaurants, gyms etc. as well as shopping malls. religious gatherings were limited. finally, borders were closed for non-citizens [24]. these measures were fully in place on march 16th. further, beginning on march 25th, restrictions on movement and travel were introduced (lockdown). wearing face covers became obligatory on april 14th. the restrictions were gradually lifted beginning on april 20th. we focus on modelling the time period when the social distancing measures were in place and then consider different scenarios of relaxation of the restrictions with possible improvement of testing and contact tracing. we note that the procedures for quarantine were in place even before the social distancing measures. they initially focused on individuals arriving from covid-19 affected areas in china. when the epidemic started spreading in european countries, people who came back to poland from these countries were advised to immediately seek medical attention if they experienced any symptoms consistent with covid-19. however, adherence to these recommendations was not evaluated. as soon as the first case was diagnosed in poland, quarantine for close contacts was also implemented. this paper aims to define a deterministic population model describing the epidemic in classical terms of susceptible, exposed, infectious, removed. in our model the quarantine becomes a separate state that removes individuals from the susceptible and exposed states. we show that the reproductive number in our model is given by a simple formula referring to the parameters of transmission and transition, but also to parameters describing the quarantine. we demonstrate that in a real-life scenario (case study of poland) the quarantine effectively reduces the growth of the infectious compartment. increasing the efficiency of contact tracing and testing may to some extent compensate for lifting the social distancing restrictions. we introduce a modification of the classical seir model including effects of quarantine. to underline the importance of this extension we call it seirq. formally the model is described by a system of ordinary differential equations with a delay dedicated to the quarantine. the following states are included in the model:
s(t) - susceptible
e(t) - exposed (infected, not infectious)
i_d(t) - infectious who will be diagnosed
i_u(t) - infectious who will not be diagnosed
r_d(t) - diagnosed and isolated
r_u(t) - spontaneously recovered without being diagnosed
q(t) - quarantined
figure 2.1 presents the schematic representation of the model.
a susceptible individual (state s), when becoming infected, first moves to the state e, to model the initial period when the infected individual is not yet infectious. next the cases progress to one of the infectious states i_d, i_u at the rates κσ and (1 − κ)σ, respectively. in general, moving through the i_d pathway concerns those individuals who (independently of quarantine) would meet the testing criteria, as relevant to the local testing policy, e.g. testing of people with noticeable symptoms. we shall emphasize that from the point of view of the analysis of the spread of infection, the quantity i_u should be regarded rather as unrecognized infections, not necessarily asymptomatic or mild. with this interpretation the value of κ can be influenced by the intensity of testing. the creation of state e is via i_d and i_u with transmission rates β_d and β_u, respectively, normalized to the total population size n = s + e + i_d + i_u + r_d + r_u + q, which is assumed to be constant in time; births and deaths are neglected. the transition parameter σ is assumed identical for both groups, relating to the time between infection and becoming infectious. the infectious individuals then move to the state r_d, which is the state of being diagnosed and isolated (and later recovered or deceased), with the rate γ_d corresponding to the observed time between onset and diagnosis. on the other hand, r_u contains people who spontaneously recovered with rate γ_u. our model includes an additional state of being quarantined (q). to mimic the situation of contact tracing, individuals can be put in quarantine from the state s (uninfected contacts) or the state e (infected contacts). these individuals stay in the quarantine for a predefined time period t. we assume that the number of people who will be quarantined depends on the number of individuals who are diagnosed. the average number of individuals quarantined per each diagnosed person is denoted as α. however, as the epidemic progresses, some of the contacts could be identified among people who were already infected but were not previously diagnosed, i.e. the state r_u. we note that moving individuals between the states q and r_u has no effect on the epidemic dynamics, therefore we assume that only individuals from s and e are quarantined and we reduce the average number of people put on quarantine by the factor s(t)/(s(t)+r_u(t)). further, to acknowledge the capacity limits of the public health system to perform the contact tracing, we introduce a quantity k_max, describing the maximum number of people who can be put in quarantine during one time step. we also assume that among the quarantined a proportion θ is infected. after the quarantine, the infected part θk(t − t) goes to r_d and the rest (1 − θ)k(t − t) returns to s. taking all of the above into account, the model is described by the following seirq system:
$$
\begin{aligned}
\frac{dS}{dt} &= -\frac{\beta_d I_d + \beta_u I_u}{N}\,S - (1-\theta)\,K(t) + (1-\theta)\,K(t-T),\\
\frac{dE}{dt} &= \frac{\beta_d I_d + \beta_u I_u}{N}\,S - \sigma E - \theta\,K(t),\\
\frac{dI_d}{dt} &= \kappa\sigma E - \gamma_d I_d,\\
\frac{dI_u}{dt} &= (1-\kappa)\sigma E - \gamma_u I_u,\\
\frac{dR_d}{dt} &= \gamma_d I_d + \theta\,K(t-T),\\
\frac{dR_u}{dt} &= \gamma_u I_u,\\
\frac{dQ}{dt} &= K(t) - K(t-T),
\end{aligned}
\tag{2.1}
$$
where k(t) = min{ s(t)/(s(t)+r_u(t)) · αγ_d i_d(t), k_max }, and α, β_d, β_u, γ_d, γ_u, θ, t ≥ 0. we assume that the parameters α, β_d, β_u, θ, γ_u and γ_d depend on the country- and time-specific public health interventions and may therefore change between time periods. for a proper interpretation of the equation for e we require that β_d ≥ θαγ_d, which ensures the positiveness of e.
2.2 basic reproductive number, critical transmission parameter β*.
based on the general theory of seir type models [23], we introduce the reproductive number
$$
\mathcal{R} = \kappa\,\frac{\beta_d - \theta\alpha\gamma_d}{\gamma_d} + (1-\kappa)\,\frac{\beta_u}{\gamma_u}. \tag{2.2}
$$
it determines the stability of the system: stability for r < 1 and instability for r > 1 (the decline or growth of the epidemic).
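a minimal python sketch of how system (2.1) can be advanced in time with a simple fixed 1-day forward-euler step is given below. all parameter values are placeholders chosen only so that the script runs; they are not the fitted values of this paper, and the delayed term k(t−t) is handled by keeping a history list of past k values.

```python
def simulate_seirq(beta_d, beta_u, sigma, gamma_d, gamma_u,
                   kappa, theta, alpha, T, k_max, N, E0, days):
    """Forward-Euler (1-day step) integration of the SEIRQ system (2.1). Illustrative sketch only."""
    S, E, Id, Iu, Rd, Ru, Q = N - E0, float(E0), 0.0, 0.0, 0.0, 0.0, 0.0
    k_history = []                                   # stores K(t) so the delayed term K(t-T) is available
    trajectory = []
    for t in range(days):
        K = min(S / (S + Ru) * alpha * gamma_d * Id, k_max) if S + Ru > 0 else 0.0
        K_back = k_history[t - T] if t >= T else 0.0     # cohort quarantined T days ago, released today
        infection = (beta_d * Id + beta_u * Iu) * S / N
        dS = -infection - (1 - theta) * K + (1 - theta) * K_back
        dE = infection - sigma * E - theta * K
        dId = kappa * sigma * E - gamma_d * Id
        dIu = (1 - kappa) * sigma * E - gamma_u * Iu
        dRd = gamma_d * Id + theta * K_back
        dRu = gamma_u * Iu
        dQ = K - K_back
        S, E, Id, Iu, Rd, Ru, Q = S + dS, E + dE, Id + dId, Iu + dIu, Rd + dRd, Ru + dRu, Q + dQ
        k_history.append(K)
        trajectory.append((t + 1, S, E, Id, Iu, Rd, Ru, Q))
    return trajectory

# placeholder parameters, chosen only so that the script runs; not the fitted values of this paper
out = simulate_seirq(beta_d=0.4, beta_u=0.4, sigma=1/4, gamma_d=1/5, gamma_u=1/7,
                     kappa=0.5, theta=0.006, alpha=75, T=14, k_max=50000,
                     N=38_000_000, E0=100, days=120)
print("cumulative diagnosed (r_d) after 120 days:", round(out[-1][5]))
```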
this quantity not only explains the importance of testing (in terms of κ) and quarantine (in terms of α), but also gives an indication of the optimal levels of testing and contact tracing. we underline that this formula works for the case when the capacity of the contact tracing has not been exceeded (k(t) < k_max). the details of the derivation of (2.2) are provided in the appendix, section a.4. we shall emphasize that the formal mathematical derivation holds for the case when i and e are small compared to s. therefore the complete dynamics of the nonlinear system (2.1) is not fully determined by (2.2). however, in the regime of epidemic suppression, which is the case of the covid-19 epidemic in poland, i and e are small compared to s and so the formula (2.2) reasonably prescribes the spreading of infection in the population. the critical value r = 1 defines the level of transmission which is admissible, taking into account the existing quarantine policy, in order to control the epidemic. as the level of transmission depends on the level of contacts, this provides information on the necessary level of social distancing measures. the formula (2.2) indicates that improving the contact tracing may compensate for relaxation of contact restrictions. the key quantity is θα. indeed, the system with the quarantine has the same stability properties as one without k, but with the new transmission rate β_d^new = β_d − θαγ_d. in order to guarantee the positiveness of e, β_d^new must be nonnegative. it generates the constraint
$$
\theta\alpha \le \frac{\beta_d}{\gamma_d}. \tag{2.3}
$$
the above condition also implies the theoretical maximal admissible level of quarantine. we define it by improving the targeting of the quarantine, i.e. by the highest possible level of θ:
$$
\theta_{\max} = \frac{\beta_d}{\alpha\gamma_d}. \tag{2.4}
$$
as long as the k_max threshold is not exceeded, increases in θ or in α play the same role at the level of linearization (small i, e). however, in general it is not the case, and for the purpose of our analysis we fix α. for our analysis we assume β_d = β_u = β. the reason is that both i_d and i_u contain a mixture of asymptomatic and symptomatic cases, and although there might be a difference we lack information to quantify it. then using formula (2.2) we compute critical values β*(κ, θ, α), defined as the value of β for which r = 1:
$$
\beta^*(\kappa,\theta,\alpha) = \frac{1 + \kappa\theta\alpha}{\dfrac{\kappa}{\gamma_d} + \dfrac{1-\kappa}{\gamma_u}}. \tag{2.5}
$$
it shows the upper bound on the transmission rate β which still guarantees the suppression of the epidemic. we shall omit the dependence on γ_d, γ_u as these are fixed in our case, and denote it briefly β*(κ, θ, α). in the case of the maximal admissible quarantine (2.4) we obtain
$$
\beta^*(\theta_{\max},\kappa) = \frac{\gamma_u}{1-\kappa},
$$
which can be regarded as a theoretical upper bound for β if we assume the "optimal admissible" quarantine for fixed κ, for which the epidemic could still be controlled. it must be kept in mind, though, that the condition (2.3) taken with equality means that we are able to efficiently isolate all persons infected by every diagnosed case, and is therefore unrealistic. the resulting β*(θ_max, κ) should therefore be considered as a theoretical limit for the transmission rate.
all simulations are performed using gnu octave (https://www.gnu.org/software/octave/). the underlying tool for all computations is a direct finite difference solver with a 1-day time step.
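the reproduction number (2.2) and the critical transmission rate (2.5) are simple enough to evaluate directly; the helper below does exactly that, assuming β_d = β_u = β as in the analysis above. the numerical values plugged in at the bottom are placeholders for illustration, not the paper's estimates.

```python
def reproduction_number(beta, gamma_d, gamma_u, kappa, theta, alpha):
    """R from (2.2), assuming beta_d = beta_u = beta and S approximately equal to N."""
    return kappa * (beta - theta * alpha * gamma_d) / gamma_d + (1 - kappa) * beta / gamma_u

def critical_beta(gamma_d, gamma_u, kappa, theta, alpha):
    """beta* from (2.5): the transmission rate at which R equals 1."""
    return (1 + kappa * theta * alpha) / (kappa / gamma_d + (1 - kappa) / gamma_u)

# placeholder parameter values for illustration
gd, gu, kappa, theta, alpha = 1/5, 1/7, 0.5, 0.006, 75
print(reproduction_number(0.3, gd, gu, kappa, theta, alpha))
print(critical_beta(gd, gu, kappa, theta, alpha))
# consistency check: R evaluated at beta* should be 1
print(reproduction_number(critical_beta(gd, gu, kappa, theta, alpha), gd, gu, kappa, theta, alpha))
```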
the crucial assumption behind our approach is that the parameter β changes twice during the period of analysis. the reason is that we can distinguish two important time points in the development of epidemic in poland. the first is initial restrictions including school closure effective march 12, which was accompanied with restrictions on other social activities. as we do not take migration into account in our model, we assume that the effect of border closing is reflected in β. the second turning point was a lockdown announced on march 25. for simplicity we comprise the effect of above measures in two jump changes in β in t ∈ {t 1 , t 2 } and choose t 1 = 14, t 2 = 28. with t = 1 corresponding to march 3 it means small delay with respect to the above dates which can be justified by the fact that new cases are reported with a delay of approximately 2 days. choice of fixed parameters (tab. 2.1). we assume that the parameters σ, γ u represent the natural course of infection and their values could be based on the existing literature. the parameter σ describes the rate of transition from non-infectious incubation state e into the infectious states i d or i u . the value of σ takes into account the incubation period and presymptomatic infectivity period. γ u relates to the period of infectivity, which we select based on the research regarding milder cases, assuming that serious cases are likely diagnosed. further, κ is a parameter related both to the proportion of asymptomatic infection and the local testing strategies. since the literature findings provide different possible figures, for κ we examine three different scenarios. parameters γ d , θ and α are fixed in our model for the purpose of data fitting, but informed by available data. one of the scenarios of future dynamics of the epidemic (section 3.3) considers possible increase of θ. parameter γ d was estimated basing on time from onset to diagnosis for diagnosed cases, and θ as rate of diagnosed among quarantined. furthermore we fix the parameter α by comparing the number of quarantined people obtained in simulations with actual data. the capacity level of public health services is set in terms of possible number of quarantined per day k max , as double the level observed so far. detailed justification of the values of fixed parameters collected in the following optimization algorithm. in order to fit the values β 1 , β 2 , β 3 we use a standard gradient descent algorithm. the error function is defined as mean square difference between the cumulative number of diagnoses and the r d (t) predicted from the model. for the initial values the error function is optimized only for a limited number of possible conditions, as these mostly impact β 1 , which is less relevant for future predictions. to estimate confidence intervals we use a method of parametric bootstrap. the optimisation procedures are described in the appendix, section a.1, where we also show precise errors of data fitting. dataset. the data series contains cumulative number of confirmed cases of covid-19 in poland from march 3 (first confirmed case in poland) till april 26, which amounts to 54 observations. the data are taken from official communications of the ministry of health. as explained in table 2.1 and appendix (section a.2 additional data sources were used for choosing θ, α and γ d . in table 3 .1 we show estimated values of β i , where i = 1, 2, 3 correspond to the time intervals when different measures were in place, and the r for the third time interval. 
given the social distancing measures in place in early april 2020, as well as the quarantine levels, the reproductive number was below 1, independently of the value of κ, which relates to testing effectiveness. figure 3.1 shows the fit of the models assuming different levels of κ. a good fit is found for all three models, although the predictions start to differ in the middle-term prognosis. we proceed with predictions assuming that the restrictions are continued, i.e. keeping β = β_3 (note that the estimated β_3 is different for each κ). we calculate the epidemic duration (t_max), the peak number of infected (i_d^max, i_u^max) and the final size of the epidemic (r_d(t_max), r_u(t_max)). in order to show the influence of quarantine we compare the situation with quarantine, keeping the same θ, α, to the situation without quarantine, setting αθ = 0. the results of the development of the epidemic during the first 120 days are shown in the corresponding figures. for κ = 0.2 the difference between the scenarios with and without quarantine is visible but not striking. however, for κ = 0.5 and κ = 0.8 a bifurcation in the number of new cases occurs around t = 40, leading to a huge difference in the total duration of the epidemic and the total number of cases. these values are summarized in table 3.2. we note that, given the epidemic state in the first half of april 2020, for all values of κ the model predicts epidemic extinction both with quarantine and without quarantine. however, since the epidemic is very near to the endemic state, the predicted duration is very long, especially if no quarantine is applied. using the formula (2.5) we can compute the critical values β*. in table 3.3 we show the values of β*(κ, 0.006, 75) and for convenience also recall the estimated values of β_3 and r, already listed in table 3.1. moreover we compute β*(κ, 0, 0), i.e. without quarantine, and show the values of r for our estimated values of β_3 and the same γ_d, γ_u but without quarantine. the estimated values of β_3 (table 3.3) are, for all cases of κ, only slightly below β*. eliminating the quarantine, for the estimated values of β_3, we have different situations depending on the actual value of κ. in the case κ = 0.2, so assuming that currently only 20% of infections are diagnosed, the low values of r are due to low β_3 rather than to the effect of quarantine (controlling the epidemic by social contact restrictions). in effect, even if we remove the quarantine we still have r < 1, but very close to 1. on the other hand, if κ = 0.5 or κ = 0.8 we estimate higher β_3, which corresponds to the situation of controlling the epidemic by extensive testing and quarantine. for these cases, if we remove the quarantine, we end up with r > 1. the quantity β_3 − θαγ_d represents the effective transmission rate due to diagnosed cases. in particular it shows by how much the transmission could be reduced by improved contact tracing (θα) and faster diagnosis (γ_d). these results confirm that the higher the ratio of undiagnosed infections, the weaker the influence of quarantine. in the next section we verify these results numerically. our second goal is to simulate the loosening of restrictions. in particular we want to verify numerically the critical thresholds β* listed in table 3.3. for this purpose we assume that at t = 60 we change β. for each value of κ we consider 3 scenarios: (a) current-level quarantine, i.e.
quarantine parameters θ = 0.006, α = 75 are maintained; (b) no quarantine is applied starting from t = 60; (c) the maximal admissible quarantine is applied, meaning that θ_max = β/(αγ_d) (see (2.2)); in this case α = 75. as long as the limit k_max is not reached there is no difference whether we increase α or θ; the decisive parameter is αθ. increasing α would lead to reaching k = k_max earlier and hence to worse outcomes. the results confirm that around β* a rapid increase in the total number of infected occurs, coinciding with a peak in the total epidemic duration. thus the numerical computations confirm that the critical values β* calculated for the linear approximation in section 2.2 are adequate, with a small bias towards lower values. the case κ = 0.2 shows that the influence of quarantine is not high, even for the maximal admissible case, when we are able to efficiently isolate all persons infected by every diagnosed individual. a striking feature in the behaviour of the total number of infected is the jumps at certain critical values of β observed for κ = 0.5 and κ = 0.8, both in the case θ = 0.006 and θ = θ_max. the values of r_d and r_u before and after these qualitative changes are summarized in table 3.4. a closer investigation for these values of β shows that in all 4 cases the jump occurs for the first value of β for which the limit number of quarantined, k_max = 50000, is reached. notice that immediately after passing the threshold the values become very close to those without quarantine. we propose a simple seir-type model (seirq), which includes the effects of testing and contact tracing. the model formulation allows us to calculate an interpretable formula for the reproductive number r (2.2). as is typical for this class of models, r depends on the transmission parameter β. increasing β, corresponding e.g. to a higher frequency of social contacts, increases r. decreasing β, for example as a consequence of widespread use of face masks, has the opposite effect. on the other hand, γ_d reflects the time to diagnosis and the formula indicates that more rapid diagnosis is associated with lower r. in addition, our model offers a clear interpretation of the quarantine effect. the transmission rate due to diagnosed cases, β_d, is decreased by the factor θαγ_d, indicating that both the number of quarantined per diagnosed individual (α) and proper targeting of the quarantine (the infection rate among the quarantined, θ) equally contribute to this factor. also the parameter related to testing, the delay in diagnosis γ_d^(-1), plays a similar role. this quantifies the potential of a wide range of interventions to improve testing and contact tracing, as outlined e.g. in the ecdc recommendations [31]. in particular, as the number of people put in quarantine per each case and the infection rate among the quarantined impact r in a similar fashion, our results support the recommendations to focus on the high risk contacts when the resources do not allow following all contacts. our model takes into consideration only effective contact tracing, i.e. the situation when the infected contacts are identified and put in quarantine before they become infectious. people who are identified later would be modelled as passing through one of the i states to the r states. this means that the number of quarantined in our model can also be increased by faster contact tracing. the timely identification of contacts may be a significant challenge in the quarantine approach given that the incubation time can be as short as 2 days in 25% of cases [3].
as mentioned by other authors [28], the delays in manual contact tracing are usually at least 3 days, and under such circumstances contact tracing and quarantine alone may be insufficient to control the epidemic. this could be improved with digital contact tracing. notably, the mixed contact tracing strategies implemented in south korea indeed helped to control the epidemic at the early stages [32]. the use of "smart contact tracing" with mobile phone location data and administrative databases was also key to the rapid identification and self-quarantine of contacts in taiwan [33], and the implementation of such a strategy helped singapore to control the epidemic without major disruptions of social activities [34]. we note that the quarantine effect relates only to transmission due to diagnosed cases. as expected, in order to control the epidemic the transmission due to undiagnosed cases has to be negligible. this can be controlled by general measures such as lockdown, which universally decrease the frequency of social contacts and are therefore likely to reduce β_u. in our model the part of r representing transmission due to undiagnosed cases is scaled by (1 − κ), with κ the parameter relating to the efficiency of the testing system. again, the examples of singapore as well as the italian village of vo'euganeo show that widespread testing complementing efficient contact tracing was essential to suppress the epidemic. testing unrelated to epidemiological links decreases the (1 − κ) factor, thus making the factors impacting transmission due to diagnosed cases, such as quarantine, more powerful in decreasing r. further, our model allows us to study the situation in which the contact tracing capacities are exceeded. in this situation the epidemic is likely to quickly develop to the levels observed without quarantine. it is therefore quite crucial to implement an aggressive contact tracing system when the epidemic is still at low levels and it is possible to bring the epidemic to the suppression phase. we demonstrate the high impact of contact tracing and quarantine on the observed numbers of cases in poland. this effect was coupled with a substantial reduction in the transmission parameter β resulting from social contact restrictions. depending on the scenario, β decreased by 76% to 84%, bringing r below 1. the estimated effect of the quarantine in poland would depend on which of the considered scenarios regarding testing efficiency is the most relevant to our situation. in our model the quarantine is estimated to be the most effective for the scenario in which most of the cases are diagnosed (κ = 0.8). testing strategies that comprise testing of all individuals with symptoms of respiratory illness could theoretically identify up to 82% of infected individuals, assuming they would all present to medical care. this could be coupled with random screening of high risk individuals, e.g. health care workers, or - in case of high incidence - even random screening of the entire community, to achieve a κ of the order of 0.8. the polish clinical recommendations specifically mention only testing all individuals with severe infections [35]. in addition, testing is provided to health care workers. the severe course corresponds to approximately 18% of all infections [3]. therefore, the κ = 0.8 scenario is unlikely to be realistic in poland. we believe that the plausible current κ in our country lies between 0.2 and 0.5.
for these scenarios the model shows that the control of the epidemic is largely achieved through the suppression of β. in case of relaxation of social contact restrictions, the efforts should be focused on increasing the level of testing, in order to decrease the proportion of undiagnosed cases, as well as on maintaining or increasing the effectiveness of quarantine. for smaller κ, even substantially increasing the effectiveness of quarantine does not allow going back to the level of social contacts from before the epidemic (β_1). finally, the contact tracing effort was manageable in poland due to the relatively small number of cases. should the caseload increase substantially, longer delays in contact tracing would occur, which can substantially decrease the effects of quarantine [27, 28]. limitations and future directions of research. we do not consider the likely reduced transmission from undiagnosed cases, who are more likely to be asymptomatic or paucisymptomatic (β_u < β_d). the reduction factor for the infectiousness of asymptomatic cases is still under investigation. one study found a 60-fold lower viral load in asymptomatic cases [29], but another estimated the transmissibility reduction at 50% [6]. moreover, we did not have sufficient data to include this additional parameter. we calibrated our model only to diagnosed cases, which were officially available. calibration to mortality data is another approach, successfully implemented e.g. in [25], that potentially removes the bias due to different testing policies. as there were relatively few fatalities in poland and little data on clinical progression, we decided on a simplified model without explicit modelling of the outcomes. furthermore, we did not consider sub-optimal adherence to quarantine. it is likely that some individuals would not fully comply with strict quarantine rules; however, only anecdotal evidence for such a phenomenon is available at this time. in our model it would decrease the effective αθ, which was chosen to fit the observed number of people put in quarantine. finally, the analysis of r is suitable for a small epidemic size, when s ≈ n. for other cases the results are still useful, but the approximation may be biased, as we have shown for β*. due to the limited available data and the policy changes, we could not determine which κ scenario is the most appropriate. in conclusion, we present a simple model which allows us to understand the effects of testing, contact tracing and quarantining of contacts. we apply the model to the data in poland and we show that, despite a substantial impact of contact tracing and quarantine, it is unlikely that control of the epidemic could be achieved without any reduction of social contacts. a.1 optimization algorithm. in order to fit the values β_1, β_2, β_3 we use a standard gradient descent algorithm. namely, we define the error function as error(β_1, β_2, β_3) = (1/54) Σ_{t=1}^{54} (r_d(κ, t) − data(t))^2, where {r_d(κ, t)}_{t=1}^{54} is the vector of computed values of r_d and {data(t)}_{t=1}^{54} is the vector of data (cumulative number of confirmed cases). at each step we approximate the gradient of the error function with respect to β_1, β_2, β_3 by differential quotients and move in the direction opposite to the gradient. the algorithm shows good performance provided we start sufficiently close to the minimum, which is not difficult to ensure in our case. it remains to choose the initial data.
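before turning to the initial data, the fitting loop just described can be sketched as follows. this is a python illustration under stated assumptions, not the authors' code: `simulate_cumulative_diagnoses` is a hypothetical stand-in for running the seirq model and returning r_d(t) for t = 1, ..., 54 given (β_1, β_2, β_3).

```python
import numpy as np

# minimal sketch of the finite-difference gradient descent described above.
def fit_betas(data, simulate_cumulative_diagnoses, beta0,
              lr=1e-3, eps=1e-4, n_steps=500):
    betas = np.asarray(beta0, dtype=float)        # initial guess (beta1, beta2, beta3)

    def error(b):
        r_d = simulate_cumulative_diagnoses(b)    # model prediction, length 54
        return np.mean((np.asarray(r_d) - np.asarray(data)) ** 2)  # mean square difference

    for _ in range(n_steps):
        grad = np.zeros_like(betas)
        for i in range(len(betas)):               # differential quotients per coordinate
            plus, minus = betas.copy(), betas.copy()
            plus[i] += eps
            minus[i] -= eps
            grad[i] = (error(plus) - error(minus)) / (2 * eps)
        betas -= lr * grad                        # move against the gradient
    return betas
```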
a closer look at the results of the simulations shows that the choice of initial data mostly influences the fitting at the beginning of the period under consideration, and hence the value of β_1, while for the analysis of future scenarios β_3 is the most important. taking all this into account, we do not strive for a sharp optimization of the data fitting with respect to the initial data and restrict ourselves to the following heuristic choice. it is natural to assume i_u(0) = ((1 − κ)/κ) i_d(0). concerning the choice of e(0), we assume it in the form e(0) = m(i_d(0) + i_u(0)). we set initial values i_d(0) ∈ {10, 20, 30} and for each value we set i_u(0) according to the above formula and three values of e(0) corresponding to m ∈ {2, 3, 4}. for each of these 9 combinations we run the optimization algorithm looking for the best fit of β_i, i = 1, ..., 3. we repeated this approach for κ ∈ {0.2, 0.5, 0.8}. it turns out that for all values of κ the best fit was obtained for i_d(0) = 20 and m = 2. a more careful analysis around i_d(0) = 20 did not improve the quality of the fitting; therefore we fix i_d(0) = 20, i_u(0) = 20(1 − κ)/κ and e(0) = 2(i_d(0) + i_u(0)). a.2 choice of fixed parameters 1. the parameter σ describes the rate of transition from the non-infectious incubation state e into the infectious states i_d or i_u. the median incubation time from exposure till the onset of symptoms was estimated at 4 to 5 days [3, 4, 5]. however, there exists evidence that infectivity typically precedes symptoms by 1 to 3 days [7, 9, 10]. a modelling study identified the rate of transition between the non-infectious and infectious states at 1/3.69 [6], which corresponds to an average time lag of 3.69 days until the case becomes infectious. 2. the parameter γ_u represents the period of infectivity during the natural course of the disease. we discuss the period of infectivity, especially as applied to mild cases. the median duration of viral shedding was estimated among 113 chinese hospitalized patients: overall it was 17 days, but it was shorter among cases with a milder clinical course [16]. a study among 23 patients in hong kong confirmed viral shedding longer than 20 days among a third of patients, although the peak level of shedding was noted during the first week of infection [17]. in the mission report from china, the who reports viral shedding in mild and moderate cases to last 7-12 days from symptom onset. among younger and asymptomatic or mild cases the shedding may be shorter: in a study among 24 initially asymptomatic youngsters the median duration was 9.5 days [11]. 3. the value of κ generally depends on the testing policy. however, recommended testing policies often rely on the presence of respiratory symptoms. this is also the case in poland. it has been observed that some infected people never develop symptoms, although the precise rate of such truly asymptomatic infections is still under investigation. some studies may be biased by too short a follow-up time. a small study among residents of a long-term care skilled nursing facility found that even though more than half of the individuals with confirmed infection were asymptomatic at the time of the test, the majority of them subsequently developed symptoms. the proportion of people who remain asymptomatic may be higher among younger individuals [hu]. a study among japanese nationals repatriated from wuhan suggests the proportion of asymptomatic infections is about 30% [13].
an analysis among the passengers of the diamond princess ship, where a covid-19 outbreak occurred, taking into account this delayed onset of symptoms, estimated the proportion of asymptomatic infections to be about 18%, even though almost 50% were asymptomatic on the initial test. in addition, the large scale screening implemented in the italian village of vo'euganeo indicated that 50% to 75% of infected individuals did not report symptoms [14]. similarly, in population screening in iceland 50% were asymptomatic at the time of screening [15]. it may be stipulated that some of the people diagnosed through screening developed symptoms later, consistent with the findings from the diamond princess study. on the other hand, a sizable proportion of infected people, especially at younger ages, experience only mild symptoms, for which they may not seek medical attention. in the study of li [6], the proportion of undocumented cases was estimated at 86%. 4. the parameter γ_d was estimated based on a sample of case-based data available in routine surveillance, by fitting a gamma distribution to the time from onset to diagnosis, for cases who were not in quarantine before diagnosis. the time from onset to diagnosis was estimated based on surveillance data available in the epidemiological reports registration system for covid-19, as of 28.04.2020. the system collects epidemiological data on cases diagnosed in poland and is operated by local public health departments. all cases are eventually entered into the database; however, substantial reporting delays are noted. there were altogether 4976 cases registered in the system, including 1995 (40.1%) who did not have symptoms at the time of diagnosis. a plausible onset date and a plausible diagnosis date were available for 2884 cases (96.7% of the 2981 cases that were not asymptomatic). the gamma distribution was fitted by maximum likelihood to cases who were not diagnosed in quarantine. the observed and fitted distributions are shown below (figure a.1). we next fitted a gamma-regression model with the week of diagnosis as an explanatory variable. we found no significant trend in time. we therefore adopted the average time from onset to diagnosis to be 4.6 days, and taking into account the probability of asymptomatic spread we assumed the parameter γ_d to be 1/5.5. 5. next we base θ on available data. we calculate the prevalence of infection among quarantined individuals, according to data published by the chief sanitary inspectorate on the number of cases diagnosed among quarantined people and the total number of quarantined. we used a series of data from 8.04 to 20.04 to estimate a likely value of θ. we chose this time period due to data availability. the data are shown in figure a.2. during this time period there was an increasing trend in the proportion of diagnosed, from 0.5% to 0.8%. we presume that this parameter could change with changing procedures of contact tracing and testing. however, since no detailed data were available, for the modelling purposes we chose the simplifying assumption that θ is stable (i.e. we always take a similar group of contacts under quarantine), selecting an average value of 0.6%. this proportion could also be viewed as the attack rate among the contacts of cases. the proportion in poland is in line with what was observed in korea, where the estimated attack rate was 0.55% [32], although the household attack rate was higher (>19%) in other studies [20]. 6. furthermore, we fix the parameter α. here we make another simplification, assuming this parameter to be constant.
the main difficulty is a lack of precise data concerning the number of newly quarantined people per day, distinguishing between the reasons for quarantine (travel related or contact tracing related). at the beginning of the epidemic in poland the average number of people quarantined following one diagnosed case was definitely higher. moreover, people coming back from abroad were subject to obligatory quarantine starting from march 16 and constituted a considerable part of the quarantined in the second half of march and the beginning of april. in particular, around 54 000 polish citizens staying abroad came back within a special program of charter flights operated by polish airlines, which ended on april 5. we can assume that after this date the ratio of people coming from abroad among all people subject to quarantine was negligible. as our model does not take migration into account, we have to take into account only quarantine resulting from contact tracing. for the above reasons, for fitting α we restrict our analysis to a period of 2 weeks in april. assuming already θ = 0.006, we then choose α minimizing the square error between the number of quarantined from the data and the computed k(t). this way we obtain α = 75. taking these into consideration, we set the values of the parameters collected in table 2.1. bootstrap. to estimate confidence intervals we use a method of parametric bootstrap. we generate m = 200 sequences of perturbed data assuming that for each time t ∈ {1, ..., 54} the increment of r (i.e. the daily number of new diagnoses) is a random number from a poisson distribution with mean value equal to the increment of the observed data. model parameters are also perturbed, see below. for each series of perturbed data we estimate the values of β_i and take the estimated confidence intervals as appropriate quantiles of the obtained sets. in order to estimate confidence intervals for r_d(t), shown in panel a of figure 3.1, we proceed as follows. for each sequence of perturbed data we compute the fitted r_d(t). this way we obtain a set of curves of r_d(t), from which the appropriate quantiles are taken. distribution of parameters. following other authors [30] as well as experimental data, for the uncertainty analysis we used the following distributions of the parameters. 1. 1/γ_d ∼ gamma(a_1, b_1), with shape parameter a_1 = 1.05 and scale parameter b_1 = 5.23; 2. 1/γ_u ∼ gamma(a_2, b_2), with shape parameter a_2 = 2 and scale parameter b_2 = 5; 3. 1/σ ∼ gamma(a_3, b_3), with shape parameter a_3 = 2 and scale parameter b_3 = 1.75; 4. α ∼ poisson(α_0), where we assume a constant α_0 = 75; 5. θ is not sampled for the uncertainty analysis; as the results depend on the quantity αθ, we rely on the distribution of α. we take n = 200 (the approximate average of the daily number of diagnosed cases from the data) and approximate the mean value of n samples from the gamma distribution using the central limit theorem. based on the classical approach to epidemiological models, we address the basic question concerning the propagation of the disease: namely, how many persons are infected by one infectious individual, a quantity which is usually called the reproductive number, r. in order to compute this quantity we use the approach from [23]. we look at the system assuming s ∼ n and that e, i_d, i_u are close to zero, and then consider the linearization (a.5) of the system around this state. one then deduces (see [23]) that if we define r = max{eigenvalues of t σ^(-1)}, then the system is stable for r < 1 and unstable for r > 1.
stability of system (a.5) means that the whole vector (e, i_d, i_u) tends to zero; it follows that the main system (2.1) also tends to the zero solution for (e, i_d, i_u). instability implies that for "almost all" small data the vector (e, i_d, i_u) grows in time (exponentially fast), causing the nonlinear system to also evolve far away from the trivial state, i.e. e, i_d, i_u rapidly grow. by (a.4), the stability of our system is hence determined by the factor r defined in (2.2). to make a final comment, let us note that as the pandemic spreads and r_d, r_u grow, the above analysis becomes less reliable. recall that β is normalized by n, so when s/n is not close to one the analysis of stability becomes more complex. this behavior is illustrated by figures 3.3, 3.4 and 3.5, where we observe deviations from the predictions based on (2.2). covid-19) in the eu/eea and the uk -ninth update an investigation of transmission control measures during the first 50 days of the covid-19 epidemic in china clinical characteristics of coronavirus disease 2019 in china. the new england journal of medicine early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia. the new england journal of medicine the incubation period of coronavirus disease 2019 (covid-19) from publicly reported confirmed cases: estimation and application. annals of internal medicine substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (sars-cov2) presymptomatic transmission of sars-cov-2 -singapore public health -seattle & king county; cdc covid-19 investigation team. asymptomatic and presymptomatic sars-cov-2 infections in residents of a long-term care skilled nursing facility potential presymptomatic transmission of sars-cov-2 rapid asymptomatic transmission of covid-19 during the incubation period demonstrating strong infectivity in a cluster of youngsters aged 16-23 years outside wuhan and characteristics of young patients with covid-19: a prospective contact-tracing study clinical characteristics of 24 asymptomatic infections with covid-19 screened among close contacts in nanjing estimating the asymptomatic proportion of coronavirus disease 2019 (covid-19) cases on board the diamond princess cruise ship estimation of the asymptomatic ratio of novel coronavirus infections (covid-19) covid-19: identifying and isolating asymptomatic people helped eliminate virus in italian village iceland lab's testing suggests 50% of coronavirus cases have no symptoms factors associated with prolonged viral rna shedding in patients with covid-19 temporal profiles of viral load in posterior oropharyngeal saliva samples and serum antibody responses during infection by sars-cov-2: an observational cohort study geneva:who; 2020 epidemiology and case management team contact investigations of the first 30 cases in the republic of korea household secondary attack rate of covid-19 and associated determinants medrxiv 2020.04.11 the construction of next-generation matrices for compartmental epidemic models public health interventions to mitigate early spread of sars-cov-2 in poland estimating the number of infections and the impact of non-pharmaceutical interventions on covid-19 in 11 european countries. imperial college london modelling the covid-19 epidemic and implementation of population-wide interventions in italy feasibility of controlling covid-19 outbreaks by isolation of cases and contacts quantifying sars-cov-2 transmission suggests epidemic control with digital contact
tracing viral dynamics in mild and severe cases of covid-19 centre for mathematical modelling of infectious diseases covid-19 working group. early dynamics of transmission and control of covid-19: a mathematical modelling study contact tracing for covid-19: current evidence, options for scale-up and an assessment of resources needed epidemiology and case management team contact investigations of the first 30 cases in the republic of korea. osong public health res perspect containing covid-19 among 627,386 persons in contact with the diamond princess cruise ship passengers who disembarked in taiwan: big data analytics evaluation of the effectiveness of surveillance and containment measures for the first 100 patients with covid-19 in singapore acknowledgments. this work was partially supported by the polish national science centre's grant no. 2018/30/m/st1/00340 (harmonia). key: cord-221717-h1h2vd3r authors: scabini, leonardo f. s.; ribas, lucas c.; neiva, mariane b.; junior, altamir g. b.; farf'an, alex j. f.; bruno, odemir m. title: social interaction layers in complex networks for the dynamical epidemic modeling of covid-19 in brazil date: 2020-05-16 journal: nan doi: nan sha: doc_id: 221717 cord_uid: h1h2vd3r we are currently living in a state of uncertainty due to the pandemic caused by the sars-cov-2 virus. there are several factors involved in the epidemic spreading, such as the individual characteristics of each city/country. the true shape of the epidemic dynamics is that of a large, complex system, as is the case for most social systems. in this context, complex networks are a great candidate to analyze these systems due to their ability to tackle structural and dynamical properties. therefore, this study presents a new approach to model the covid-19 epidemic using a multi-layer complex network, where nodes represent people, edges are social contacts, and layers represent different social activities. the model improves on the traditional sir and is applied to study the brazilian epidemic by analyzing possible future actions and their consequences. the network is characterized using statistics of infection, death, and hospitalization time. to simulate isolation, social distancing, or precautionary measures, we remove layers and/or reduce the intensity of social contacts. results show that even under various optimistic assumptions, the current isolation levels in brazil may still lead to a critical scenario for the healthcare system and a considerable death toll (average of 149,000). if all activities return to normal, the epidemic growth may suffer a steep increase, and the demand for icu beds may surpass 3 times the country's capacity. this would surely lead to a catastrophic scenario, as our estimation reaches an average of 212,000 deaths even considering that all cases are effectively treated. increasing isolation (up to a lockdown) proves to be the best option to keep the situation within the healthcare system capacity, aside from ensuring a faster decrease of new case occurrences (months of difference) and a significantly smaller death toll (average of 87,000). although we have experienced several pandemics throughout history, covid-19 is the first major pandemic in the modern era. the last critical global epidemic occurred in 1918 and became known as the spanish flu. but, in 1918, the reality was quite different. scientific and medical knowledge was much more limited, making it difficult to fight the disease.
furthermore, the world was not globalized, the means of transport were not as agile as the current ones, and the population was much smaller. the 21st century is marked by globalization and an intricate and intense social network, which connects, in one way or another, everyone on the planet. the latter fact increases the danger that a local epidemic disease will rapidly evolve into a pandemic, like what happened in wuhan, china, and is now all over the world. the propagation and contagion of the sars-cov-2 virus occur by direct contact between individuals, through secretions, saliva, and especially by droplets expelled during breathing, speaking, coughing, or sneezing. the virus also spreads by indirect contact, when such secretions reach surfaces, food, and objects [41]. besides, infected people take a few days to manifest symptoms, which can be severe or as mild as a simple cold. there is even a large proportion of infected people who remain asymptomatic [37]. this makes it practically impossible to quickly identify the infected and apply effective measures to limit the spread of the disease. also, sars-cov-2 was discovered in december 2019, which makes it very recent in the face of the current epidemic. little is known about the covid-19 disease, which appears to be highly lethal, with no drugs to prevent or treat it. the concern is greater since direct (individual - individual) and indirect (individual - objects - individual) social relations are the means of spreading the disease. thus, the social interaction structure is the key to creating strategies and guiding health organizations and governments to take appropriate actions to combat the disease. one of the main concerns is overloading the health system. the first case in brazil was confirmed on february 26, a 61-year-old man who had traveled to the lombardy region in northern italy. now, in the middle of may, there are more than 200,000 cases and 14,000 deaths across all states of brazil [30]. the concern is even worse due to the country's social inequality: over 80% of the population relies solely on the public health system, and this distribution is not uniform. according to [11], there are only 9 hospital beds per 100,000 people in the north region, while the southeast accounts for 21 hospital beds. the treatment of severe cases requires the use of respirators/ventilation in intensive care units (icu), and if simultaneous infections occur there will be no beds to meet the demand and possibly a large number of victims. thus, it is urgent to develop models and analyses to try to predict the evolution of the virus. also, as noted in figure 1, brazil is running towards being the next epicenter of the pandemic. it has already exceeded the number of cases in important countries such as germany, china, japan, italy, iran, south korea, and france (the rates consider the population size of each country and are on a logarithmic scale). (figure 1 caption: relative number of confirmed covid-19 cases per country, data from [24]; it is possible to notice that brazil is surpassing countries such as italy, south korea, japan, and china, and is reaching the relative number of cases in the united kingdom and france; as of the date of this study, the united states is the epicenter of the pandemic.) since covid-19 presents a unique and unprecedented situation, this work proposes a specific model for the current pandemic.
based on the classic epidemic model sir, also extended to sid [35], siasd [9] and siqr [14], we propose a more realistic model to better represent the effects of the covid-19 disease by adding more infection states. the proposed approach also considers social structures and demographic data for the complex network modeling. each individual is represented as a node and edges represent the social interactions between them. the multi-layer structure is implemented by different edges representing specific social activities: home, work, transports, schools, religious activities, and random contacts. the probability of contagion is composed of a dynamic term, which depends on the circumstances of the social activity considered, and a global scaling factor β for controlling characteristics such as isolation, preventive measures, and social distancing. the proposed model can be used to analyze any society given sufficient demographic data, such as medium/big cities, countries, or regions. here we analyze in depth the brazilian data. the sir model is applied through the network using an agent model, and each iteration of the system is simulated using the 24-hour pattern, allowing us to understand the dynamics of the disease throughout the days. the results show the importance of the social distancing recommendations to flatten the curve of infected people over time. this is currently maybe the only way to avoid a collapse of the health system in the country. the paper is divided as follows: section 2 presents important concepts about complex networks, the sir model and its applications. section 3 explains our proposed approach, and sections 4 and 5 present the results, discussion, and conclusions of the work. created from a mixture of graph theory, physics, and statistics, complex networks (cn) are capable of analyzing not only the elements themselves but also their environment, to find patterns and obtain information about the dynamics of a system. as most natural structures are composed of connected elements, graphs are suitable for analyzing most real-world phenomena. over the past two decades researchers have been showing that many real networks do not present a random structure, and that their emergent patterns can be used to understand and characterize a model [8, 39]. complex network analysis has then been applied to sociology, physics, nanotechnology, neuroscience, biology, among other areas [12, 13]. to start with a formal definition, a graph g is a set {v, e} where v is composed of n vertices (also known as nodes or elements) {v_1, ..., v_n} and e is the set of edges e(v_i, v_j) (or connections) among its elements. edges represent the relationships between two elements, and their values can also represent the strength or weight of a connection if the network is weighted. usually, applications with complex networks consist of two main steps: i) transform the real structure into a complex network, and ii) analyze the model and extract its features or understand its dynamics. one natural phenomenon that has a straightforward connection to a complex network is society. people are connected due to several aspects, such as being members of a family, religious groups, co-workers, or members of the same school or faculty, among other social relationships. therefore cns have been widely employed for social network analysis [38]. extending from social interactions, epidemic spreading has also been studied by researchers in the last decades.
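as an illustration of this definition, a weighted undirected graph can be held in a simple adjacency map, as in the python sketch below; the class and method names are illustrative only and do not come from the paper.

```python
# minimal sketch of an undirected weighted graph: nodes are people and edge
# weights represent the strength of a social contact (later, the infection
# probability of that contact).
class SocialNetwork:
    def __init__(self, n):
        self.n = n
        self.adj = [dict() for _ in range(n)]   # adj[i][j] = weight of edge (i, j)

    def add_edge(self, i, j, weight):
        self.adj[i][j] = weight                 # undirected: store both directions
        self.adj[j][i] = weight

    def neighbors(self, i):
        return self.adj[i].items()              # pairs of (neighbor, weight)
```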
in this context, one of the best known and most widely used epidemic models for infectious diseases is the susceptible-infected-recovered (sir) model, which is composed of three categories of individuals [4, 7]: • susceptible: the ones who are not infected but can change their status to infected if in contact with a sick person, according to a probability β of contagion. • infected: the ones that have the disease. • recovered: usually, after some time, a person recovers from the illness and is not able to be infected again due to the immunity process (in this case, this is an assumption of the model); the recovery of infected people occurs with a probability γ. also, the model can be described as ds/dt = −βsi, di/dt = βsi − γi, dr/dt = γi, (2) where s, i and r represent the ratios of susceptible, infected and recovered people in the population, respectively. usually, the problem is solved with differential equations; however, agent-based techniques on networks can represent the nature of the spread of viral diseases in a more complex scenario. if a network is fully connected, meaning that e(v_i, v_j) = 1 for all i, j with 0 < i, j ≤ n, equation 2 fits the structure perfectly. however, in the real world, not everyone is connected and people only contract the disease if in contact with an infected individual or object. this is why a complex network approximates the dynamics of real viruses and can help us to understand the disease behavior. there are various approaches to represent people and society as networks, a field named social network analysis. small world networks [32] can be used as a good approximation of the social connections. in 2000, moore [32] emphasized that the use of small-world networks, where the distance between two elements is usually small in comparison to the size of the population, showed a faster spread of the viral disease than classical diffusion methods. the approximation of real social phenomena was first explained by milgram [29]; the sociologist is the author of the well-known idea that there are up to six people separating any two individuals in the world, which reinforces the importance of analyzing epidemic spread from a graph view. in [33], the authors used small-world networks to simulate a sir model; however, they considered that every contact with an infected person resulted in contamination, which is not realistic. therefore, other researchers improved the model over the years, adding new constraints to approximate the simulation to real scenarios [15]. the sir model on networks works as follows: each node represents a person, the elements are connected according to some criteria, and the epidemic propagation happens through an agent-based approach. it starts from a random node, and at each time step nodes in the susceptible state can contract the disease from a linked infected node with a predefined probability. the same idea applies to the recovered category: after a certain period, a node can recover or be removed from the system (case of death) according to a certain probability. at the end of the evolution of a sir model applied to a network, the number of nodes in each sir category (susceptible, infected and recovered) can be calculated for each unit of time evaluated, and these data can then be compared with real information, for example, the hospital capabilities of the health system. also, the probability of infection and recovery can be adjusted over time considering social distancing, hygiene, and health conditions.
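a minimal agent-based sir update on such a network could look like the python sketch below. it is only an illustration of the generic scheme just described, not the model introduced in the next section; β and γ here are plain per-contact infection and recovery probabilities with illustrative values.

```python
import random

S, I, R = 0, 1, 2   # node states

# one time step (one day) of agent-based sir dynamics on a network given as an
# adjacency list; `state` holds one entry per node.
def sir_step(adjacency, state, beta=0.05, gamma=0.1):
    new_state = state.copy()
    for node, neighbors in enumerate(adjacency):
        if state[node] == S:
            # a susceptible node may be infected by each infected neighbor
            if any(state[nb] == I and random.random() < beta for nb in neighbors):
                new_state[node] = I
        elif state[node] == I and random.random() < gamma:
            new_state[node] = R        # infected nodes recover with probability gamma
    return new_state
```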
the proposed model extends the sir model to a more realistic scenario to achieve a better correlation with the covid-19 disease; since the model was created specifically for this disease, we name it complexvid-19. our strategy is based on a multi-layer network to represent the brazilian demography and its different characteristics of social relationships. each layer is composed of a set of groups representing how people interact in a given social context. in the network, a node represents a person and the edges are the social relationships between persons; they are also the means through which the disease can be transmitted. the virus spreads from an infected node to neighboring nodes at each iteration step (1 step = 1 day), according to a given infection probability. first, we describe how the layers are built based on social data from brazil. to define the different social relations, the first information needed is the age distribution, so that groups such as schools and work can be separated. we consider the brazilian age distribution in relation to the total population in 2019 [20]; details are given in table 1. this distribution is used to define an age group for each node, which is then used to determine its social activities through the creation of edges on different layers. in this approach, each network layer represents a kind of social relationship or activity that influences the transmission of covid-19. in this way, it is possible to evaluate and understand the impact of each social activity on the epidemic propagation. basically, in this work, a network layer is represented by a set of edges connecting some nodes. the following social activities are considered, composing 6 different layers: • home: in this layer, all people that live in the same residence are connected. • work: connects people that work in the same environment/company. • transport: this layer represents people that eventually take the same vehicle on public transport. • school: represents the social contact of students that belong to the same school class. • religious activities: connects people of the same group of some religious activity. • random: this layer represents activities of smaller intensity, such as indirect contact (through objects/surfaces). the first layer represents home interactions and is composed of a set of groups of varying size which are fully connected internally. these groups have no external connections, i.e. the network starts with disconnected components representing each family. to create each group, we consider the brazilian family size distribution for 2010 [19], the year with the most detailed information on family sizes, from 1 up to 14 members. we consider the probability of a family having sizes from 1 to 10; therefore the probability of a family having 10 persons is the sum over the larger sizes. the details of this distribution are given in table 1. the first layer is then created following the family size distribution and ensuring that each family has at least 1 adult. figure 2(a) shows the structure of such a layer built for a population of n = 100. a large fraction of the population in any country needs to work or practice some kind of economic activity, which also means interacting with other people. thus, work represents one of the most important factors of social relations, which is also very important in an epidemic scenario. to represent the work activity we propose a generic layer to connect people with ages from 18 to 59 years, i.e.
60% of the total population in the case of brazil. there is a wide variety of jobs and companies; therefore it is not trivial to create a connection rule that precisely reflects the real world. here, we consider an average scenario with random groups of sizes in [5, 30], uniformly distributed, and internally connected (such as the "home" layer). an example of this layer is shown in figure 2(b), using n = 100. although the nodes of a group are fully connected, the transmission of the virus depends directly on the edge weights, which we discuss in depth in section 3.1.1. collective transport is essential in most cities; however, it is one of the most crowded environments and plays an important role in an epidemic scenario, also due to the possibility of geographical spread, as vehicles are constantly moving around. the third layer we propose represents this kind of transport, such as public transport, and includes people that do not possess or use a personal vehicle. in brazil the number of people using public transport depends on the size of the city, with 64.98% in the capitals and 35.89% in other cities [23], with an average use of around 1.2 hours a day. here we consider the average of the population between the two cases (50%), randomly sampled, to participate in the "transports" layer. random groups are created with sizes between [10, 40], uniformly sampled, and the nodes within each group are fully connected. this variation of sizes is considered to represent cases such as low and high commuting times, and also the differences between vehicle sizes. other factors such as agglomeration and contact intensity are discussed in section 3.1.1. this layer is illustrated in figure 2(c). schools are another environment of great risk for epidemic propagation. the proposed layer considers the characteristics of schools from primary to high school and how children interact. we consider that all persons from 0 to 17 years (24% of the brazilian population) participate in this layer, and the size of the groups, which represents different school classes, varies uniformly in [16, 30] [21]. this layer is illustrated in figure 2(d). brazil is a very religious country, in which by 2010 only around 16.2% of the population claimed not to belong to any religion [18]. 64.6% claimed to be catholic and 22.2% to be protestant, summing up to 86.8% of the total population. here we consider that nearly half of these people (40% of the total population) actively participate in religious activities (weekly). the distribution of religious temple sizes is defined as a pareto distribution in the interval [10, 100]. taking into account that the wage distribution approximately follows a pareto distribution, we model the predominance of such buildings according to their capacity. the assumption here is that building costs (for churches, offices, homes, etc.) have a linear relationship to their internal capacity, and thus any given capacity has a power-law relationship with the number of such buildings within a region. we consider a random layer to represent all kinds of contacts not related to the specific previous social layers. this includes small direct contacts (person-to-person) and indirect contacts (individual - objects - individual) that may happen throughout the week, such as random friend/neighbor meetings, shopping, and other activities that involve surface contacts. for that, 5n new random edges are created, which can connect any node.
on the one hand, this yields an average of 5 random connections per node, each of which can randomly connect to any other node. on the other hand, the impact of this layer on the epidemic is smaller than that of the others, as it represents rapid contacts in comparison to the other activities described; thus its infection probability is smaller. in the following section we discuss the details concerning this aspect, deriving from the edge weights of each layer. in figure 2(f) an example of this layer is shown. the overall structure of social interactions in our model can be compared to the statistical analysis in [16]; however, here we introduce a more detailed model of social contacts with specific layers and connection patterns to better fit the particularities of a given country or city. unlike the traditional sir model, which consists of a single β term to describe the probability of infection, here we propose a dynamic strategy to better represent the real world and the new covid-19 disease. the idea is to incorporate important characteristics in the context of epidemic propagation according to each layer. firstly, for a given layer a fixed probability term is calculated to represent its characteristics of social interaction. for this, we considered 3 local terms: the contact time per week, the average number of people close to each other (agglomeration level), and the total number of people involved in the respective activity. considering two nodes v_x and v_y, connected in group j of layer i, their edge weight is then defined by w(v_x, v_y) = β (t_i / 168)(k_i / n_ij), where t_i represents the average weekly contact time on layer i, k_i is the agglomeration level (average number of nearby people) and n_ij represents the size of the group j in which the nodes participate on layer i. the first fraction represents the contact time normalized by the total time of the week (24 * 7 = 168), and the second fraction represents the proportion of nearby people relative to the total number of people in that activity group. this layer-specific part of the infection probability is multiplied by the β term, which scales the original probability. the β term is then the only parameter used to tune the infection rates for the entire network, and the other properties are specific to the studied society, based on its population characteristics and the nature of the activities (layers). table 2 shows these specific properties that we considered for the brazilian population, and how the infection probabilities are calculated for each layer. in the table, we have the following information: who or how many people are part of the activity represented by a layer (column "who", discussed in the previous section); the contact time according to the activity (column "time of contact"); the average number of people close to each other in each activity (column "nearest", representing the agglomeration level); the number of connections between people (column "group size"); and the probability of infection (column "probability"). • susceptible: traditional case, it means that a person can be infected at any time. this is the initial state of every node. • infected - asymptomatic: people who do not show any symptoms (30% of the total cases of infection) and remain contagious for up to 18 days (they may recover after 8 days). this is the most dangerous case for the epidemic spreading, because the person is not aware of their infection.
• infected - mild: 55% of the cases; present mild and moderate symptoms with no need for hospitalization, remain contagious for up to 20 days, and may recover after 10 days of infection. • infected - severe: 10% of the cases; present strong symptoms and need hospitalization, remain contagious for up to 25 days, have a death rate of 15% and may recover after 20 days. • infected - critical: 5% of the cases; present the worst symptoms and remain contagious for up to 25 days, need icu and ventilation, have a death rate of 50% and may recover after 21 days. • recovered: people who went through one of the infection cases and overcame the disease, ceasing to contaminate and supposedly becoming immune. these nodes no longer interact with other nodes and are therefore removed from the network. • dead: people who went through severe or critical cases and eventually died. these nodes are also removed from the network. estimates for the proportion of asymptomatic cases vary from 18% (95% confidence interval [15.5%, 20.2%]) [31] to 34% (95% confidence interval [8.3%, 58.3%]) [17]. considering the confidence intervals, here we roughly approximate it to an average of 30% of the total number of infected cases. however, it is very difficult to study asymptomatic cases due to several reasons, such as the lack of available tests and the difficulty in identifying potential cases, which would include every person who had contact with known symptomatic cases. some studies indicate that asymptomatic cases may remain contagious for up to 25 days, with an incubation period of 19 days [6], but the viral load may be smaller at the end of the infection. here we take an optimistic approach, considering that they may recover (become immune and cease to contaminate) uniformly after 8 days of infection, up to around 18 days. as for the recovered nodes, we consider that people become immune or at least acquire a long-term resistance to the virus, up to a maximum of 300 days (the limit of our simulations). however, this should be taken cautiously, as these properties are not yet fully understood [26]. the infection grows through the contacts (edges) between infected and susceptible nodes, and the probability of being infected is the edge weight. if infection occurs, then one of the 4 infection cases is chosen based on the probabilities described above (30%, 55%, 10% and 5%). this distribution plays an important role in the structure and dynamics of the network. the node structure of asymptomatic cases does not change during the simulation, except for the time it takes to cease contamination and recover. it means that, as these persons are not aware of their contamination, they will keep acting normally on the network (according to the active layers and edge weights). their contagious time varies from 1 to 18 days after infection. concerning the other cases (mild, severe, and critical), we consider the incubation time of the virus, the recovery time, the contagion time, the death rates of each case, and the usual action taken by the infected person or by health professionals at hospitals. various works [5, 27, 28] point out that the average incubation period of covid-19 is around 5 days, but some cases may take much less or more time. the official who report [40] states that the average incubation time is around 5 to 6 days, with cases up to 14 days. the results in [27] show that the average shape of the incubation time follows a log-normal distribution (weibull distribution) with an average of 6.4 days and a standard deviation of 2.3 days.
in this context, we consider the day when an infected person begins to show symptoms by randomly sampling from this distribution (1000 repetitions), with cases varying from 2 to 14 days. for mild cases, the nodes are isolated at home, maintaining only the connections of the first layer, and only 20% of these cases are diagnosed. considering the ratio of diagnosed cases, patients who are asymptomatic or have mild symptoms of covid-19 may not seek health care, which leads to an underestimation of the burden of covid-19 [25]. moreover, our diagnosis rule is also based on the fact that ongoing testing in brazil is increasing more slowly than in most european countries and the usa (tests are being performed mostly on people that need hospitalization). if a given case is severe or critical, the patient goes to a hospital and is fully isolated, i.e. we remove all of its connections. this is a rather optimistic assumption, considering that these patients may still infect the hospital staff. concerning the time that patients usually stay in hospitalization/icu, the works [10, 44] point to an average of 14 days for all cases. for standard hospitalization, we considered a minimum of 6 days and a maximum of 16 days of stay, and for the icu/ventilation, a minimum of 7 and a maximum of 17 days of stay. the time of each case will depend on the day the symptoms start and the day of recovery/death. figure 3 illustrates all the infected states and mechanisms described here. this configuration results in an overall lethality of 4%. it is important to stress that here we consider a maximum of 25 days of infection time, which is the time frame based on most studies we have seen so far in the literature. we are still at the beginning of the pandemic and a better characterization of the long-term impact is very difficult. nonetheless, the available information allows us to represent the most obvious features of the sars-cov-2 virus and to evaluate its main impacts on society. to simulate the reduction or increase of social distancing/quarantine, we remove/include some layers of the network, or change their edge weights. similarly to the approach in [16] to increase home contact when in quarantine, we increase the home layer edge weights by 20% for each removed layer. to balance that, we considered a smaller number of hours of contact in the base calculation for the home layer (3 hours a day), also taking into consideration that this layer has full contact between people of the same family. when the home contacts are increased according to our approach of layer removal, the time/intensity of contacts may increase up to double its base value. for each experiment with the proposed model, we consider the average and standard deviation (error) of 100 random repetitions to extract statistics of infection, death, and hospitalization time. due to the random nature of these networks, it is possible that extreme cases occur within the repetitions, i.e. when the infection starts at a node that is not capable of further propagation, leading the epidemic to end after a few iterations. considering the real data, we know that this is not the case, at least not for brazil; therefore we manually remove these networks and they are not considered for the average/error calculations. it is important to notice, however, that this rarely happens: in all our experiments we observed a maximum of 4 networks of this kind.
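to make the above rules concrete, the python sketch below draws the clinical course of a newly infected node. the proportions, death rates and duration ranges are the ones quoted in the text; the log-normal parameterization of the incubation time (mean ≈ 6.4 days, standard deviation ≈ 2.3 days, truncated to 2-14 days) is an assumed stand-in for the distribution of [27], and all names are illustrative only.

```python
import random

# minimal sketch: sample the clinical course of a newly infected node, following
# the proportions and durations described in the text (not the authors' code).
def sample_infection_course():
    u = random.random()
    if u < 0.30:                                  # asymptomatic: 30% of cases
        return {"case": "asymptomatic",
                "contagious_days": random.randint(8, 18), "dies": False}
    # incubation: log-normal with mean ~6.4 d and sd ~2.3 d, truncated to [2, 14]
    incubation = min(max(round(random.lognormvariate(1.8, 0.35)), 2), 14)
    if u < 0.85:                                  # mild: 55% of cases
        return {"case": "mild", "incubation": incubation,
                "contagious_days": random.randint(10, 20), "dies": False}
    if u < 0.95:                                  # severe: 10%, hospitalized 6-16 days
        return {"case": "severe", "incubation": incubation,
                "hospital_days": random.randint(6, 16),
                "dies": random.random() < 0.15}   # 15% death rate
    return {"case": "critical", "incubation": incubation,   # critical: 5%
            "icu_days": random.randint(7, 17),
            "dies": random.random() < 0.50}       # 50% death rate
```

with these proportions, the expected overall lethality is 0.10 × 0.15 + 0.05 × 0.50 = 4%, matching the value stated above.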
due to time and hardware constraints, our simulation considers 100,000 nodes, and the results need to be scaled up by a factor of 57 to match the brazilian population statistics. this factor was empirically found by approximating the model results to the number of reported cases in brazil. it is important to stress that, for better statistics, the largest possible number of nodes should be considered to represent a population, i.e. the ideal case would be n = the total country/city population. however, the computational cost of the simulation grows in direct proportion to the number of nodes and edges of the network, and considering the critical situation of the moment at hand, 100,000 nodes are our limit to promptly present results on the epidemic dynamics. in the experiments varying the social distancing, the same network is considered in each iteration, i.e. comparisons of including/excluding layers are made on the same random network. we consider that the epidemic began on february 26, which is the day the first confirmed case was officially reported. it is important to emphasize that we made various optimistic assumptions throughout the model construction and simulation, such as considering that people behave with more caution by reducing direct contact, wearing masks, and doing proper home/hospital isolation when infected. it is also important to notice that we are not considering the number of available icu/regular hospitalization beds for the death count, i.e. all the critical and severe cases are effectively treated. it is not trivial to estimate the direct impact of these numbers on the epidemic; however, this is an essential factor that directly impacts the number of deaths. here we focus on the impacts of different actions on the overall epidemic picture, such as the increase and reduction of cases, deaths, and occupied beds in hospitals. the social network starts normally, with all its layers and the original infection probabilities. the infection starts at the node with degree closest to the average network degree and propagates in iterations of 1 day (up to 300 days). we consider an optimistic scenario, in which people are aware of the virus from the beginning; thus the initial infection probability is β = 0.3. this represents a natural social distancing, a reduction of direct contacts that could cause infection (hugs, kisses, and handshakes), and also precautions when sneezing, coughing, etc. we empirically found that this initial value of β yields results with a higher correlation to the brazilian pandemic. a moderated quarantine is applied after 27 days, representing the isolation measures applied on march 24 by most brazilian states, such as são paulo [1]. to simulate this quarantine we remove the layers of religious activities and schools and reduce the contacts on transports and work down to 30% of the initial β value, i.e. β = 0.09. the remaining activities on these layers represent services that could not be stopped, such as essential services, activities that are kept under higher precautionary measures, and also those who disrespect the quarantine. we compare the output of the model in the first 83 days with real data available for the brazilian epidemic (up to may 18) [24, 34, 42, 43]. the model achieves a significant overall similarity within its standard deviation.
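before examining the comparison with the reported data in more detail, the simulation protocol just described can be outlined as in the python sketch below. it is an illustration under stated assumptions: `build_layers`, `apply_quarantine` and `run_day` are hypothetical placeholders for the layer construction, the layer removal / edge-weight scaling, and the daily agent-based update of this model, and the scenario names anticipate the options analyzed in the next section.

```python
# minimal sketch of the simulation protocol: initial beta = 0.3, a moderated
# quarantine at day 27 (march 24 decrees) and a possible scenario switch at day 90.
def run_scenario(build_layers, apply_quarantine, run_day, scenario, days=300):
    network = build_layers(n_nodes=100_000, beta=0.30)   # all layers, precautionary contacts
    history = []
    for day in range(1, days + 1):
        if day == 27:
            # remove schools and religious activities, scale work/transport contacts to 30%
            apply_quarantine(network, remove=["school", "religious"],
                             scale={"work": 0.3, "transport": 0.3})
        if day == 90:                                     # possible actions after may 26
            if scenario == "stop_isolation":
                apply_quarantine(network, restore_all=True)
            elif scenario == "increase_isolation":
                apply_quarantine(network, remove=["work", "transport"])
            # "keep_isolation": no further change
        history.append(run_day(network))                  # per-day counts of each state
    return history
```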
the greatest difference in the number of diagnosed cases over the last 10 days may be related to the increase in the number of tests being performed in brazil, or to the constant decrease of isolation levels in the country (below 50% on most days of the past month) [22]. we considered here a fixed isolation level around what was observed in the first days after the government decrees in brazil, but data in ref. show that these levels are constantly changing. therefore, the number of diagnosed cases and deaths for the remainder of the simulation may be greater than reported in this paper (see the "keep isolation" scenario in the next section). concerning the daily death toll, the average number produced by the proposed model is greater than the official numbers. this is somewhat expected, considering that underdetection rates may be high given the small number of tests being performed. to better understand this, we analyzed the number of deaths in brazil from january 1 to april 30, comparing the records of 2019 and 2020; the results are shown in figure 5. it is possible to observe a clear increasing pattern after february 26, which is the day of the first officially confirmed case of covid-19 in brazil. this indicates that the real death toll of the disease may be significantly greater than the official numbers. (caption of the comparison figure: model output versus data from the brazilian ministry of health [24] and the world health organization (who) [42]; dotted lines represent the standard deviation, the real-data curve is an average over a 5-day window, and solid lines show the raw data; the greater average number of deaths produced by the proposed model may be related to underdetection, see figure 5.) (caption of figure 5: registered deaths compared between 2019 and 2020 [2]; the total death difference is then compared with the covid-19 records of the who [42] and the brazilian government [2]; the largest difference, which appears right after the first confirmed case, may indicate a significant underdetection of covid-19 cases.) after the initial epidemic phase, we consider 4 possible actions that can be taken after 90 days (may 26): a) do nothing more, maintaining the current isolation levels; b) stop isolation, returning activities to normal (initial network layers and weights); c) return only work activities, restoring the initial probability of that layer; or d) increase isolation, stopping the remaining activities in the work and transport layers (home and random remain). firstly, we analyze the impacts on the number of daily new cases and deaths; results are shown in figure 6. as previously mentioned, at the start of the covid-19 pandemic brazil was performing an order of magnitude fewer tests than other countries with similar epidemic numbers; therefore we consider as diagnosed only the severe and critical cases, which are prioritized for testing, and 20% of the mild cases. the total infection ratio is discussed later. if the current isolation levels are kept, the peak of daily new cases occurs around 100 days after the first case (june 5), with around 11,000 confirmed cases. after 202 days (september 15), the average number of daily cases is around 500, and it goes below 100 daily cases after around 237 days (october 19). the peak of daily new deaths occurs around 118 days (june 23), with an average of 1,900 deaths, and goes below 100 new occurrences after around 210 days (september 24).
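the excess-mortality comparison just described can be reproduced in a few lines, assuming daily death registrations for 2019 and 2020 are available as a table; the file name and column names below are hypothetical, while the 5-day moving average mirrors the smoothing used for the real-data curves.

    import pandas as pd

    # hypothetical file with columns: date, deaths_2019, deaths_2020 (registry totals)
    df = pd.read_csv("registry_deaths.csv", parse_dates=["date"])

    df["excess"] = df["deaths_2020"] - df["deaths_2019"]                      # raw daily difference
    df["excess_smooth"] = df["excess"].rolling(window=5, center=True).mean()  # 5-day window

    # difference accumulated after the first confirmed case (february 26)
    after_first_case = df[df["date"] >= "2020-02-26"]
    print(after_first_case["excess"].sum())

comparing this accumulated difference with the official covid-19 death records gives a rough lower bound on how many deaths may be missing from the reported figures.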
it is important to stress that this is a hypothetical scenario in which the isolation level remains the same from day 27 to day 300, which is hardly true in the real world, where it is constantly changing [22]. the total numbers after the last day (300) amount to 946,830 (±10,507) diagnosed cases and 149,438 (±3,124) deaths. when we consider the return of all activities after 90 days, the number of cases and deaths grows significantly, in an exponential fashion. the peak occurs at 108 days (june 13) with an average of 40,937 (±11,010) new cases, and at 122 days (june 27) with an average of 6,484 (±1,739) new deaths. although the peak of cases/deaths and the subsequent decrease occur earlier, in this case the final result is critically worse, with a total of 1,340,367 (±18,513) diagnosed cases and 212,105 (±4,359) deaths. here it is important to notice that we considered that all activities return after 90 days and remain fully operational until the last day (300). moreover, we do not account for the overloading of hospitals, which directly impacts the final death count; therefore, the number of deaths may be considerably higher. another possible scenario is the return of only the work layer, keeping reduced transport and no school or religious activities; however, the pattern is similar to returning all activities in terms of growth time, peak, and decay time. the final numbers in this case are 1,253,119 (±26,009) diagnosed cases and 197,756 (±5,693) deaths. if the isolation is strictly increased after 90 days (lockdown), the infection and death counts drop significantly in comparison to the other approaches. moreover, the recovery is much faster, as daily new cases stop earlier than in the other scenarios. the peak of daily new cases happens around day 93 (june 1), and that of daily new deaths around day 106 (june 11). the total numbers of diagnosed cases and deaths after day 300 are, respectively, 552,855 (±195,802) and 87,059 (±30,871). considering the hospitalization times described in the scheme of figure 3, it is possible to estimate the number of occupied beds for regular hospitalization (severe cases) and icu/ventilation (critical cases); a minimal sketch of this bookkeeping is given after this paragraph. we also show the difference between the cumulative growth of diagnosed and undiagnosed cases and of recovered cases. the same approach as in the previous experiment is considered (except for "return work"), with 3 possible actions after 90 days (may 26); results are shown in figure 7. the overall pattern of the results is similar to that previously observed for the number of diagnosed cases and deaths. it is possible to notice that the number of undiagnosed cases is much higher than the number of diagnosed cases. this reflects the number of asymptomatic cases and the lack of tests for mild cases. in the worst scenario, which means ending the isolation, the total number of infected may go above 5 million cases. the recovered rate is directly proportional to the infected rate, as one needs to be infected to either die or become resistant to the disease. if the infected rate is high, so is the recovered rate, e.g. in the scenarios of keeping or ending isolation, and a high recovered rate also helps in mitigating the epidemic propagation (natural immunization). however, increasing isolation decreases the propagation much faster than natural immunization, with a considerably smaller death toll.
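the bed-occupancy bookkeeping referred to above can be sketched as follows, assuming each hospitalized case is stored as a (admission_day, discharge_day, bed_type) record; the stay-length ranges behind those records and the scale factor of 57 come from the text, while the data layout and function name are illustrative.

    from collections import Counter

    SCALE = 57  # model nodes -> brazilian population

    def occupancy(stays, horizon=300):
        """stays: list of (admission_day, discharge_day, bed_type) for simulated cases."""
        counts = {"regular": Counter(), "icu": Counter()}
        for start, end, bed in stays:
            for day in range(start, min(end, horizon) + 1):
                counts[bed][day] += 1
        return {bed: [SCALE * counts[bed][d] for d in range(horizon + 1)] for bed in counts}

    # example with two illustrative cases (severe: 6-16 day stay, critical: 7-17 day stay)
    curves = occupancy([(100, 112, "regular"), (101, 115, "icu")])
    peak_icu = max(curves["icu"])

the peak of each curve is what is compared against the roughly 45,848 public and private icu beds available in the country.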
it is also possible to observe the differences at the start of effective recovery, i.e. when the recovered rate surpasses the infected rate, which is due to the early increase in isolation levels. the peak of hospitalization occupancy occurs around a week before the death peak in any scenario. icus are particularly important here because critical patients, the cases with the highest death rates, are treated there. within the "end isolation" setting, patients may occupy up to an average of 215,285 (±48,682) regular beds and 109,520 (±24,647) icu beds. these numbers are far greater than brazil's entire capacity, as public and private icu beds together sum to 45,848 [3]. even considering the better scenario, i.e. the lower bound of the standard deviation, the number of occupied icu beds may reach around 86,000, which is also critical for brazil's capacity (almost 2 times its capacity). in this "end isolation" setting, the healthcare system would surely collapse. when the isolation levels are kept, the numbers are significantly lower. however, the occupancy of 66,110 (±16,759) regular beds and 33,470 (±7,926) icu beds is still critical for the brazilian health system. considering the creation of new provisional icu units and good patient logistics, the situation may still remain under control during the peak of hospitalization occupancy. however, the results show that hospital occupancy is prolonged considerably in this scenario, and hospitals may keep functioning around their maximum capacity for up to a month (with an average of more than 30,000 occupied icu beds). when isolation is increased, the peak of occupied beds is smaller, with an average of 63,226 (±20,682) regular beds and 31,816 (±10,592) icu beds. moreover, the shape of the curve over the days is different and the final numbers are considerably smaller. the peak also occurs around a week earlier and then decreases much faster. this scenario would be preferable as it has a much better chance of not overloading the brazilian healthcare system, relieving hospital occupancy considerably faster and, therefore, contributing to the reduction of the number of deaths. this work presents a new approach for modeling the covid-19 epidemic dynamics based on multi-layer complex networks. each node represents a person, and edges are social interactions divided into 6 layers: home, work, transports, schools, religions, and random relations. each layer has its own characteristics based on how people usually interact in that activity. the propagation is performed using an agent-based technique, a modification of the sir model, in which edge weights represent the infection probability, varying with the layers and the groups with which the node interacts, scaled by a β term that controls the chance of infection. the network structure is built based on demographic statistics of a given country, region, or city, and the propagation simulation is performed in time iterations that represent days. here, we studied in depth the case of the brazilian epidemic, considering its population properties and also specific events, such as when the first isolation measures were taken, and the impacts of future actions. brazil is a large and populated country with a wide variety of geographical location types and climates, and it also has a lengthy border with other countries to the west. it is a challenging setting for any epidemiological study. here we consider an average over the whole country population, as we adjust the model output to match statistics from the official epidemic reports.
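the propagation rule recapped above can be written as a single daily step: every infectious node tries to infect each susceptible neighbour in every active layer, with probability given by the edge weight scaled by β. the function below is a schematic reading of that rule, assuming the layers are weighted networkx graphs and node status is held in a plain dictionary; it is not the authors' implementation.

    import random
    import networkx as nx

    def daily_step(active_layers, status, beta, rng=random):
        """status maps node -> 'S', 'I' (infectious) or 'R'; returns newly infected nodes."""
        new_infections = set()
        infectious = [v for v, s in status.items() if s == "I"]
        for layer in active_layers.values():
            for v in infectious:
                if v not in layer:
                    continue  # e.g. hospitalized nodes removed from this layer
                for u in layer.neighbors(v):
                    if status.get(u) == "S":
                        w = layer[v][u].get("weight", 1.0)   # contact intensity of this layer
                        if rng.random() < beta * w:
                            new_infections.add(u)
        for u in new_infections:
            status[u] = "I"
        return new_infections

running this step once per simulated day, while updating the active layers and β according to the quarantine schedule, reproduces the general structure of the simulation described above.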
brazil is performing fewer tests than other countries at the same epidemic scale; however, it is known that testing for infection is always limited, either due to the low number of tests or to the velocity of infections, which the testing procedure cannot keep up with. we therefore considered that only hospitalized cases and 20% of the mild cases are diagnosed. asymptomatic cases are not diagnosed and keep acting normally in the network, considering the active layers. regarding the isolation of infected nodes, we make some optimistic assumptions: mild cases (even those not diagnosed) are aware of their symptoms and isolate themselves at home. severe and critical cases are eventually hospitalized, and then fully isolated from the network (removal of all their edges). under the described scenario, the network starts with all its layers and β = 0.3, representing that people have been aware of the virus from the beginning (even before isolation measures). after 27 days from the first confirmed case, the first isolation measures are taken: schools and religious activities are stopped and work and transport keep functioning at 30% of the initial scale (achieved by further reducing the β term). different actions are then considered after 90 days from the first case: keep the current isolation levels, increase isolation, end isolation returning all activities to 100%, or return only the work activities. the results show that keeping approximately the current isolation levels results in a prolonged propagation, as we are near the estimated peak (around june 5) with an average of 11,000 daily new cases and 1,900 daily new deaths, and an average of 946,830 diagnosed cases (up to 3.6 million infected) and 149,438 deaths until the end of the year. in this scenario, hospitals may exceed their maximum capacity around june 11, but the efficient implementation of new icu beds and good logistic management of patients may still keep the situation under control. however, this is a very optimistic assumption, considering that our definition of "keep isolation" assumes social isolation above 50%, as registered at the beginning of the brazilian quarantine [22]. the social isolation levels in brazil are constantly decreasing even while we are still in a state of moderate quarantine, and average isolation below 50% was observed on most days of the past month (middle of april to middle of may 2020). moreover, the results show that this prolonged scenario may cause hospitals to keep functioning at maximum capacity for up to a month. when analyzing other possible scenarios the situation may be considerably different. relaxing isolation measures from now on causes an abrupt increase in the daily growth of cases and deaths, up to 5 times higher in comparison to the current isolation levels. even if only work activities return while schools, religion, and transport activities remain inactive/reduced, the impact is very similar to returning all the activities, which can lead to above 1.34 million diagnosed cases (up to 5.2 million infected) and around 212,105 deaths until the end of the year. this is, again, a very optimistic assumption, as we do not consider hospital overflow when calculating the death toll. considering this aspect, icu beds may be fully occupied in early june, and around the middle of the month their demand may reach up to 134,000 beds, which is around 3 times higher than the entire country's capacity.
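the reporting rule stated above (all hospitalized cases plus 20% of mild cases counted as diagnosed, asymptomatic cases never counted) and the upscaling by the factor of 57 can be sketched as below; the case records are the same illustrative dictionaries used in the earlier timeline sketch, so the field names are assumptions.

    import random

    SCALE = 57
    MILD_DIAGNOSIS_RATE = 0.20

    def diagnosed_count(cases, rng=random):
        """cases: iterable of dicts with a 'severity' key, as in the timeline sketch."""
        diagnosed = 0
        for case in cases:
            severity = case["severity"]
            if severity in ("severe", "critical"):
                diagnosed += 1                       # hospitalized cases are always tested
            elif severity == "mild" and rng.random() < MILD_DIAGNOSIS_RATE:
                diagnosed += 1                       # only a fifth of mild cases are tested
            # asymptomatic cases are never diagnosed
        return SCALE * diagnosed                     # scale model nodes up to population level

the gap between this diagnosed count and the total number of infected records is what drives the large difference between diagnosed and undiagnosed cases discussed above.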
the other option, the increase of isolation levels (lockdown), appears to be the only alternative to stop the healthcare system from entering a very critical situation. in this scenario, the growth in the number of daily cases and deaths would be mitigated, and more quickly. as we are near the peak of new cases at current isolation levels, estimated to be between the beginning and middle of june, increasing the isolation levels does not have a significant impact on when the peak occurs or on its magnitude. however, the disease spreading and the occurrence of new cases decrease much faster in this scenario than in any other scenario studied here, with a difference of months. moreover, the final numbers are considerably smaller, with an average of 552,855 diagnosed cases (up to 2.1 million infected) and 87,059 deaths until the end of the year. although the proposed method includes various demographic information for the network construction and an improved sir approach to covid-19, it still does not cover all factors that impact the epidemic propagation. as future work, one may consider more information, such as the correlation between the age distribution within the social organization and the clinical spectrum of the 4 infection types (e.g. severe and critical cases are mostly composed of risk groups). another possible improvement consists of increasing n (the number of nodes of the networks), e.g. using a value near the real population of the studied society, which we avoided here due to hardware and time constraints (graph processing is costly). another important point regarding the obtained results is related to the "keep isolation" scenario, which may be underestimated, as we make various optimistic assumptions and also consider a fixed isolation level based on previously observed data, while the most recent data show that these levels are decreasing [22]. therefore, during the network evolution, a possible improvement is the use of dynamic isolation levels to better represent reality. it is also possible to consider various scenarios for future actions, such as 2 or more measures of increasing/reducing isolation. this may allow the discovery of new epidemic waves if social activities return too soon after the isolation period, such as what happened in 1918 with the spanish flu. portal da transparência -painel covid registral amib.
brazilian intensive care medicine association: updated data on icu beds in brazil, 2020, visited on 2020-05-08 infectious diseases of humans: dynamics and control incubation period of 2019 novel coronavirus (2019-ncov) infections among travellers from wuhan, china presumed asymptomatic carrier transmission of covid-19 the mathematical theory of infectious diseases and its applications emergence of scaling in random networks modeling and forecasting the covid-19 pandemic in brazil covid-19 in critically ill patients in the seattle region-case series demand for hospitalization services for covid-19 patients in brazil analyzing and modeling real-world phenomena with complex networks: a survey of applications complex networks: the key to systems biology data analysis and modeling of the evolution of covid-19 in brazil epidemic spreading with awareness and different timescales in multiplex networks impact of non-pharmaceutical interventions (npis) to reduce covid19 mortality and healthcare demand clinical features of patients infected with 2019 novel coronavirus in wuhan, china tabela 137 -população residente, por religião tabela 185 -domicílios particulares permanentes por situação e número de moradores ibge. pesquisa nacional por amostra de domicílios contínua trimestral: tabela 5918 -população, por grupos de idade instituto nacional de estudos e pesquisas educacionais anísio teixeira: dados do censo escolar: ensino médio brasileiro tem média de 30 alunos por sala instituto de pesquisa econômica aplicada: sistema de indicadores de percepção social (sips) coronavirus resource center asymptomatic carrier state, acute respiratory disease, and pneumonia due to severe acute respiratory syndrome coronavirus 2 (sarscov-2): facts and myths positive rt-pcr test results in patients recovered from covid-19 the incubation period of coronavirus disease 2019 (covid-19) from publicly reported confirmed cases: estimation and application incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data the small world problem portal do covid-19 estimating the asymptomatic proportion of coronavirus disease 2019 (covid-19) cases on board the diamond princess cruise ship, yokohama, japan epidemics and percolation in small-world networks scaling and percolation in the small-world network model covid-19 coronavirus data model studies on the covid-19 pandemic in sweden the epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (covid-19)-china clinical characteristics of asymptomatic and symptomatic patients with mild covid-19 collective dynamics of 'small-world'networks world health organization -coronavirus disease 2019 (covid-19): situation report, 73 world health organization -modes of transmission of virus causing covid-19: implications for ipc precaution recommendations world health organization coronavirus disease (covid-19) dashboard clinical course and risk factors for mortality of adult inpatients with covid-19 in wuhan, china: a retrospective cohort study key: cord-022176-hprwqi4n authors: löscher, thomas; prüfer-krämer, luise title: emerging and re-emerging infectious diseases date: 2009-07-28 journal: modern infectious disease epidemiology doi: 10.1007/978-0-387-93835-6_3 sha: doc_id: 22176 cord_uid: hprwqi4n emerging infectious diseases (eids) are characterized by a new or an increased occurrence within the last few decades. 
they include the following categories. emerging diagnosis of infectious diseases: old diseases that are newly classified as infectious diseases because of the discovery of a responsible infectious agent. europe including great britain as well as in india, china, and japan. emerging vector-borne disease events concentrated in densely populated subtropical and tropical regions, mostly in india, indonesia, china, sub-saharan africa, and central america (see figs. 3.3, 3.4, and 3.5). the identification of new infectious agents in old diseases of unknown etiology is still the basis of many epidemiological studies. such newly detected bacteria and viruses of the last few decades are listed in table 3.1. since the detection of helicobacter pylori in 1983, this infection has been identified as the causative agent in 90% of b-gastritis cases. the risk of duodenal ulcer is increased 4-25-fold in patients with helicobacter-associated gastritis. who declared h. pylori a carcinogen of the first order because of its potential to enhance the risk of stomach carcinoma and malt lymphoma in long-term infection. in high-prevalence regions for h. pylori, the frequency of stomach carcinoma is significantly higher compared to low-endemic areas (correa et al. 1990). the identification of h. pylori facilitates curative treatment of most associated diseases in individuals. but the most important epidemiological effect on associated diseases is attributed to increased hygienic standards in industrialized countries, with a substantial reduction of h. pylori prevalences in younger age cohorts. transmission of h. pylori occurs mainly in childhood. in western developed countries the overall prevalence is around 30%, higher in older age groups due to a cohort effect, and it increases with low socioeconomic status (rothenbacher et al. 1989). in countries with low hygienic standards the prevalences are still high in younger age groups and reach 90% in developing countries. in developed countries, migrant subpopulations from less-developed regions show significantly higher prevalences in comparison to the nonmigrant population (mégraud 1993). since the early 20th century, a characteristic expanding skin lesion, erythema migrans (em), and an arthritis associated with previous tick bites were known, but the causative borrelia remained unidentified for many decades. increased outdoor activities facilitated contacts between humans and ticks in the 1970s and the 1980s and increased transmission of borrelia to humans at the northeastern coast of north america, leading to the discovery of borrelia burgdorferi in 1981 by willy burgdorfer. three different stages of the disease that describe the stage of infection and the involvement of different organ systems are known: stage 1, early localized infection; stage 2, early but disseminated infection; and stage 3, late stage with persistent infection. lyme disease is endemic at the east coast and in minnesota in the united states, in eastern and central europe, and in russia. seroprevalence rates, which reflect about 50% of nonclinical infections, vary between 2 and 18% in the general population in germany (hassler et al. 1992; weiland et al. 1992). in high-risk groups like forest workers in germany the prevalences reach 25-29% (robert koch institute 2001a). in ticks (ixodes) the prevalences are between 2 and 30%, depending on the geographical area and the testing method used [immunofluorescence test (ift) and polymerase chain reaction (pcr)].
in most studies the main risk factors of infection are age (children: 4-9 years, adults: 35-60 years), outdoor activities, skin contacts with bushes and grass, and the presence of ticks on domestic animals (robert koch institute 2001b). the probability of infection (seroconversion) after a tick bite in germany is 3-6% and the probability of clinical disease is 0.3-1.4%. the probability that the bite of an infectious tick leads to infection in the host is 20-30%. this depends on the duration of time the tick feeds on the human body. since the detection of the etiologic infectious agent and the subsequent development of laboratory diagnostic tests in the 1980s, the number of reported cases of lyme disease has increased from 0 to 16,000 per year, indicating that it is an "emerging diagnosis." the reported numbers vary depending on the reproduction of the rodents hosting the ticks as well as the contacts between humans and nature (spach et al. 1993). ticks may live for several years, and their survival, reproduction rate, and activity are directly affected by changes in seasonal climate through induced changes in vegetation zones and biodiversity, hence causing local alterations of the tick's habitat and of the occurrence of animals that are carriers of different pathogens (like small rodents). several studies in europe have shown that in recent decades the tick ixodes ricinus, transmitting lyme borreliosis and tick-borne encephalitis (tbe), has spread into higher latitudes (e.g., sweden) and altitudes (e.g., czech republic, austria), and has become more abundant in many places. such variations have been shown to be associated with recent variations in climate. as a result, new risk areas of both diseases have recently been reported from the czech republic. climate change in europe seems likely to facilitate the spread of lyme borreliosis and tbe into higher latitudes and altitudes, and to contribute to extended and more intense transmission seasons. currently, the most effective adaptive strategies available are tbe vaccination of risk populations and preventive information to the general public (danielova et al. 2004; lindgren et al. 2006; materna et al. 2005). an effective vaccine against b. burgdorferi was licensed in 1999. in europe, where different variants of borrelia are present (mostly b. afzelii and b. garinii), this vaccine is not protective. trivalent vaccines for europe are in clinical trials. in recent years, norovirus infections have been increasingly recognized as the cause of large outbreaks of diarrheal diseases in the general population, school classes, nursing homes, hospitals, and cruise ships in western countries, with peaks in colder seasons (winter epidemics) (centers for disease control 2006; verhoef et al. 2008; robert koch institute 2008a). this is a typical example of an emerging diagnosis due to the increasing availability of routine pcr testing for these viruses in stool samples. noroviruses (family caliciviridae) are a group of related, single-stranded rna viruses first described in an outbreak of gastroenteritis in a school at norwalk, ohio, in 1968. five genogroups are known. immunity seems to be strain specific and lasts only for limited periods, so individuals are likely to get the infection repeatedly throughout their life. it is estimated that noroviruses are the cause of about 50% of all food-borne outbreaks of gastroenteritis.
for several years there has been an ongoing epidemic in several european countries due to drift variants of a new genotype (gg ii.4 jamboree) previously unknown to this nonimmune population (robert koch institute 2008a). in an analysis of 232 outbreaks in the united states between 1997 and 2000, direct contamination of food by a food handler was the most common cause (57%), person-to-person transmission was less prevalent (16%), and waterborne transmission could be proved even less frequently (3%) (centers for disease control 2006). vomiting is a frequent symptom of norovirus enteritis and may result in infectious droplets or aerosols causing airborne or contact transmission. this may explain the difficulty of stopping outbreaks in hospitals, nursing homes, and similar settings despite precautions to prevent fecal-oral transmission. on cruise ships as well, person-to-person transmission is most likely in those closed settings, and drinking tap water is a risk factor too (verhoef et al. 2008). in the search for an agent causing large outbreaks of enterically transmitted non-a, non-b hepatitis in asia and other parts of the world, the hepatitis e virus (hev) was first described in 1983 and cloned and sequenced in 1990 (reyes et al. 1990). meanwhile, hev has been shown to be a zoonotic virus circulating in pigs and other animals. it is implicated in about 50% of sporadic cases of acute hepatitis in developing countries and associated with a high case fatality rate in the third trimester of pregnancy (10-25%). hev is a major cause of large epidemics in asia, and to a lesser extent in africa and latin america, typically promoted through postmonsoon flooding with contamination of drinking water by human and animal feces. recent data show hev also to circulate in european countries and to be associated with severe and fatal disease not only during pregnancy but also in the elderly and in patients with chronic liver conditions. in patients with solid organ transplants, hev may even cause chronic hepatitis and liver cirrhosis (kamar et al. 2008). a recombinant hev vaccine candidate has demonstrated a high protection rate of approximately 95% during clinical trials in nepal (shrestha et al. 2007). for 30 years, specific human papillomaviruses have been linked to certain human cancers and have been identified as causative agents of malignant proliferations. in the 1980s the detection of papillomavirus dna in cervical carcinoma biopsies was published, showing that hpv types 16 and 18 are the most frequent (dürst et al. 1983; boshart et al. 1984). the relation of hpv infections and cancer is further discussed in chapter 23. definition: only infections that are newly discovered in humans are listed in this chapter: hiv, new variant of creutzfeldt-jakob disease (vcjd), hemolytic uremic syndrome (hus) caused by enterohemorrhagic escherichia coli, viral hemorrhagic fevers like hanta, lassa, ebola, and marburg fever, nipah virus encephalitis, monkeypox, human ehrlichiosis, severe acute respiratory syndrome (coronavirus infection, sars), and avian influenza (h5n1) (see fig. 3.1 and table 3.2). these infections mostly have their origin in zoonotic wildlife (e.g., avian influenza, monkeypox, hantavirus, nipah virus, and filoviruses) or livestock (e.g., vcjd). factors promoting the spread of these infections in humans are contacts with wildlife, mass food production of animal origin, and globalization (migration, transportation of goods and vectors) (see fig. 3.2).
in addition, new strains or variants of well-known pathogens have emerged showing increased or altered virulence, such as clostridium difficile ribotype 027 or staphylococcus aureus strains expressing the panton-valentine leukocidin (see also chapter 22). the epidemiology of hiv is treated in chapter 18 and that of avian influenza and new influenza h1n1 in chapter 16. in the year 1995, 3 years after the peak of the bse epidemic in the united kingdom, with an annual incidence rate in cows of 6.636 per million bovines aged over 24 months, the first deaths of humans from a new variant of creutzfeldt-jakob disease were observed in the united kingdom. until 2007, smaller incidence rates of bse cases had been reported by 21 other european countries in indigenous bovines, up to more than 43,000 per million in 2004 in ireland. from 1999, bse started to increase in switzerland and portugal, from 2004 in spain, and in recent years it has spread to eastern european countries (organisation mondiale de la santé animale 2008). the infectious agent is a self-replicating protein, a "prion." the source of infection for cows is infectious animal meal (meat-and-bone meal). transmission to humans occurs through oral intake of cow products, most likely undercooked meat and nervous tissues, as well as through transplants of cornea or dura mater, contaminated surgical instruments, or treatment with hypophyseal hormones extracted from animal tissues. after a statutory ban on the feeding of protein derived from ruminants to any ruminant and the export ban of all cow products from england, the epidemic of bse in cows and the occurrence of human infections have decreased in the united kingdom since 2004. by june 2008 the total number of deaths in definite/probable cases of vcjd in the united kingdom was 163 (the national creutzfeldt-jakob disease surveillance unit 2008). only small numbers of vcjd cases were reported from other european countries and the united states (who 2008). nipah virus encephalitis was first observed in 1997/98 in malaysia. the disease was transmitted by pigs to laborers in slaughterhouses and showed a lethality of 40%. the infectious agent was detected in 1999 (chua et al. 2000; lam and chua 2002). since then, several outbreaks of nipah virus infections have been observed in asian countries: singapore in 1999, india in 2001, and bangladesh since 2003 (who 2004a; harit et al. 2006). the virus has been isolated repeatedly from various species of fruit bats, which seem to be the natural reservoir (yob et al. 2001). west nile is a mosquito-borne flavivirus that was first isolated from a woman with a febrile illness in uganda in 1937. from the 1950s, west nile fever endemicity and epidemics started being reported from africa and the middle east. severe neurological symptoms were thought to be rare. more recent epidemics in northern africa, eastern europe, and russia suggested a higher prevalence of meningoencephalitis with case fatality rates of 4-13%. in 1999, west nile virus was identified as the cause of an epidemic of encephalitis at the east coast of the united states (nash et al. 1999). a seroepidemiological household-based survey showed that the first outbreak consisted of about 8,000 infections, of which about 1,700 developed fever and less than 1% experienced neurological disease. since then, epidemics have occurred during summer months in north america each year, with an estimated 35,000 febrile illnesses and over 1,200 encephalitis or meningitis cases in the united states in 2007 (centers for disease control 2008).
age above 50 years is the main risk factor for developing severe disease. the virus is transmitted mainly by culex mosquitoes, but also by sandflies, ceratopogonids, and ticks, with birds as reservoir hosts and incidental hosts such as cats, dogs, and horses. efforts are made to reduce the transmitting mosquito population and to prevent mosquito bites through personal protection, as well as to prevent transmission through blood donations by screening (centers for disease control 2008). the first case of sars occurred in guangdong (china) in november of 2002, leading to an outbreak with 7,082 cases in china and hong kong (8,096 cases worldwide) until july 2003. the case fatality rate was 9.6%. a new coronavirus (sars-cov) was identified as the causative agent (drosten et al. 2003), being transmitted first by infected semidomesticated animals such as the palm civet and subsequently from human to human. some cases were exported to other countries, causing smaller outbreaks there, canada being the most affected country outside asia with 251 cases, before control of transmission was effective. in total, 8,096 cases were reported worldwide until july 2003; then further transmission stopped (apart from one more case of laboratory transmission in 2004), indicating efficient international cooperation in disease control (who 2004b). recently, sars-cov has been found in horseshoe bats, which seem to be the natural reservoir of the virus. about 150,000-200,000 cases of hemorrhagic fever with renal syndrome (hfrs) caused by hantaviruses are reported annually worldwide, with more than half in china, many from russia and korea, and numerous cases from japan, finland, sweden, bulgaria, greece, hungary, france, and the balkans, with different death rates depending on the responsible virus, ranging from 0.1% in puumala to 5-10% in hantaan infections (schmaljohn and hjelle 1997). hantaviruses are transmitted from rodent to rodent through body fluids and excreta. only occasionally do humans get infected. different types of hantaviruses are circulating in europe and the eastern hemisphere, predominantly puumala virus, dobrava virus, and tula virus, adapted to different mouse species. depending on the virus type, the case fatality rate is between 1 and 50%. as an example, the annual number of reported cases in germany was about 100 per year from 2001 onward. this started to change in 2005 with 448 reported cases and rose dramatically to 1,687 cases in 2007. that year, hantavirus infections were among the five most frequently reported viral infections in germany. reasons for the rise in human infections were an increase in the hosting rodent population due to a very mild winter 2006/2007 and an early start of warm temperatures in spring, which led to a favorable nutritional situation for the mice and influenced their population dynamics. in addition, favorable climatic conditions enhanced the outdoor behavior of humans, facilitating transmission in rural areas (robert koch institute 2008b; hofmann et al. 2008). since 1993, a previously unknown group of hantaviruses (sin nombre, new york, black creek canal, and bayou in the united states and canada; andes in south america) has emerged in the americas as a cause of hantavirus pulmonary syndrome (hps), an acute respiratory disease with high case fatality rates (approx. 35%), causing a new, significant public health concern.
a total of 465 cases had been reported until march 2007 in 32 states, most of them in the western part of the united states (centers for disease control 2007). lassa virus was detected for the first time in 1969 during an outbreak affecting nurses in a missionary hospital in lassa, nigeria. however, the disease had previously been described in the 1950s. lassa virus is enzootic in a common peridomestic rodent in west africa, the multimammate rat mastomys natalensis, which is chronically infected and sheds the virus in urine and saliva. human infection through direct or indirect contact with rats or their excretions is rather common in some west african countries, and estimates from seroepidemiological and clinical studies suggest that there are several hundred thousand cases annually. however, only a minority of infections seems to progress to severe hemorrhagic disease, with a case fatality rate of 5-30% in hospitalized cases. the virus can be transmitted by close person-to-person contact, and nosocomial spread has been observed under poor hygienic conditions. marburg and ebola viruses, which were first detected during outbreaks in 1967 and 1976, respectively, have so far been observed only during several limited outbreaks and a few isolated cases in certain countries of sub-saharan africa. however, very high case fatality rates (25-90%), the occurrence of outbreaks that were difficult to control in resource-poor settings, and the obscure origin of these viruses have attracted considerable public interest worldwide. recently, evidence was found for both marburg and ebola viruses to occur in certain species of bats that probably constitute the natural reservoir of these filoviruses (towner et al. 2007). although the disease burden of these viral hemorrhagic fevers is low, they have gained considerable international attention due to their high case fatality rates, the risk of person-to-person transmission, several cases imported to industrialized countries, and fears of abuse of these agents for bioterrorism. as a consequence, considerable resources have been invested, even in nonendemic countries, in the setting up of task forces and high-containment facilities for both laboratory diagnostic services and treatment of patients using barrier nursing. the highly virulent c. difficile strain ribotype 027 expresses both cytotoxins a and b and, in addition, the binary toxin cdt, an adp-ribosyltransferase. due to a deletion in the regulatory tcdc gene, the synthesis rates of toxins a and b are increased 16- and 23-fold, respectively. this strain was detected in 2000 for the first time in pittsburgh, usa. since then it has spread to canada, and in 2003 it reached europe, causing multiple outbreaks in hospitals and nursing homes (warny et al. 2005). c. difficile 027-associated colitis has shown high case fatality rates (10-22%) and an increased relapse rate. containment of outbreaks in hospitals and other institutions necessitates isolation of patients or cohorts and strict hygienic measures. during recent decades, a large variety of well-known infectious diseases has shown regional or global re-emergence with considerable public health relevance (table 3.3). globally, tuberculosis is probably the most important re-emerging infectious disease. in developing countries, tb infection is still extremely common and, in the wake of the hiv pandemic, the percentage of those developing overt disease has increased dramatically. worldwide, tb is the most common opportunistic infection in patients with aids.
the significance of tb and hiv/tb coinfection is reviewed in chapters 16 and 18. the re-emergence of some infectious diseases is closely related to the lack or the breakdown of basic infrastructures, as seen in periurban slums and in refugee camps in developing countries, or as a consequence of war, breakdown of civil society, or natural or man-made disasters. cholera is a formidable example of both re-emergence and epidemic spread under those conditions. another important group of re-emerging infectious diseases is caused by various vector-borne infections, such as malaria, dengue fever, and yellow fever. these major vector-borne diseases are treated in more detail in chapter 21. in addition, there are a variety of re-emerging infections transmitted by arthropod vectors, such as various arboviral diseases and some protozoal diseases other than malaria (i.e., leishmaniasis, human african trypanosomiasis). the reasons for the emergence of several vector-borne diseases are rather variable and may range from climatic factors (e.g., global warming, rainfall) and lack or breakdown of control to changes in agriculture and farming and in human behavior (e.g., outdoor activities). these factors are usually quite specific for each of these diseases and largely depend on the specific ecology of the agent, its vectors, and reservoirs. cholera, an acute diarrheal infection transmitted by fecally contaminated water and food, had been endemic for centuries in the ganges and brahmaputra deltas before it started to spread to the rest of the world in the 19th century. since 1817, six pandemics caused by the classical biotype of vibrio cholerae were recorded that killed millions of people across europe, africa, and the americas. it has been a major driving force for the improvement of sanitation and safe water supply. the seventh pandemic was caused by the el tor biotype, first isolated from pilgrims at the el tor quarantine station in sinai in 1906. it started in 1961 in south asia, reached africa in 1971, and is still ongoing. after more than a hundred years, cholera spread to the americas in 1991, and beginning in peru, a large epidemic hit numerous latin american countries, with 1.4 million cases and more than 10,000 fatalities reported within 6 years. out of the 139 serogroups of v. cholerae, only o1 and o139 can cause epidemics. the serogroup o139, first identified in bangladesh in 1992, possesses the same virulence factors as o1 and creates a similar clinical picture. currently, the presence of o139 has been detected only in southeast and east asia, but it is still unclear whether v. cholerae o139 will extend to other regions. since 2005, the re-emergence of cholera has been noted in parallel with the ever-increasing size of vulnerable populations living in unsanitary conditions. cholera remains a global threat to public health and one of the key indicators of social development. while the disease is no longer an issue in countries where minimum hygiene standards are met, it remains a threat in almost every developing country. the number of cholera cases reported to the who during 2006 rose dramatically, reaching the level of the late 1990s. a total of 236,896 cases were notified from 52 countries, including 6,311 deaths, an overall increase of 79% compared with the number of cases reported in 2005. this increased number of cases is the result of several major outbreaks that occurred in countries where cases had not been reported for several years, such as sudan and angola.
it is estimated that only a small proportion of cases (less than 10%) are reported. the true burden of disease is therefore grossly underestimated. the absence or shortage of safe water and sufficient sanitation, combined with a generally poor environmental status, are the main causes of spread of the disease. typical at-risk areas include periurban slums where basic infrastructure is not available, as well as camps for internally displaced people or refugees where minimum requirements of clean water and sanitation are not met. however, it is important to stress that the belief that cholera epidemics are caused by dead bodies after disasters, whether natural or man-made, is false. on the other hand, the consequences of a disaster, such as disruption of water and sanitation systems or massive displacement of populations to inadequate and overcrowded camps, will increase the risk of transmission. chikungunya virus, an arbovirus belonging to the alphavirus group, is transmitted by various mosquitoes. the virus was first isolated in tanzania in 1952 and since then has caused smaller epidemics in sub-saharan africa and parts of asia with low public health impact. in 2005, the largest epidemic ever recorded started in east africa, spread to réunion and some other islands of the indian ocean, and then spread further to asia, with more than 1.5 million cases in india alone so far. characteristics of the disease are high fever and a debilitating polyarthritis, mainly of the small joints, that can persist for months in some patients. now, for the first time, severe and fatal cases have been observed that may be due to certain mutations of the epidemic strain (parola et al. 2006). the asian tiger mosquito aedes albopictus has proved to be an extremely effective vector in recent epidemics, causing high transmission rates in big cities and leading to epidemics with high public health impact. this southeast asian mosquito species has been shipped to other continents by the transport of used tires and plants harboring water contaminated with larvae and, since 1990, ae. albopictus has successfully spread in italy and other parts of southern europe. in august 2007, an outbreak of chikungunya fever occurred in northern italy with more than 200 confirmed cases. the index case was a visitor from india who fell ill while visiting relatives in one of the villages, and further transmission was facilitated by an abundant mosquito population during that time, as a consequence of seasonal synchronicity (rezza et al. 2007). ross river virus (rrv) is another arbovirus of the alphavirus group that causes an acute disease with or without fever and/or rash. most patients experience arthritis or arthralgia primarily affecting the wrist, knee, ankle, and small joints of the extremities (epidemic polyarthritis). about one-quarter of patients have rheumatic symptoms that persist for up to a year. the disease can cause incapacity and inability to work for months. it is the most common arboviral disease in australia, with an average of almost 5,000 notified cases per year. rrv is transmitted by various mosquito species and circulates in a primary mosquito-mammal cycle involving kangaroos, wallabies, bats, and rodents. a human-mosquito cycle may be present in explosive outbreaks, which occur irregularly during the summer months in australia and parts of oceania. heavy rainfalls as well as increasing travel and outdoor activities are considered important factors contributing to the emergence of rrv epidemics.
japanese encephalitis (je) virus, a flavivirus, is transmitted by certain culex mosquitoes and is a leading cause of viral encephalitis in asia, with 30,000-50,000 clinical cases reported annually. it occurs from the islands of the western pacific in the east to the pakistani border in the west, and from korea in the north to papua new guinea in the south. only 1 in 50-200 infections will lead to encephalitis, which is, however, often severe, with fatality rates of 5-30% and a high incidence of neurological sequelae. despite the availability of effective vaccines, je causes large epidemics and has spread to new areas during recent decades (e.g., india, sri lanka, pakistan, torres strait islands, and isolated cases in northern australia). je is particularly common in areas where flooded rice fields attract water fowl and other birds as the natural reservoir and provide abundant breeding sites for mosquitoes such as culex tritaeniorhynchus, which transmit the virus to humans. pigs act as important amplifying hosts, and therefore je distribution is very significantly linked to irrigated rice production combined with pig rearing. because of the critical role of pigs, je presence in muslim countries is low. crimean-congo hemorrhagic fever (cchf) virus is a bunyavirus causing an acute febrile disease, often with extensive hepatitis resulting in jaundice in some cases. about one-quarter of patients present hemorrhages that can be severe. fatality rates of 7.5-50% have been reported in hospitalized patients. cchf is transmitted by hyalomma ticks to a wide range of domestic and wild animals including birds. human infection is acquired by tick bites or crushing infected ticks, and also by contact with blood or tissue from infected animals, which usually do not become ill but do develop viremia. in addition, nosocomial transmission is possible and is usually related to extensive blood exposure or needle sticks. human cases have been reported from more than 30 countries in africa, asia, southeastern europe, and the middle east. in recent years, an increase in the number of cases during tick seasons has been observed in several countries such as russia, south africa, kosovo, and greece. in turkey, where before 2002 no human cchf cases had been observed, a total of 2,508 confirmed cases, including 133 deaths, were reported between 2002 and june 2008. the emergence of cchf has been associated with factors such as climatic features (temperature, humidity, etc.), changes of the vector population, geographical conditions, flora, wildlife, and the animal husbandry sector. rift valley fever (rvf) is a mosquito-borne bunyavirus infection occurring in many parts of sub-saharan africa. it infects primarily sheep, cattle, and goats, and is maintained in nature by transovarial transmission in floodwater aedes mosquitoes. it has been shown that infected eggs remain dormant in the dambos (i.e., depressions) of east africa, hatch after heavy rains, and initiate mosquito-livestock-mosquito transmission, giving rise to large epizootics. remote sensing via satellite can predict the likelihood of rvf transmission by detecting both the ecological changes associated with heavy rainfall and the depressions from which the floodwater mosquitoes emerge. transmission to humans is also possible from direct and aerosol exposure to blood and amniotic fluids of livestock. most human infections manifest themselves as uncomplicated febrile illness, but severe hemorrhagic disease, encephalitis, or retinal vasculitis is possible.
in 1977, rvf was transported, probably by infected camels, to egypt, where it caused major epidemics with several hundred thousand human infections. it has been suggested that introduction of rvf may be a risk to other potentially receptive areas such as parts of asia and the americas. floods occurring during the el niño phenomenon of 1997 in east africa subsequently gave rise to large epidemics and further spread to the arabian peninsula. the most recent epidemics occurred in 2006 and 2007 following heavy rainfalls in kenya, somalia, and sudan, causing several hundred deaths. besides mosquito control, epidemics are best prevented by vaccination of livestock. leishmaniasis, a protozoal disease transmitted by sandflies, has shown a sharp increase in the number of recorded cases and has spread to new endemic regions over the last decade. presently, 88 countries are affected, with an estimated 12 million cases worldwide. there are about 1.5 million new cases of cutaneous and mucocutaneous leishmaniasis, a nonfatal but debilitating disease, with 90% of cases occurring in afghanistan, brazil, bolivia, iran, peru, saudi arabia, and syria. the incidence of visceral leishmaniasis (vl), a disease with a high fatality rate when untreated, is estimated at around 500,000 per year. the situation is further aggravated by emerging drug resistance (table 3.4) and the deadly synergy of vl/hiv coinfection. epidemics usually affect the poorest part of the population and have occurred recently in bangladesh, brazil, india, nepal, and sudan. for many years, the public health impact of the leishmaniases has been grossly underestimated. they seriously hamper socioeconomic progress, and epidemics have significantly delayed the implementation of numerous development programs. the spread of leishmaniasis is associated with factors favoring the vector, such as deforestation, building of dams, new irrigation schemes, and climate changes, but also with urbanization, migration of nonimmune people to endemic areas, poverty, malnutrition, and the breakdown of public health. antimicrobial resistance of epidemiological relevance has emerged as a major problem in the treatment of many infectious diseases (table 3.4). resistance is no longer a problem that predominantly affects the chemotherapy of bacterial infections. it has become increasingly important in parasitic and fungal diseases and, despite the short history of antiviral chemotherapy, it already plays a prominent role in the treatment of hiv infection and other viral diseases. resistance is also a problem in some of the emerging infections and will further complicate their treatment and control. resistance of bacterial pathogens has become a common feature in nosocomial infections, especially in the icu and in surgical wards. currently, the number one problem in most hospitals is s. aureus resistant to methicillin (mrsa, see chapter 22). however, common problems of resistance also extend to other major bacterial pathogens such as enterococci, various gram-negative enteric bacilli, and pseudomonas species. resistance has developed not only to standard antibiotics (e.g., penicillins, cephalosporins, aminoglycosides, macrolides, or quinolones) but also to second-line antibiotics including carbapenems, glycopeptides, and newer quinolones. there is, however, considerable geographic variation.
in 2006, the european antimicrobial resistance surveillance system (earss), a network of national surveillance systems, reported vancomycin resistance rates among enterococci ranging from none in iceland, norway, romania, bulgaria, denmark, and hungary to 42% of enterococcus faecium strains in greece (earss 2006). a surveillance study conducted in united states hospitals from 1995 to 2002 showed that 9% of nosocomial bloodstream infections were caused by enterococci and that 2% of e. faecalis isolates and 60% of e. faecium isolates were vancomycin resistant (wisplinghoff et al. 2004). rates and spectrum of antibacterial resistance of e. coli and other gram-negative enteric bacilli may differ considerably from one hospital to another. in some important pathogens of hospital-related infections, such as klebsiella, enterobacter, and pseudomonas species, resistance to almost all available antimicrobials has been observed. this may complicate the choice of an effective initial chemotherapy considerably. therefore, each hospital has to monitor the epidemiological situation of resistance regularly, at least for the most important bacteria causing nosocomial infections, such as staphylococci, enterococci, gram-negative enteric bacilli, and pseudomonas. even in community-acquired infections, there has been a considerable increase in resistance problems. at present, approximately 15% of pneumococcal isolates in the united states are resistant to penicillin, and 20% exhibit intermediate resistance. the rate of resistance is lower in countries that, by tradition, are conservative in their antibiotic use (e.g., netherlands, germany) and higher in countries where use is more liberal (e.g., france). in hong kong and korea, resistance rates approach 80%. in addition, about one-quarter of all pneumococcal isolates in the united states are resistant to macrolides. this rate is even higher in strains highly resistant to penicillin, and increasingly there is multiresistance against other antibiotics such as cephalosporins. the prevalence of meningococci with reduced susceptibility to penicillin has been increasing, and high-level resistance has been reported in some countries (e.g., spain, united kingdom). although high-dose penicillin is effective in infections with strains of intermediate resistance, most national and international guidelines recommend broad-spectrum cephalosporins such as ceftriaxone as first-line drugs. however, in most developing countries, penicillin and chloramphenicol are the only affordable drugs. in recent years, certain strains of community-acquired s. aureus with resistance to methicillin (cmrsa) have been observed which produce a toxin (panton-valentine leukocidin) that is cytolytic to pmns, macrophages, and monocytes. these strains are an emerging cause of community-acquired cases and outbreaks of necrotic lesions involving the skin or the mucosa, and in some patients also of necrotic hemorrhagic pneumonia with a high case fatality (vandenesch et al. 2003). development of resistance is mainly determined by two factors: (1) the genetic potential of a certain pathogen, i.e., mobile elements such as plasmids, transposons, or bacteriophages, genes coding for resistance, and the mutation rate; and (2) the selection pressure caused by the therapeutic or para-therapeutic application of antimicrobial drugs.
in the hospital these factors are supported by -microbial strains that are highly adapted to this environment (e.g., rapid colonization of patients, resistance to disinfectants), -an increasing percentage of patients who are highly susceptible to infections due to old age, multimorbidity, immunosuppression, extended surgery, and invasive procedures, and -the frequent use of broad-spectrum antibiotics or combinations of antimicrobial drugs. another source of resistant bacteria has been identified in mass animal production and the use of antimicrobials as growth promoters (e.g., the glycopeptide avoparcin, the streptogramin virginiamycin) or as mass treatment in the therapy or the prevention of infections. the inadequate use of antimicrobial drugs is also an important factor responsible for the development of resistance in community-acquired infections. this is especially true in developing countries where only a limited spectrum of antibiotics is available, where shortage of drugs often leads to treatment that is underdosed or too short, and where uncontrolled sale and use of antibiotics is commonplace. as a consequence, resistance of gonococci is extremely frequent in southeast asia, and resistance of salmonella typhi, shigella, and campylobacter to standard antibiotics is common. some of the still effective second-line antibiotics have to be given parenterally or are not available because they are too costly. a typical example of the consequences of insufficient chemotherapy due to lack of compliance and/or unavailability of drugs is the alarming increase in multiresistance and extreme resistance in tb (see chapter 16). resistance is also a problem in parasitic diseases such as malaria (see chapter 21), leishmaniasis, or african trypanosomiasis. plasmodium falciparum developed resistance against all major antimalarial drugs as soon as they were used on a broad scale. resistance had contributed significantly to the increase in malaria-associated morbidity and mortality observed in many endemic areas (wongsichranalai et al. 2002) . a recent report on failures of the new artemisinin combination treatment for p. falciparum malaria at the thai-cambodian border supports fears of the development of resistance to this most promising class of drug at present (dondrop et al. 2009 ). resistance against antiviral drugs has developed almost from the beginning of antiviral chemotherapy (table 3 .4). in the treatment of hiv infection, the risk of development of resistance has been drastically reduced by the combination of several drugs with different mechanisms of action (see chapter 18). however, drug resistance remains the achilles' heel of the highly active antiretroviral therapy (haart) and may be at a considerable risk of expanding haart to the developing world. today, we have to realize that as we develop antimicrobial drugs, microbes will develop strategies of counterattack. antimicrobial resistance occurs at an alarming rate among all classes of pathogens. even in rich countries it causes real clinical problems in managing infections that were easily treatable just a few years ago. in life-threatening infections such as sepsis, nosocomial infections, or falciparum malaria, there is a substantial risk that the initial chemotherapy might not be effective. in addition, the delay caused by inadequate treatment might favor transmission to other people and support the spread of resistant pathogens (e.g., multiresistant tb). 
last but not the least, surveillance and control and the necessity to use expensive second-line drugs or combinations of antimicrobials are enormous cost factors. for developing countries this is a major limitation in the treatment and control of infections caused by resistant agents. so, in many ways, emerging resistance contributes to the emergence of infectious diseases. despite the availability of effective strategies for treatment and prevention, infectious diseases have remained a major cause of morbidity and mortality worldwide. however, the problems associated with infections are due to considerable changes. in industrialized countries the mortality caused by infectious diseases has decreased tremendously during more than 100 years. however, during recent years, both mortality and morbidity associated with infections are increasing again. ironically, this is closely associated with the advances in medicine which have contributed to profound changes in the spectrum of both patients and their infections. advanced age, underlying conditions, and an altered immune response are common features in the seriously infected hospital patient today. immunosuppressive therapy is frequently used to treat neoplastic and inflammatory diseases or to prevent the rejection of transplants. some infections, most notably hiv/aids, cause immunosuppression by itself. in the compromised patient, infections are generally more severe or may be caused by opportunistic pathogens that will not harm the immunocompetent host. antimicrobial treatment is often less effective in these patients and tends to be further complicated by antimicrobial resistance which may manifest itself or develop at a higher frequency in the immunocompromised patient. an increasing percentage of infections are hospital acquired or otherwise health care associated. it is estimated that nosocomial infections affect 1.7 million patients and contribute to approximately 100,000 deaths in us hospitals annually (klevens et al. 2007 ). considering the rising number of elderly and immunocompromised patients, a further increase in severe infections can be predicted. in developing countries, the significance of infectious diseases has remained high for ages and despite the advances in medicine. until now, infections are by far the leading cause of both disability-adjusted life years and life years lost. the reasons are obvious and mostly related to poverty and lack of development causing poor and unhealthy living conditions, inadequate health systems, and lack of resources for prevention and treatment. this is, of course, just an integral part of the general socioeconomic problems of developing countries. however, poor health conditions per se are an important obstacle to development, and infections such as hiv/aids in sub-saharan africa can be a major cause of lack of development, increasing poverty, and political instability. generally, the situation of many developing countries has not improved during the last two decades, and the gap between the first and the third world has increased. however, most of the mortality and morbidity associated with infectious diseases is avoidable. as laid down in the millennium goals, a major task of the world community will be to counteract the imbalance between the industrialized and the developing countries and to find strategies to ensure participation in the progress of modern medicine for all. 
developing countries also carry the main burden of diseases caused by newly emerging and re-emerging infections (tables 3.2 and 3.3). however, the consequences of economic and political crises for emerging infectious diseases are obvious in industrialized countries as well, such as the return of diphtheria and the increase in tb and multiresistant tb after the breakdown of the former soviet union. today, all countries worldwide are affected by emerging infections as well as by emerging antimicrobial resistance. in the age of globalization, travel and transport of people, animals, and goods of all kinds have increased tremendously. as a consequence, infectious agents may travel over long distances and at high speed. this is clearly evident with influenza pandemics, with outbreaks such as the sars epidemic, and with imported cases of viral hemorrhagic fever transmissible from person to person. the spread of antimicrobial resistance and the re-emergence of tb seem less spectacular, but the consequences may be at least as important in the long run. management and control of emerging and re-emerging infectious diseases can differ greatly from disease to disease and have to allow for all relevant factors of the populations at risk and of the specific disease, including the ecology of the agent, its vectors, and its reservoirs. however, some basic principles apply to all situations:
- surveillance
- information and communication
- preemptive planning and preparedness
- provision and implementation of
  • adequate treatment
  • adequate control and prevention
- international cooperation
active and passive surveillance systems with rapid reporting and analysis of data are essential for the early detection of outbreaks, changes in epidemiology, and other events of public health concern (see chapters 8 and 9). however, many resource-poor countries do not have functional surveillance systems. in addition, reporting of infectious diseases may be neglected or delayed because of fears of stigma, of international sanctions including trade and travel restrictions, or of interference with tourism. classical examples are plague and cholera, but recent examples, such as the bse/vcjd crisis in the united kingdom and sars originating from china, also showed undue delays between the first occurrence of cases and information to the public. although in outbreaks of new and unknown diseases it may be difficult, or even impossible, to predict or assess the magnitude of the problem and the potential consequences, timely and adequate information and communication is not only obligatory according to international regulations but also the best strategy to avoid rumors, misbeliefs, panic, or disregard. in recent years, many countries have installed national plans of action for important epidemiological scenarios and outbreaks such as pandemic influenza, bioterrorism, import of viral hemorrhagic fevers transmissible from person to person, sars, and comparable diseases or outbreaks. all member states of the world health assembly that have so far not been able to install functional surveillance and/or pre-emptive planning are obliged to do so within a maximum of 5 years after their ratification of the new international health regulations (who 2005).
preparedness not only means surveillance and planning but also has to include the provision of facilities to adequately treat and, if necessary, to isolate patients with infectious diseases of public health importance and relevant epidemic potential and/or at risk of transmission to other persons, including health-care workers. task forces and high-containment facilities, for both laboratory diagnostic services and the treatment of patients using barrier nursing, have been set up in several countries. however, all health facilities of a certain level, such as general hospitals, should be prepared by their organization and structure to treat patients with infections of public health relevance, such as multiresistant tb, under appropriate isolation and barrier nursing conditions. this also applies to hospitals in resource-poor countries. adequate training of health-care workers and strict management have been effective in controlling outbreaks of highly contagious infections within rural african hospitals lacking sophisticated technical equipment (cdc 1998). strategies for control and prevention may be quite different for the various emerging infections. effective vaccinations are available only for some infections and are usually lacking for newly emerging infections (table 3.5). for the majority of emerging infections, control and prevention have to rely on information, education and exposure prophylaxis, interruption of transmission by vector control and control of reservoir hosts (e.g., rodents), and case finding with early diagnosis and treatment. for diseases and outbreaks caused by infections of public health relevance that are transmissible from person to person, containment procedures, including isolation and treatment of patients under conditions of barrier nursing as well as tracking and surveillance of contacts, are warranted by national and international health regulations. here, international cooperation is essential to successfully contain outbreaks and epidemics such as the sars epidemic in 2003. despite dramatic progress in their treatment and prevention, infectious diseases are still of enormous global significance, with tremendous economic and political implications. emerging and re-emerging infectious diseases as well as emerging antimicrobial resistance are major challenges to all countries worldwide. for the management of current and future problems, it will be most important to counteract the imbalance between the industrialized world, the new economies, and the developing countries, and to react adequately and in a timely manner to new threats on a global scale.
references (titles as extracted):
- a new type of papillomavirus dna, its presence in genital cancer and in cell lines derived from genital cancer
- world health organization: infection control for viral haemorrhagic fevers in the african health care setting
- nipah virus: a recently emergent deadly paramyxovirus
- helicobacter and gastric carcinoma. serum antibody prevalence in populations with contrasting cancer risks
- effects of climate change on the incidence of tick-borne encephalitis in the czech republic in the past two decades
- artemisinin resistance in plasmodium falciparum malaria
- identification of a novel coronavirus in patients with severe acute respiratory syndrome
- a papillomavirus dna from a cervical carcinoma and its prevalence in cancer biopsy samples from different geographic regions
- susceptibility results for e. faecium isolates
- lyme-borreliose in einem europäischen endemiegebiet: antikörperprävalenz und klinisches spektrum
- hantavirus outbreak
- global trends in emerging infectious diseases
- hepatitis e virus and chronic hepatitis in organ-transplant recipients
- estimating health care-associated infections and deaths in u.s. hospitals
- nipah virus encephalitis outbreak in malaysia
- lyme borreliosis in europe: influences of climate and climate change, epidemiology, ecology and adaptation measures. who regional office for europe
- altitudinal distribution limit of the tick ixodes ricinus shifted considerably towards higher altitudes in central europe: results of three years monitoring in the krkonose mts
- epidemiology of helicobacter pylori infection 1999: results of a household-based seroepidemiological survey
- outbreak of west nile virus infection
- novel chikungunya virus variant in travelers returning from indian ocean islands
- isolation of a cdna from the virus responsible for enterically transmitted non-a, non-b hepatitis
- infection with chikungunya virus in italy: an outbreak in a temperate region
- waldarbeiter-studie berlin-brandenburg 2000 zu zeckenübertragenen und andere zoonosen
- risikofaktoren für lyme-borreliose: ergebnisse einer studie in einem brandenburger landkreis
- übertrifft die infektionszahlen der vorjahre zahl der hantavirus-erkrankungen erreichte 2007 in deutschland einen neuen höchststand
- prevalence and determinants of helicobacter pylori infection in preschool children: a population-based study from germany
- hantaviruses: a global disease problem
- safety and efficacy of a recombinant hepatitis e vaccine
- tick-borne diseases in the united states
- the national creutzfeld-jakob disease surveillance unit (ncjdsu)
- marburg virus infection detected in a common african bat
- community-acquired methicillin-resistant staphylococcus aureus carrying panton-valentine leukocidin genes: worldwide emergence
- multiple exposures during a norovirus outbreak on a river-cruise sailing through europe
- toxin production by an emerging strain of clostridium difficile associated with outbreaks of severe disease in north america and europe
- prevalence of borrelia burgdorferi antibodies in hamburg blood donors
- nipah virus outbreaks in bangladesh
- revision of the international health regulations
- nosocomial bloodstream infections in us hospitals: analysis of 24 179 cases from a prospective nationwide surveillance study
- large outbreak of norovirus: the baker who should have known better
- epidemiology of drug-resistant malaria
- nipah virus infection in bats (order chiroptera) in peninsular malaysia
key: cord-018116-99z6ykb2 authors: healing, tim title: surveillance and control of communicable disease in conflicts and disasters date: 2009 journal: conflict and catastrophe medicine doi: 10.1007/978-1-84800-352-1_13 sha: doc_id: 18116 cord_uid: m19od0wj
tim healing
• to describe the principles of health surveillance in conflict and disaster situations
• to assist in organizing a health surveillance system in conflict and disaster situations
• to describe the principles of control of communicable diseases in conflict and disaster situations
• to assist in organizing a response to outbreaks and epidemics
• to introduce the challenges associated with health surveillance and communicable diseases in conflict and disaster situations
there are five fundamental principles for the control of communicable disease in emergencies: • rapid assessment -identify and quantify the main disease threats to the population and determine the population's
health status • prevention -provision of basic health care, shelter, food, water, and sanitation • surveillance -monitor disease trends and detect outbreaks • outbreak control -control outbreaks of disease. involves proper preparedness and rapid response (confirmation, investigation, implementation of controls) • disease management -prompt diagnosis and effective treatment rapid assessment has been dealt with elsewhere in this book as have the prevention aspects of disease control (adequate shelter, clean water, sanitation, and food, together with basic individual health care). this chapter therefore covers surveillance, outbreak/epidemic control, and public health aspects of disease management. the topics are dealt with in general terms. more details can be found in references. disasters, particularly conflicts, by damaging or destroying the infrastructures of societies (health, sanitation, food supply) and by causing displacement of populations, generally lead to increased rates of disease. outbreaks and epidemics are not inevitable in these situations and are relatively rare after rapid-onset natural disasters, but there is a severe increase in the risk of epidemics during and after complex emergencies involving conflict, large-scale population displacement with many persons in camps and food shortages. in most wars more people die from illness than from trauma. preventing such problems, or at least limiting their effects, falls on those responsible for the health care of the population affected by the emergency. they must be able to • assess the health status of the population affected and identify the main health priorities • monitor the development and determine the severity of any health emergency that develops (including monitoring the incidence of and case fatality rates from diseases, receiving early warning of epidemics and monitoring responses) at first sight, undertaking public health activities in emergencies, especially in conflicts, may seem to be difficult or impossible. the destructive nature of warfare may prevent or inhibit the provision of adequate food and shelter, of clean water and sanitation and vaccination programs. despite the difficulties that warfare imposes, it is generally possible to undertake at least limited public health programs, including disease surveillance and control activities. in other types of disaster public health activities may be expected to be less affected by the security situation than in a war (although aid workers may be at risk if populations are severely deprived of resources such as food, shelter, water, or cash), and with limited access and damage to communication systems and other parts of the infrastructure assessment, surveillance and control activities can be severely restricted. for example, following the pakistan earthquake late in 2005 access was severely restricted for some time and the urgent need to treat the injured and provide food and shelter meant that the limited transport available was heavily committed. the surveillance and control of communicable disease require data which can be collected in one of three ways: 1. surveillance systems -covering all or at least a significant proportion of the population 2. surveys -in which data are collected from a small sample of the affected population considered to be representative of the whole 3. 
outbreak investigations -in-depth investigations designed to identify the cause of deaths or diseases and identify control measures although the latter two can provide valuable information for disease control and form part of the surveillance process, proper control of disease requires regular monitoring of the overall disease situation, which in turn requires the establishment of a properly designed health surveillance system. it is important therefore that responsibility for surveillance activities is defined at the beginning of planning for an aid mission. generally speaking, a team will be required, including a team leader (often an aid agency health coordinator), who should ideally have surveillance experience, clinical workers, a water and sanitation specialist, and representatives of the local health services and communities. the team may also need clerical, logistic, information technology and communications specialists. the world health organization defines health surveillance as "the ongoing systematic collection, analysis and interpretation of data in order to plan, implement and evaluate public health interventions." data for surveillance must be accurate, timely, relevant, representative, and easily analyzed, and the results must be disseminated in a timely manner to all who need to receive them. in addition the data collected, the methods used for collection and the output must be acceptable to those surveyed (health-care professionals and the population). in emergencies the time that can be given to surveillance by medical personnel is likely to be limited and surveillance activities will be far from the minds of most of those involved. therefore the methods used need to be rapid, practical, and consistent, and while the greatest possible accuracy must be achieved, "the best must not be the enemy of the good." it is necessary to strike a balance between collecting large amounts of information ("what we would like to know") and collecting too little which can lead to an ineffective response. those responsible for establishing surveillance programs must therefore try to determine what is really needed ("what we need to know"). it is better to err on the side of too much than of too little. ideally any existing surveillance system should be used. there is no point in establishing a system if one already exists, unless the existing one is inadequate or inappropriate or has broken down irretrievably. surveillance systems for use in conflict and disaster situations should therefore adhere as far as possible to the criteria given in table 13 .1. notes on these criteria: complexity and inflexibility are incompatible with surveillance systems generally and particularly when operating in emergencies where collection of data may be difficult and where situations can change very fast. defining what you "need to know" will allow you to set up the appropriate data collection methods (questionnaires, sites, etc.) and to design the system so that it can obtain and handle the information required. information that is accurate but out of date is useless for immediate disease control purposes and of little value for forward planning. communications therefore form an integral part of any surveillance system. do not try to overreach when setting up a system. for example, expatriate staff may best be used to recruit local staff for the system and in supervisory activities rather than in collecting data. 
this criterion is certainly a goal to aim for as sustainability must be the target for all aid work. however, there may be situations where an emergency system is needed rapidly and where it cannot readily be integrated into existing systems or be developed as a new long-term system. 6. based on standardized sampling methods the sampling system must use the same data collection methods throughout if data are to be comparable. ideally this should be methods that are internationally agreed and approved. agreement should be sought for the methods from the other agencies on the ground to ensure consistency. without case definitions that are agreed by all parties the likelihood of success of a surveillance system is very low. this is especially so when laboratory support is minimal or absent since clinical case definitions have to be drawn very tightly if different diseases are not to be confused. routine surveillance requires more than material from ad hoc sources. sites such as medical centers (in towns, villages, or refugee camps), hospitals, and/or public health units should be recruited. the more comprehensive the coverage of the system, the more likely is it that the data will be accurate and complete and that problems will not be missed. such coverage can be problematic. the coverage of the different systems that can be used is discussed below. the data collected and the methods used should ideally fit in with systems that are operating or have previously operated in the area. following from criterion 10, if systems are already in existence or in abeyance but revivable then this should be done so as to ensure compliance by local health-care services and continuity of data collection and analysis. existing records are of considerable value for predictive purposes. knowledge of past problems makes it possible to anticipate future trends and problems and allows for early planning decisions. if several health agencies are operating it is essential to ensure collaboration among them in surveillance activities to avoid confusion and duplication of effort. 14. involve collaboration with local services so as to avoid duplication as above, early involvement of local health and surveillance services will reduce workloads and avoid duplication of effort. if those from whom the data are collected, those who are collecting the data, and those who will receive the results are unhappy with the system, the system is unlikely to operate effectively. these criteria can be used to evaluate a plan for a surveillance system and also, with some additions, to evaluate an existing system. however, failure to fulfil all these criteria need not rule out a system. in many emergencies it can be difficult to meet such a wide range of "best case" criteria, and the question that must be asked is whether the proposed system is capable of fulfilling its purposecan it provide sufficiently accurate essential information to those who need it when they need it? the emphasis of an emergency surveillance program may need to be altered as the situation changes especially if a particular item emerges as being of key importance. those running the surveillance program should use the data gathered and a continuous assessment of the general running of the system, to alter the program as required (preferably after consultation with relevant stakeholders). 
when designing health surveillance systems, it is essential to do the following: the population under surveillance may be relatively small and well defined (such as the population of a refugee camp) or a much less defined group such as mobile groups of refugees or idps or the population of a village, town, or region, the size of whose population may be unknown or may be fluctuating because of a disaster. establishment of denominators may therefore be difficult. even refugees or idp camps may present a challenge as, while the size of the population may appear to be (or actually be) stable, its makeup may vary over time because of movements in and out. if the age or sex makeup of the camp alters, the pattern of disease may also alter. both the number of cases detected and the rate of factors such as morbidity or mortality per unit of population are important values needed to inform emergency programs. those responsible for all aspects of health care need to know what numbers of cases are involved so as to ensure adequate provision of services (amounts of medicines, numbers of hospital beds, etc.). however, simple numbers are of little value in assessing trends and patterns since increases or decreases in numbers of cases (or numbers of deaths) may reflect changes in population size (resulting, for example, from population displacement) rather than a trend due to (for example) a particular disease. in addition, several rates (such as the crude mortality rate) are key indicators in defining health emergencies (see below). knowing the demography of the affected population is therefore important and all agencies working in an emergency should agree on and use the same population figures. the essential demographic data needed include the following: • total population size • population structure -overall sex ratio and the sex ratio in defined age groups -population under 5 years old, with age breakdown (0-4 years) -this group has special needs and is usually a key factor in planning the emergency response -age pyramid -ethnic composition and place of origin -number of vulnerable persons (e.g., pregnant and lactating women, members of female-headed households, unaccompanied children, destitute elderly, disabled and wounded persons) at the outset it is therefore important to establish methods to obtain demographic data. often the best that can be managed initially is a rough estimate, but this can usually be refined later. it is helpful to use several methods and cross-check the figures to obtain the best estimate. surrogates of the whole population (such as those attending a clinic) may be the best that can be achieved early on. the ease with which such data can be obtained usually depends on the size and scale of the population under consideration. the demography of a well-run refugee camp is quite easy to obtain but that of a larger area may be much more difficult. a lack of knowledge of the size of a displaced group can be confounded by a lack of knowledge of the size of the resident population. in many countries with poor infrastructures, accurate census data are not available. in some instances tax records may be helpful if these can be obtained. it should be noted that demographic data, especially if they involve refugees and idps, can be politically sensitive and interested parties may place undue weight on any figures that are given. 
ideally, communicable disease surveillance should be nationwide (or at least "affected area wide"), drawing information from a range of health-care centers that cover a sufficient proportion of the population to ensure that the great majority of cases (preferably all) of the relevant conditions are reported. a surveillance system in a refugee or idp camp is effectively a miniature comprehensive system as it is possible to cover the whole population. there are situations where comprehensive surveillance is not possible and these often arise in disasters. damaged access and communications and staff shortages frequently mean that only limited numbers of reporting sites (sentinel sites) can be used. as far as possible these should be chosen to ensure a wide coverage of the area and also to maximize the proportion of the population that is covered. sentinel surveillance systems are inherently less satisfactory than comprehensive systems largely because they provide a much less complete coverage. the calculation of rates can sometimes be difficult or impossible; such systems can be very labour intensive, and important events may be missed. both types of system may rely on notification of cases based solely on clinical evidence (and this is the most likely situation in conflicts and disasters at least in the early stages), or may include laboratory verification of some or (preferably) all diagnoses. if there is more than one center involved in establishing the diagnosis (for example, a clinical department, a hospital laboratory, and a reference laboratory) the channels of reporting must be very carefully set up so as to avoid duplicate reporting. surveillance must provide information on key health indicators, which should include the following: the selection of information sought in these categories must be done carefully. it is neither possible nor desirable to monitor everything, especially in the early stages of a disaster response. at that stage (the acute phase) the priority of surveillance is the detection of factors that can have the greatest and most rapid effect on the population. in terms of communicable disease this means diseases that affect large numbers of people and have epidemic potential. in most instances this also means diseases for which effective rapid control measures exist. while gathering data on other largescale disease problems should not be excluded, the main surveillance and control efforts should be aimed where they can do the most immediate good. in the very early stages, only clinical information may be available since laboratory diagnostic services will probably be damaged or simply unavailable. however, this need not be a problem if the medical response is also geared to a syndromic approach. as the situation stabilizes, laboratory support becomes available, and longer term control measures can be supported, the surveillance can become more refined and additional diseases (for example, those which can cause severe morbidity and mortality in the longer term -such as tuberculosis, hiv or aids, and stds) can be added to the list. the main morbidity figures that are routinely sought are as follows: • incidence -the number of new cases of a particular disease reported over a defined period • attack rate (used in outbreaks -usually expressed as percentage) (also called incidence proportion or cumulative incidence) -number of new cases within a specified time period/size of the population initially at risk (×100). 
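since the denominator problem described above recurs throughout the emergency phase, it can help to formalize how the various population estimates are reconciled. the following is a minimal sketch only, assuming hypothetical figures and invented source names; the median-based triangulation and the use of a fixed under-five share (the 17% rule of thumb quoted later in this chapter) are illustrative choices, not an agency standard.

```python
# minimal sketch: reconciling several population estimates and deriving
# planning denominators; all figures and source names are hypothetical.

def triangulate(estimates):
    """return a working population figure from several independent estimates.

    estimates: dict mapping source name -> estimated population size.
    the median is used so that a single outlying source does not dominate.
    """
    values = sorted(estimates.values())
    mid = len(values) // 2
    if len(values) % 2:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

def planning_denominators(total_population, under5_fraction=0.17):
    """derive common denominators; 0.17 is the under-five rule of thumb
    quoted later in this chapter, used only when no age breakdown exists."""
    return {
        "total": total_population,
        "under_5": round(total_population * under5_fraction),
    }

if __name__ == "__main__":
    sources = {
        "camp registration": 24_800,
        "shelter count x average household size": 26_500,
        "food distribution lists": 25_200,
    }
    population = triangulate(sources)
    print(population, planning_denominators(population))
```

whatever figure is adopted, all agencies working in the emergency should agree to use the same one, as the text stresses above.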
(e.g., if 30 per 1,000 persons develop a condition over 2 weeks, the ar/ip/ci is 30/1,000 [3.0%]) • incidence rate -number of new cases per unit of person-time at risk. in the above example, the ir is 15/1,000 person-weeks. (this statistic is useful where the amount of observation time differs between people, or when the population at risk varies with time) • prevalence -the total number of cases of a particular disease recorded in a population at a given time (also called "point prevalence") (nb: prevalence "rate" is the number of cases of a disease at a particular time/population at risk) there are a number of ways of estimating morbidity. health information systems based on health center attendance are the most common but are passive and rely on who presents to the services. other ways of gathering morbidity data include the following: • surveys -in which data are collected from a small sample of the emergencyaffected population deemed to be representative of the whole (or from a particular group for a specific purpose) • outbreak investigations -which entail in-depth investigations designed to identify the cause of deaths or diseases and identify control measures as with disease, changes in numbers of deaths may reflect changes in population size. determination of rates is needed because mortality rate is an important surveillance indicator in an emergency. often the first indication that a problem is developing is an increase in death rate, especially in particular vulnerable groups. all deaths occurring in the community must therefore be recorded. the following indicators can provide the essential information to define the health situation in a population: • crude mortality rate (cmr) is the most important indicator as it indicates the severity of the problem, and changes in cmr show how a medical emergency is developing. cmr is usually expressed as number of deaths per 10,000 persons per day. if the cmr rises above 1/10,000 per day (>2/10,000 per day for young children) an acute emergency is developing and the emergency phase lasts until the daily cmr falls to 1/10,000 per day or below. • age-specific mortality rate (number of deaths in individuals of a specific age due to a specific cause/defined number of individuals of that age/day). in children this is usually given as the number of deaths in children younger and older than 5 years/1,000 children of each age/day). nb: if population data for the under 5s are not available, an estimate of 17% of the total population may be used. • maternal mortality rate. maternal mortality is a sensitive indicator of the effectiveness of health-care systems. a maternal death is usually defined as the death of a woman while pregnant or within 42 days of the termination of the pregnancy (for whatever cause) from any cause related to or aggravated by the pregnancy or its management. the 42-day cut-off is recommended by who but some authorities use a time of up to a year. maternal mortality rate = (number of deaths from puerperal causes in a specified area in a year/number of live births in the area during the same year) × 1,000 (or ×100,000) • cause-specific death rates (case fatality rates -usually given as a percentage). proportion of cases of a specified condition which are fatal within a specified time. case fatality rate = (no. of deaths from given disease in a given period/no. 
of diagnosed cases of that disease in the same period) × 100. (these morbidity and mortality indicators are illustrated in a short worked sketch later in this section.) the following nutritional indicators must be measured:
• prevalence of global acute malnutrition (moderate plus severe malnutrition) in children 6-59 months of age (or 60-110 cm in height): the percentage of children with weight-for-height more than two standard deviations below the median value of a reference population, and/or edema
• prevalence of severe acute malnutrition in children 6-59 months of age (or 60-110 cm in height): the percentage of children with weight-for-height more than three standard deviations below the median value of a reference population, and/or edema
• estimated number of children needing to be cared for in selective feeding programs
• estimated number of additional calories per day provided by selective feeding programs
immunization programs are a vital part of the public health measures undertaken following disasters. for example, measles vaccination is one of the most important health activities in such situations. the need for campaigns may be assessed on the basis of national vaccination records if they exist. in the absence of such records, questioning of mothers may provide the information required, or children or their parents may have written vaccination histories with them (rare). the effectiveness of the programs undertaken can be assessed in defined populations by recording the percentage of children vaccinated. in less well defined populations an assessment of coverage may be made using the numbers of children attending clinics as a surrogate for the population as a whole. items such as water, sanitation, food, and shelter are essential to maintain a healthy population and prevent communicable diseases. depending on the circumstances it may be necessary to monitor these elements in the affected population. indicators such as the number of consultations per day, number of vaccinations, number of admissions to hospitals, and number of children in feeding programs are typically reported. other factors, such as the effectiveness of the supply chain, maintenance of the cold chain, and laboratory activities, may also be surveyed. activities in related sectors such as water and sanitation, shelter, and security may also be included. the major sources of health data will be hospitals and clinics (both national and those established by aid agencies), individual medical practitioners, and other health-care workers. specialized agencies should be able to provide data on particular needs (e.g., food, water, sanitation, and shelter). case definitions are an essential part of surveillance. if the diseases (or syndromes) that are to be covered by the system are not clearly defined, and if the definitions are not adhered to, the results become meaningless: changes from week to week are as likely to be due to changes of definition as to real changes in numbers of cases. this is especially important when laboratory confirmation is not possible. it is therefore important that all agencies working in an emergency agree to and use the same case definitions so that there is consistency in reporting. case definitions must be prepared for each health event, disease, or syndrome. if available, the case definitions used by the host country's moh should be used to ensure continuity of data.
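returning to the morbidity and mortality indicators defined above, the sketch below encodes the attack rate, incidence rate, crude mortality rate, and case fatality rate as given in the text, together with the acute-emergency thresholds for the cmr (1/10,000 per day overall, 2/10,000 per day for young children). the function names and the camp figures in the example are assumptions for illustration; the 30-per-1,000 worked example is the one quoted in the text.

```python
# minimal sketch of the surveillance indicators defined in the text.
# function names are illustrative; thresholds are those quoted above.

def attack_rate(new_cases, population_at_risk):
    """attack rate (cumulative incidence) as a percentage."""
    return 100.0 * new_cases / population_at_risk

def incidence_rate(new_cases, person_time_at_risk):
    """new cases per unit of person-time at risk."""
    return new_cases / person_time_at_risk

def crude_mortality_rate(deaths, population, days):
    """deaths per 10,000 persons per day."""
    return 10_000.0 * deaths / (population * days)

def case_fatality_rate(deaths_from_disease, diagnosed_cases):
    """case fatality rate as a percentage."""
    return 100.0 * deaths_from_disease / diagnosed_cases

def acute_emergency(cmr, cmr_under5=None):
    """true if the acute-emergency thresholds quoted in the text are exceeded:
    cmr > 1/10,000/day, or > 2/10,000/day among young children."""
    return cmr > 1.0 or (cmr_under5 is not None and cmr_under5 > 2.0)

if __name__ == "__main__":
    # worked example from the text: 30 new cases per 1,000 people over 2 weeks.
    print(attack_rate(30, 1_000))           # 3.0 (%)
    print(incidence_rate(30, 1_000 * 2))    # 0.015 per person-week, i.e. 15/1,000 person-weeks
    # hypothetical camp week: 7 deaths in a population of 25,000 over 7 days.
    cmr = crude_mortality_rate(7, 25_000, 7)
    print(round(cmr, 2), acute_emergency(cmr))   # 0.4 per 10,000/day, no acute emergency
```

the same functions can be applied to each weekly surveillance output so that rates and thresholds are reported in a consistent format throughout the emergency.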
several different sets of case definitions already exist, either in generalized form (for example, those produced by the centers for disease control in atlanta) or sets prepared for specific emergencies (e.g., the who communicable disease toolkit for the iraq crisis in 2003). standard case definitions may have to be adapted according to the local situation. it should be noted that such case definitions are designed for the purposes of surveillance, not for use in the management of patients, nor are they an indication of intention to treat the patients. when case definitions based purely on clinical observations are used, each case can only be reported as suspected, not confirmed (see table 13 .2). although lacking precision, such definitions can make it possible to establish the occurrence of an outbreak. samples can subsequently be sent to a referral laboratory for confirmation. once samples have been examined and the causative organism has been identified, a more specific case definition can be developed to detect further cases. visits to surveillance sites and discussions with staff involved will help define the recording and data transmission systems required. the great advances in information technology that have been made in recent years have greatly facilitated the collection, recording, transmission, and analysis of surveillance data, but care must be taken that the systems put in place are appropriate. in areas where electricity supplies are problematical and communications poor it may be better to use a paper recording system and verbal data transmission by radio than a computerized system. data verification is essential for the credibility of a surveillance system. those responsible for surveillance systems must ensure good adherence to case definitions if a symptom-based system is in operation and that laboratory quality control systems operate where appropriate. regular assessments of record keeping and the accuracy of data transfer are required. triangulation of results from several sources can sometimes help to detect anomalies. frequency of reporting will usually depend on the severity of the health situation. in general, daily reporting during the acute phase of an emergency will be needed, although in an acute medical emergency (such as a severe cholera outbreak) even more frequent reporting may be necessary, especially if the situation is fluctuating rapidly. the frequency may reduce to (say) weekly as the situation resolves. who is to analyze the data and how it is to be analyzed must be established at the outset. in a relatively defined area such as a camp, a data analysis session may be the last of the daily activities of the person responsible for surveillance. if record keeping and analysis protocols have been carefully worked out initially this task is not necessarily a large additional burden. surveillance systems that cover larger areas and bigger and more diffuse populations usually rely on a central data collection point where designated staff analyze the data. use of such a system requires good data transmission systems. output is as important as input. collecting data without dissemination of results is a sterile exercise and tends rapidly to demotivate those who are collecting the data. there are some important points to consider: • the results of surveillance must be presented in a readily comprehensible form. 
• surveillance reports should be produced regularly and widely distributed to aid agencies, and to national and international governments and organizations. this will help those involved to understand the overall picture, rather than just that in the area where they are working, and will allow them to take informed decisions about future actions. surveillance systems should be evaluated constantly to ensure that they are working properly, that the data are representative, analysis is appropriate and accurate, and that results are being disseminated to where they are needed. the public health aspects of communicable disease control can be broadly divided into preventive activities (such as vector control and vaccination programs) and the investigation and control of outbreaks and epidemics. experience from many emergencies and disasters has made it possible to identify a number of syndromes or diseases that are most likely to occur in such situations (table 13 .3). this makes it possible to plan activities and interventions on the basis of likely occurrences, even before those involved are present at the scene of the disaster, and to make initial purchases and establish stockpiles of appropriate medicines and equipment. "prevention is better than cure" and proper attention to preventive measures from the earliest stage of the response to the disaster will greatly reduce the risks to the health of the population from infectious disease. a key method of preventing communicable disease is the provision of shelter, adequate amounts of clean water, sufficient safe food, and proper sanitation (latrines and facilities for personal hygiene, clothes washing, and drying). arthropod vectors (mosquitoes, ticks) can be controlled by appropriate spraying programs and also by habitat management (e.g., the removal of places where water can accumulate and mosquitoes breed). provision of bed nets, particularly nets impregnated with insecticide, is effective for reducing infection with agents such as malaria and leishmania. control of rodents, by proper control of rubbish, by rodent proofing food stores, by attention to domestic hygiene and by use of rodenticides, will reduce the risks of transmission of rodent-borne diseases such as plague and lassa fever. medical waste includes laboratory samples, needles and syringes, body tissues, and materials stained with body fluids. this requires careful handling, especially the sharps, as infectious agents such as those causing hepatitis b and c, hiv and aids, and viral hemorrhagic fevers can be transmitted by these materials. used sharps should be disposed of into suitable containers (proper sharps boxes are ideal but old metal containers such as coffee or milk powder tins are adequate). medical waste should ideally be burned in an incinerator. this should be close to the clinic or hospital but downwind of the prevailing wind. a 200-l oil drum can be used for this purpose with a metal grate half way up and a hole at the bottom to allow in air and for the removal of ash. larger-scale and more permanent incinerators can be constructed if necessary. burning pits can be used in emergency. if burning is not possible items should be buried at least 1.5 m deep. this is more suitable than burning for large items of human tissue such as amputated legs. ensure there is no risk of groundwater contamination. a few others, such as malaria and other vector-borne diseases (e.g., typhus and leishmaniasis), are also likely to occur but are region specific. 
tb and hiv or aids can also cause major problems in the longer term this is a complex process involving not just considerations of infection risk but also legal, sociocultural, and psychological factors. there are a number of specialist publications which can be of help. after almost every natural disaster, fear of disease has encouraged authorities to dispose rapidly of the bodies of the dead, often without identifying them, and this sometimes seems almost to take precedence over dealing with the living. however, in sudden impact disasters (such as the indian ocean tsunami in 2004), the pattern and incidence of disease found in the dead will generally reflect those in the living. the situation is much the same in wars and other long drawn out disasters, although these may affect disease patterns and create vulnerable groups. in fact dead bodies pose little risk to health (with some exceptions listed below) since few pathogenic microorganisms survive long after the death of their host. the diseased living are far more dangerous. the decay of cadavers is due mainly to organisms they already contain and these are not pathogenic. those most at risk are those handling the deceased, not the community. the most likely risks to them are as follows: mortuary facilities may need to be provided where the dead can be preserved until appropriate legal proceedings have been undertaken and where relatives, etc., may easily attend to identify and claim the deceased. cold stores and refrigerated vehicles can be used as temporary mass mortuary facilities. alternatively such facilities can be provided in buildings, huts, or tented structures, but refrigeration will be needed. the dead must always be treated with dignity and respect. as far as possible the appropriate customs of the local population or the group to which the deceased belonged should be observed. if the dead have to be buried in mass graves then the layout of the cemetery must be carefully mapped to facilitate exhumation if needed. when an individual may have died of a particularly dangerous infection, then body bags should be used (and also for damaged cadavers). in general, bodies should be buried rather than cremated (as exhumation for purposes of identification may be needed). bodies should be buried at least 1.5 m deep or, if more shallowly, should have earth piled at least 1 m above the ground level and 0.5 m to each side of the grave (to prevent access by scavengers and burrowing insects). disinfectants such as chloride of lime should not be used. new burial sites should be at least 250 m from drinking water sources and at least 0.7 m above the saturated zone. vaccination programs are an essential part of disease prevention. information about existing vaccination programs must be obtained during the assessment process and this should include information from external assessors (e.g., who, unicef, ngos) as to the effectiveness of the vaccination programs that have been undertaken in the past. it cannot be assumed that simply because children have received vaccines that these vaccines were effective. measles kills large number of children in developing countries and is one of the greatest causes of morbidity and mortality in children in refugee and idp camps. mass vaccination of children between the ages of 6 months and 15 years should be an absolute priority during the first week of activity in humanitarian situations and can be conducted with the distribution of vitamin a. 
a system for maintaining measles immunization must be established once the target population has been covered adequately in the initial campaign. this is necessary to ensure that children who may have been missed in the original campaign, children reaching the age of 6 months, and children first vaccinated at the age of 6-9 months who must receive a second dose at 9 months of age are all covered. some of the children vaccinated during such a mass campaign may have been vaccinated before. this does not matter and a second dose will have no adverse effect. it is essential to ensure full coverage against measles in the population. other epi vaccinations for children are not generally included in the emergency phase because they can only prevent a minor proportion of the overall morbidity and mortality at that stage. however, should specific outbreaks occur then the appropriate vaccine should be considered as a control measure. vaccination programs require the following: • appropriate types of vaccines. • appropriate amounts of these vaccines. • equipment (needles, syringes, sterilization equipment, sharps disposal). emergency immunization kits, including cold chain equipment, are available from a number of sources, including unicef and some ngos (e.g. msf). • logistics (transport, cold chain). • staff: a vaccination team may be quite large. it must include the following personnel: -a supervisor. -logistics staff. -staff to prepare and administer vaccines. -record keepers. -security staff (to maintain order and control crowds) may also be needed. maintenance of the cold chain is particularly important. this is the system of transporting and storing vaccines within a suitable temperature range from the point of manufacture to the point of administration. the effectiveness of vaccines can be reduced or lost if they are allowed to get too cold, too hot, or are exposed to direct sunlight or fluorescent light. careful note should be taken of the conditions needed to transport different vaccines because these can vary. the essential cold chain equipment needed to transport and store vaccines within a consistent safe temperature range includes the following: • dedicated refrigerators for storing vaccines and freezers for ice packs (fridges and freezers powered by gas or kerosene are available as alternatives to electric machines, and solar-powered fridge/freezer combinations specially designed for vaccine storage are also available) • a suitable thermometer and a chart for recording daily temperature readings if possible, vaccines should be stored in their original packaging because removing the packaging exposes them to room temperature and light. check the temperature to ensure the vaccines have not been exposed to temperatures outside the normal storage ranges for those vaccines (see table 13 .4). max. storage time at the different levels: primary, 6 months; region, 3 months; district, 1 month; health center, 1 month; health post, daily usemax. 1 month diluents must never be frozen. freeze-dried vaccines supplied packed with diluent must be stored between +2 and +8°c. diluents supplied separately should be kept between +2 and +8°c vaccines must be kept at the correct temperature since all are sensitive to heat and cold to some extent. all freeze-dried vaccines become much more heat-sensitive after they have been reconstituted. vaccines sensitive to cold will lose potency if exposed to temperatures lower than optimal for their storage, particularly if they are frozen. 
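where a chart of daily temperature readings is kept as described above, checking the chart for excursions outside the +2 to +8°c storage band can be made routine. the sketch below is a minimal illustration, assuming readings are available as simple (label, temperature) pairs; the names and the one-reading alarm rule are assumptions, not a recommended standard.

```python
# minimal sketch: flagging cold-chain temperature readings outside +2 to +8 °c.
# the reading format, names, and alarm rule are illustrative assumptions.

SAFE_MIN_C = 2.0
SAFE_MAX_C = 8.0

def excursions(readings):
    """return the readings that fall outside the safe storage band.

    readings: iterable of (label, temperature_in_celsius) pairs, e.g. taken
    from the daily temperature chart kept with the refrigerator.
    """
    return [(label, temp) for label, temp in readings
            if temp < SAFE_MIN_C or temp > SAFE_MAX_C]

if __name__ == "__main__":
    chart = [("mon am", 4.5), ("mon pm", 5.0),
             ("tue am", 9.2),   # hypothetical excursion above +8 °c
             ("tue pm", 6.1)]
    flagged = excursions(chart)
    if flagged:
        print("check for possible heat or freeze damage:", flagged)
```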
some vaccines (bcg, measles, mr, mmr, and rubella vaccines) are also sensitive to strong light and must always be protected against sunlight or fluorescent (neon) light. these vaccines are usually supplied in dark brown glass vials, which give them some protection against light damage, but they must still be covered and protected from strong light at all times. only vaccine stocks that are fit for use should be kept in the vaccine cold chain. expired or heat-damaged vials should be removed from cold storage. if unusable vaccines need to be kept for a period before disposal (e.g., until completion of accounting or auditing procedures) they should be kept outside the cold chain, separated from all usable stocks and carefully labelled to avoid mistaken use. diluents for vaccines are less sensitive to storage temperatures than are the vaccines with which they are used (although they must be kept cool), but may be kept in the cold chain between +2 and +8°c if space permits. however, diluent vials must never be frozen (kept in a freezer or in contact with any frozen surface) as the vial may crack and become contaminated. when vaccines are reconstituted, the diluent should be at same temperature as the vaccine, so sufficient diluent for daily needs should be kept in the cold chain at the point of vaccine use (health center or vaccination post). at other levels of the cold chain (central, provincial, or district stores) it is only necessary to keep any diluent in the cold chain if it is planned to use it within the next 24 h. freeze-dried vaccines and their diluents should always be distributed together in matching quantities. although the diluents do not need to be kept in the cold chain (unless needed for reconstituting vaccines within the next 24 h), they must travel with the vaccine at all times, and must always be of the correct type, and from the same manufacturer as the vaccine that they are accompanying. each vaccine requires a specific diluent, and therefore, diluents are not interchangeable (for example, diluent made for measles vaccine must not be used for reconstituting bcg, yellow fever, or any other type of vaccine). likewise, diluent made by one manufacturer for use with a certain vaccine cannot be used for reconstituting the same type of vaccine produced by another manufacturer. some combination vaccines comprise a freeze-dried component (such as hib) which is designed to be reconstituted by a liquid vaccine (such as dtp or dtp-hepb liquid vaccine) instead of a normal diluent. for such combination vaccines, it is again vital that only vaccines manufactured and licensed for this purpose are combined. note also that for combination vaccines where the diluent is itself a vaccine, all components must now be kept in the cold chain between +2 and +8°c at all times. as for all other freeze-dried vaccines, it is also essential that the "diluent" travels with the vaccine at all times. the effectiveness of a vaccination program will need to be assessed. the program can be evaluated both by routinely collected data and, if necessary, by a survey of vaccination coverage. routine data on coverage is obtained by comparing the numbers vaccinated with the estimated size of the target population (and clearly depends on accurate assessment of the latter). a coverage survey requires the use of a statistical technique called a two-stage cluster survey details of which can be found in the appropriate who/epi documents. 
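the routine coverage figure described above is simply the number of children vaccinated divided by the estimated target population, so it depends entirely on the quality of the denominator. a minimal sketch, with hypothetical numbers; the 90% level used here to flag good coverage is the figure mentioned in the next paragraph of the text.

```python
# minimal sketch: routine vaccination-coverage estimate from campaign tallies.
# all figures are hypothetical; accuracy depends entirely on the denominator.

def coverage(children_vaccinated, estimated_target_population):
    """coverage as a percentage of the estimated target population."""
    return 100.0 * children_vaccinated / estimated_target_population

if __name__ == "__main__":
    target = 4_300        # e.g. estimated children aged 6 months to 15 years
    vaccinated = 3_980    # doses recorded by the vaccination teams
    cov = coverage(vaccinated, target)
    print(f"coverage = {cov:.1f}%", "(good)" if cov > 90 else "(investigate)")
```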
information about the effectiveness of the campaign should be obtained from routine surveillance of communicable disease. if, for example, large number of measles cases continue to occur, or there is an outbreak, then data on coverage should be reexamined. if this is shown to be good (over 90%) then the efficacy of the vaccine must be suspected. if the field efficacy is below the theoretical value 85% (for measles vaccine -data on efficacy of other vaccines can be obtained online) then possible causes of a breakdown in the vaccination program must be investigated (failure of the cold chain, poorly respected vaccination schedule). methods for measuring vaccine efficacy can be found in the who/epi literature. mass chemoprophylaxis for bacterial infections such as cholera and meningitis is not usually recommended except on a small scale (for example, the use of rifampicin may be considered to prevent the spread of meningococcal meningitis among immediate contacts of a case), but the difficulties of overseeing such activities and the risks of the development of antibiotic resistance outweigh any benefits that might be gained. the use of chemoprophylaxis for malaria must be undertaken with care. it may be indicated for vulnerable groups of refugees/idps (for example, children and pregnant women) arriving in an endemic area, particularly if they come from a nonmalarious area, but care must be taken to provide drugs to which the local strains of malaria are sensitive. the spread of resistance means that many of the standard drugs are ineffective and the replacements are both costly and may have unwanted side effects. public health education and information activities play a vital role in disease prevention. vaccination programs will not work unless there is acceptance by the public of the necessity for such programs. individuals must be informed as to why these programs are necessary and also where and when they need to take their children for vaccination. such activities are also essential to inform people about particular health programs (for example, feeding programs or vector control programs) and about the steps they can take to protect their health and that of their families (e.g., good hygiene). information can be propagated in many ways: staff who are trained in this type of activity therefore play a key role in disease prevention. heath education also requires transport and equipment (such as video or film projectors, screens, generators, blackboards, etc.). details of the treatment of individuals for various infectious diseases and the facilities needed are covered elsewhere in this book and in many textbooks covering disasters and disease response. in terms of the population aspects of the treatment of disease, important requirements are to ensure that there are • appropriate laboratories (microbiological, parasitological, hematological, biochemical) available to confirm diagnoses and monitor treatment. • adequate supplies of appropriate antimicrobial agents available and the facilities to transport these, store, and distribute them under appropriate conditions (e.g., controlled temperature), together with relevant instruction for use. the provision of laboratory facilities in emergencies is usually limited to basic tests such as those for malaria. more advanced tests, including identification of microorganisms and the determination of antimicrobial sensitivities, require more sophisticated facilities. 
these may be available in the affected country but are unlikely to be operating in the disaster-affected area. it is more likely that specimens will have to be transported to laboratories abroad. collection of specimens requires appropriate equipment. this will include items such as swabs, transport media, needles, syringes, or vacum sampling systems for blood sampling, different blood collection bottles (with and without anticoagulants) and other sterile specimen tubes, and containers for faeces and urine. transporting specimens must be done safely, and packing specimens for shipment requiring specially trained personnel. treatment of disease requires good supplies of appropriate antimicrobial agents. it is important to ensure that the agents chosen are suitable for use in the area. it is common for doctors in affected areas to ask for the latest therapeutic agents. however, these agents, although effective, are often expensive and not part of the normal treatment programs in the region. the local doctors may not therefore be familiar with the use of these agents, nor may laboratories be capable of monitoring their use. it is better to use funds, which are often limited, to supply larger amounts of older (generic) agents. one caveat is the possibility that regular use may have allowed resistance to certain agents to develop in a country. data on this may be available from local surveillance records. antimicrobials should always be supplied with relevant guidelines in a language that can be understood locally. if local laboratories are unable to test microbes for resistance to antimicrobials, isolates or specimens should be sent as soon as possible to appropriate reference laboratories for testing. outbreaks of communicable disease may occur before preventive measures can take effect or because the measures are in some way inadequate or fail. an epidemic is generally defined as the occurrence in a population or region of a number of cases of a given disease in excess of normal expectancy. an outbreak is an epidemic limited to a small area (a town, village, or camp). the term alert threshold is used to define the point at which the possibility of an epidemic or outbreak needs to be considered and preparedness checked. the areas where vaccination campaigns are a priority need to be identified and campaigns started. the term epidemic (outbreak) threshold is used to define the point at which an urgent response is required. this will vary depending upon the disease involved (infectiousness, local endemicity, transmission mechanisms) and can be as low as a single case. infections where a single case represents a potential outbreak include the following: infections where the threshold is set higher, usually based on long-term collection of data, and will vary from location to location, include the following: • human african trypanosomiasis • visceral leishmaniasis a surveillance system that is functioning well should pick up the signs that an outbreak or epidemic is developing and should therefore allow time for measures to be introduced that will prevent or limit the scale of the event. however, this may not always work and it is essential therefore that plans are made to combat outbreaks or epidemics. in addition to the establishment of surveillance, outbreak preparation involves the following: • preparing an epidemic/outbreak response plan for different diseases covering the resources needed, the types of staff and their skills that may be needed and defining specific control measures. 
• ensuring that standard treatment protocols are available to all health facilities and health workers and that staff are properly trained. • stockpiling essential supplies. this includes supplies for treatment, for taking and shipping samples, other items to restock existing health facilities and the means to provide emergency health facilities if required. • identifying appropriate laboratories to confirm cases and support patient management, make arrangements for these laboratories to accept and test specimens in an emergency, and set up a system to ship specimens to the laboratory. • identifying emergency sources of vaccines for vaccine-preventable diseases and make arrangements for emergency purchase and shipment. ensure that vaccination supplies (needles, syringes, etc.) are adequate. make sure the cold chain can be maintained. • identifying sources for other supplies, including antimicrobials, and make arrangements for emergency purchase and shipment. if the number of reported cases is rising, is this in excess of the expected number? ideally work with rates rather than numbers (see above) because (for example) the number of cases in a refugee camp could increase if the number of people in the camp increases without an outbreak occurring. verify the diagnosis (laboratory confirmation) and search for links between cases (time and place). laboratory confirmation requires the collection of appropriate specimens and their transport to an appropriate laboratory. in the case of a limited outbreak this team should be set up by the lead agency with membership from other relevant organizations, including moh, who, other un organizations, ngos, etc. in the case of an epidemic the moh will probably take the lead or may ask who or another un agency to do so. the team will need to include a coordinator, and specialists from the various disciplines needed to control the outbreak. this may include health workers, laboratory staff, water and sanitation, vector control, and health education specialists, representatives of the moh or other local health authorities, representatives of local utilities (e.g., water supply), representatives of the police and/or military, and representatives of the local community. this team should meet at least once a day to review the situation and define the necessary responses. it has additional responsibilities, including implementing the response plan, overseeing the daily activities of the responders, ensuring that treatment protocols are followed, identifying resources (both material and human) to manage the outbreak and obtaining these as necessary, and coordinating with local, national, and international authorities as required. the team should also act as the point of contact for the media. a media liaison officer should be appointed and all media contact should be through this individual. this will allow team members to refer media representatives to a central point and reduce interference with their activities. it will also ensure that a consistent message based on the most complete data is given to the media. the appropriate national authorities should be informed of the outbreak. in addition to their responsibilities to their own population and to any refugees within their borders, they have a responsibility under the revised international health regulations (2005) to report outbreaks of certain diseases. 
these include four diseases regarded as public-health emergencies of international concern: • smallpox • polio (wild-type) in some cases, member states must report outbreaks of additional diseases: cholera, pneumonic plague, yellow fever, viral hemorrhagic fever, and west nile fever, and other diseases that are of special national or regional concern (e.g., dengue fever, rift valley fever, and meningococcal disease). once the diagnosis has been confirmed and the causative organism identified, then there are a number of steps that must be taken in addition to continuing to treat those affected: • produce a case definition for the outbreak. this is primarily a surveillance tool that will reduce the inclusion of cases that are not part of the outbreak and prevent dilution of the focus and activities of the main control effort. • collect and analyze descriptive data by time, person, and place (time and date of onset, individual characteristics of those affected -age, sex, occupation, etc., location of cases). plot the distribution of the cases on a map (can help locate source(s) of an outbreak and determine spread) and plot outbreak curves (which will help estimates of how the outbreak is evolving). • determine the population that is at risk. • determine the number of cases and the size of the affected population. calculate the attack rate. • formulate hypotheses for the pathogen about the possible source and routes of transmission. • conduct detailed epidemiological investigations to identify modes of transmission, vectors/carriers, risk factors). • report results and make recommendations for action. the two main statistical tools used to investigate outbreaks are as follows: • case-control studies in which the frequency of an attribute of the disease in individuals with the disease is compared to the same attribute in individuals without the disease matched in terms of age, sex, and location (the control group) • cohort studies in which the frequency of attributes of a disease is compared in members of a group (for example, those using a particular feeding center) who do or do not show symptoms however the design and methods involved in such studies are often too complex for the austere environment of conflict and disaster. • implement prevention and control measures specific to the disease organism (e.g., clean water, personal hygiene for diarrheal disease) • prevent infection (e.g., by vaccination programs) • prevent exposure (e.g., isolate cases or at the least provide a special treatment ward or wards) • evaluate the outbreak detection and response -were they appropriate, timely, and effective? • change/modify policies and preparedness to deal with outbreaks if required • what activities are needed to prevent similar outbreaks in the future (e.g., improved vaccination programs, new water treatment facilities, public health education, etc.)? • produce and disseminate an outbreak report. the report should include details of the outbreak, including the following: -cause -duration, location, and persons involved -cumulative attack rate (number of cases/exposed population) -incidence rate -case fatality rate -vaccine efficacy (if relevant) (no. of unvaccinated ill − no. of vaccinated ill/no. of unvaccinated ill) -proportion of vaccine-preventable cases (no. of vaccine-preventable cases/no. 
of cases) -recommendations this is an easy-to-use tool which is of great value for handling epidemiological data and for organizing study designs and results, which can be downloaded free of charge from the internet. it is produced by the centers for disease control (atlanta) and is a series of microcomputer programs which can be used both for surveillance and for outbreak investigation and includes features used by epidemiologists in statistical programs, such as sas or spss, and database programs such as dbase. public health action in emergencies caused by epidemics. geneva: who, 1986. cdc atlanta. case definitions for infectious conditions under public health surveillance updated guidelines for evaluating public health surveillance systems epidemiology for the uninitiated communicable disease control in emergencies -a field manual last jm (ed). dictionary of epidemiology medicins sans frontieres. refugee health -an approach to emergency situations geneva: international committee of the red cross sphere project. humanitarian charter and minimum standards in disaster response. geneva: the sphere project key: cord-103291-nqn1qzcu authors: chapman, lloyd a. c.; spencer, simon e. f.; pollington, timothy m.; jewell, chris p.; mondal, dinesh; alvar, jorge; hollingsworth, t. deirdre; cameron, mary m.; bern, caryn; medley, graham f. title: inferring transmission trees to guide targeting of interventions against visceral leishmaniasis and post-kala-azar dermal leishmaniasis date: 2020-02-25 journal: nan doi: 10.1101/2020.02.24.20023325 sha: doc_id: 103291 cord_uid: nqn1qzcu understanding of spatiotemporal transmission of infectious diseases has improved significantly in recent years. advances in bayesian inference methods for individual-level geo-located epidemiological data have enabled reconstruction of transmission trees and quantification of disease spread in space and time, while accounting for uncertainty in missing data. however, these methods have rarely been applied to endemic diseases or ones in which asymptomatic infection plays a role, for which novel estimation methods are required. here, we develop such methods to analyse longitudinal incidence data on visceral leishmaniasis (vl), and its sequela, post-kala-azar dermal leishmaniasis (pkdl), in a highly endemic community in bangladesh. incorporating recent data on infectiousness of vl and pkdl, we show that while vl cases drive transmission when incidence is high, the contribution of pkdl increases significantly as vl incidence declines (reaching 55% in this setting). transmission is highly focal: >85% of mean distances from inferred infectors to their secondary vl cases were <300m, and estimated average times from infector onset to secondary case infection were <4 months for 90% of vl infectors, but up to 2.75yrs for pkdl infectors. estimated numbers of secondary vl cases per vl and pkdl case varied from 0-6 and were strongly correlated with the infector's duration of symptoms. counterfactual simulations suggest that prevention of pkdl could have reduced vl incidence by up to a quarter. these results highlight the need for prompt detection and treatment of pkdl to achieve vl elimination in the indian subcontinent and provide quantitative estimates to guide spatiotemporally-targeted interventions against vl. . pkdl has therefore been recognised as a major 49 potential threat to the vl elimination programme in the isc 50 (10), which has led to increased active pkdl case detection. 
51 nevertheless, the contribution of pkdl to transmission in 52 field settings still urgently needs to be quantified. 53 although the incidence of asymptomatic infection is 4 to 54 17 times higher than that of symptomatic infection in the 55 isc (21), the extent to which asymptomatic individuals con(fig. 1a) . the data from this study are fully de-105 scribed elsewhere (8, 31) . briefly, month of onset of symptoms, 106 treatment, relapse, and relapse treatment were recorded for 107 vl cases and pkdl cases with onset between 2002 and 2010 108 (retrospectively for cases with onset before 2007), and year of 109 onset was recorded for vl cases with onset before 2002. there 110 were 1018 vl cases and 190 pkdl cases with onset between 111 january 2002 and december 2010 in the study area, and 413 112 vl cases with onset prior to january 2002. 113 over the whole study area, vl incidence followed an epi-114 demic wave, increasing from approximately 40 cases/10,000/yr 115 in 2002 to ≥90 cases/10,000/yr in 2005 before declining to 116 <5 cases/10,000/yr in 2010 (fig. 1b) . pkdl incidence fol-117 lowed a similar pattern but lagging vl incidence by roughly 118 2yrs, peaking at 30 cases/10,000/yr in 2007. however, vl 119 and pkdl incidence varied considerably across paras (aver-120 age para-level incidences: vl 18-124 cases/10,000/yr, pkdl 121 0-31 cases/10,000/yr, table s5 ) and time (range of annual 122 para-level incidences: vl 0-414 cases/10,000/yr, pkdl 0-120 123 cases/10,000/yr, fig. s15 ). ú ci = credible interval, calculated as the 95% highest posterior density interval † risk of subsequent vl/asymptomatic infection if susceptible ‡ based on assumed infectiousness § in the absence of background transmission and relative to living directly outside the case household. based on the relative infectiousness of vl and the di erent 151 types of pkdl from the xenodiagnostic data, in the absence 152 of any other sources of transmission, the estimated probability 153 of being infected and developing vl if living in the same 154 household as a single symptomatic individual for 1 month 155 following their onset was 0.018 (95% ci: 0.013, 0.024) for vl 156 and ranged from 0.009 to 0.023 (95% cis: (0.007,0.013)-(0.018, 157 0.031)) for macular/papular pkdl to nodular pkdl. living 158 in the same household as a single asymptomatic individual, 159 the monthly risk of vl was only 0.00037 (95% ci: 0.00027, 160 0.00049), if asymptomatic individuals are 2% as infectious as 161 vl cases. 162 the risk of infection if living in the same household as an 163 infectious individual was estimated to be more than 10 times 164 higher than that if living directly outside the household of an 165 infectious individual (hazard ratio = 12.0), with a 95% ci 166 well above 1 (8.3, 16.7). the estimated spatial kernel (fig. 167 s16 ) around each infectious individual shows a relatively rapid 168 decay in risk with distance outside their household, the risk of 169 infection halving over a distance of 84m (95% ci: 71, 99m). mission. we assess the contribution of di erent infectious 172 groups to transmission in terms of their relative contribu-173 tion to the transmission experienced by susceptible individuals 174 ( fig. 2a and fig. s17 ). the contribution of vl cases was 175 fairly stable at around 75% from 2002 to the end of 2004 176 before decreasing steadily to 0 at the end of the epidemic, 177 while the contribution of pkdl cases increased from 0 in 178 2002 to ≥73% in 2010 (95% ci: 63, 80%) (fig. s17) . 
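To illustrate how the estimates above translate into individual risk, the sketch below evaluates an exponential spatial kernel of the form k(d) = exp(−d/β) (the kernel form used in this analysis), the distance over which relative risk halves (β ln 2, about 84 m for the estimate quoted above), and the conversion of a small monthly infection rate into a probability via 1 − exp(−λ). The numerical values in the code are illustrative stand-ins consistent with the quoted point estimates, not draws from the fitted posterior.

```python
import numpy as np

# Illustrative values consistent with the point estimates quoted in the text,
# not the fitted posterior itself.
beta_m = 84.0 / np.log(2)        # kernel scale giving a risk-halving distance of ~84 m
household_hazard_ratio = 12.0    # within-household vs. directly outside the household

def spatial_kernel(distance_m, beta=beta_m):
    """Exponential decay of relative transmission risk with distance, k(d) = exp(-d/beta)."""
    return np.exp(-distance_m / beta)

def rate_to_probability(monthly_rate):
    """Probability of at least one infection in a month given a constant rate."""
    return 1.0 - np.exp(-monthly_rate)

for d in (0, 50, 84, 300, 500):
    print(f"relative risk {d:>3} m from an infectious individual: {spatial_kernel(d):.3f}")

# for small monthly risks (e.g. ~0.018 next to a VL case) the rate and the
# corresponding probability are nearly identical
print(rate_to_probability(0.018))
```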
only a 179 small proportion of the total infection pressure on susceptible 180 individuals, varying between 9% and 14% over the course of 181 the epidemic, was estimated to have come from asymptomatic 182 and pre-symptomatic individuals. reconstructing the transmission tree. by sampling 1,000 193 transmission trees from the joint posterior distribution of 194 the transmission parameters and the unobserved data (as de-195 scribed in materials and methods), we can build a picture of 196 the most likely source of infection for each case and how infec-197 tion spread in space and time. fig. 3 shows the transmission 198 tree at di erent points in time in part of the south-east cluster 199 of villages. early in the epidemic and at its peak (figures 3a 200 and 3b), most new infections were due to vl cases. towards 201 the end of the epidemic, some infections were most likely due 202 to pkdl cases and there was some saturation of infection 203 around vl cases (fig. 3c) . the inferred patterns of trans-204 mission suggest that disease did not spread radially outward 205 from index cases over time, but instead made a combination 206 of short and long jumps around cases with long durations of 207 symptoms and households with multiple cases. . arrows show the most likely source of infection for each case infected up to that point in time over 1,000 sampled transmission trees, and are coloured by the type of infection source and shaded according to the proportion of trees in which that individual was the most likely infector (darker shading indicating a higher proportion). asymptomatic infections are not shown for clarity. s/a = susceptible or asymptomatic, e = pre-symptomatic, i = vl, r = recovered, d = dormantly infected, p = pkdl (see si text). gps locations of individuals are jittered slightly so that individuals from the same household are more visible. an animated version showing all months is provided in si movie 1. there is considerable heterogeneity in the estimated contriby each vl/pkdl case is typically less than 1 (fig. s19a ). the times after onset of symptoms in the infector at which 336 secondary vl cases become infected are typically longer for 337 pkdl infectors than for vl infectors (fig. 4b) detected after a longer delay than subsequent cases and there 356 will be some delay in mounting a reactive intervention, such 357 as active case detection and/or targeted irs around the index 358 case(s), interventions will need to be applied in a large radius 359 (up to 500m) around index cases to be confident of capturing 360 all secondary cases and limiting transmission. our results demonstrate the importance of accounting for 362 spatial clustering of infection and disease when modelling 363 vl transmission. previous vl transmission dynamic models 364 (23, 41-43) have significantly overestimated the relative con-365 tribution of asymptomatic infection to transmission (as up 366 to 80%), despite assuming asymptomatic individuals are only 367 1-3% as infectious as vl cases, by treating the population 368 as homogeneously mixing, such that all asymptomatic indi-369 viduals can infect all susceptible individuals via sandflies. in 370 reality, asymptomatic individuals do not mix homogeneously 371 with susceptible individuals as they are generally clustered 372 together around or near to vl cases (25, 28), who are much 373 more infectious and therefore more likely to infect suscepti-374 ble individuals around them, even if they are outnumbered 375 by asymptomatic individuals. 
asymptomatic infection also 376 leads to immunity, and therefore local depletion of suscep-377 tible individuals around infectious individuals. hence, for 378 the same relative infectiousness, the contribution of asymp-379 tomatic individuals to transmission is much lower when spatial 380 heterogeneity is taken into account. nonetheless, our results suggest that asymptomatic indi-382 viduals do contribute a small amount to transmission and 383 that they can "bridge" gaps between vl cases in transmission 384 chains, as the best-fitting model has non-zero asymptomatic 385 relative infectiousness. superficially, this appears to conflict 386 with preliminary results of xenodiagnosis studies in which 387 asymptomatic individuals have failed to infect sandflies ac-388 cording to microscopy (44). however, historical (12, 45) and 389 experimental (46) data show that provision of a second blood 390 meal and optimal timing of sand fly examination are criti-391 cal to maximizing sensitivity of xenodiagnosis. these data 392 suggest that recent xenodiagnosis studies (11, 44), in which 393 dissection occurred within 5 days of a single blood meal, may 394 underestimate the potential infectiousness of symptomatic 395 and asymptomatic infected individuals. occurrence of vl in 396 isolated regions where there are asymptomatically infected 397 individuals, but virtually no reported vl cases (27, 47), also 398 seems to suggest that asymptomatic individuals can generate 399 vl cases. however, it is possible that some individuals who de-400 veloped vl during the study went undiagnosed and untreated, 401 and that we have inferred transmissions from asymptomatic 402 individuals in locations where cases were missed. we will in-403 vestigate the potential role of under-reporting in future work. 404 the analysis presented here is not without limitations. as 405 can be seen from the model simulations (fig. s20) , the model is 406 not able to capture the full spatiotemporal heterogeneity in the 407 observed vl incidence when fitted to the data from the whole 408 study area, as it underestimates the number of cases in higher-409 incidence paras (e.g. paras 1, 4 and 12). there are various 410 possible reasons why the incidence in these paras might have 411 been higher, including higher sandfly density, lower initial lev-412 els of immunity, variation in infectiousness between cases and 413 within individuals over time, dose-dependence in transmission 414 (whereby flies infected by vl cases are more likely to create 415 vl cases than flies infected by asymptomatic individuals (22)), 416 where k(d) = e ≠d/is the spatial kernel function that determines 7 . cc-by 4.0 international license it is made available under a author/funder, who has granted medrxiv a license to display the preprint in perpetuity. is the (which was not peer-reviewed) the copyright holder for this preprint . bangladesh (protocol #2007-003) and the centers for disease con • recovered (i.e. treated for primary vl, vl relapse or pkdl, or self-resolved from pkdl, or recovered from asymptomatic upon infection, individuals either develop pre-symptomatic infection with probability pi or asymptomatic infection with probability 1 ≠ pi (see table s2 for values of fixed parameters used in the model 2 of 37 . cc-by 4.0 international license it is made available under a author/funder, who has granted medrxiv a license to display the preprint in perpetuity. 
whereis the rate constant for spatial transmission between infected and susceptible individuals; k(dij) is the spatial kernel function that scales the transmission rate by the distance dij between individuals i and j; " (ø 0) is a rate constant for additional within-household transmission; 1ij is an indicator function for individuals living in the same household, i.e. between an exponentially decaying spatial kernel and a cauchy-type kernel in our previous study (1) (the exponential kernel 73 gave a marginally better fit), we use the exponential kernel here: and asymptomatic individuals, we take the relative infectiousness of pre-symptomatic individuals, h0, to be the same as that of 93 asymptomatic individuals (i.e. h0 = h4). one-hundred and thirty-eight of the 190 pkdl cases underwent one or more examinations by a trained physician to 95 determine the type and extent of their lesions (table s1 ). data from a recent xenodiagnosis study in bangladesh (20) pre-symptomatic -thus, individual j's infectiousness at time t is given by [3] 113 incubation period. following previous work (1), we model the incubation period as negative binomially distributed nb(r, p) 114 with fixed shape parameter r = 3 and 'success' probability parameter p, and support starting at 1 (such that the minimum 115 incubation period is 1 month): we estimate p in the mcmc algorithm for inferring the model parameters and missing data (see below). vl onset-to-treatment time distribution. several vl cases with onset before 2002 have missing symptom onset and/or treatment times (only their onset year is recorded), and may therefore have been infectious at the start of the study period. in order to be able to infer the onset-to-treatment times of these cases, ot õ j = min(r õ j , d õ j ) ≠ i õ j (j = 1, . . . , ni 0 ), in the mcmc algorithm 121 (see below) we model the onset-to-treatment time distribution as a negative binomial distribution nb(r1, p1) and fit to the 122 onset-to-treatment times of all vl cases for whom both onset and treatment times were recorded ( figure s2a ): 124 to obtain r1 = 1.34 and p1 = 0.38 (corresponding to a mean onset-to-treatment time of 3.2 months). assume that all cases not recorded as having immediate recurrence of symptoms su ered treatment relapse and that the time to relapse follows a geometric distribution geom(p4) with pmf: where fitting to the recorded gaps gives p4 = 0.13 (corresponding to a mean time to relapse of 7.9 months). relapse cases are 156 assumed to be uninfectious from their treatment month to their relapse time and their duration of symptoms upon relapse 157 is assumed to follow the same distribution as the onset-to-treatment time for a first vl episode (eq. (5)). we assume all 158 relapse cases were treated for relapse before the end of the study, since the latest treatment time for primary vl in a case that 168 169 while the probabilities of pre-symptomatic or asymptomatic infection in month t given susceptibility up to month t ≠ 1 are, respectively: [12] model for initial status of non-symptomatic individuals. as there was transmission and vl in the population before the start the probabilities of each non-symptomatic individual initially present (i.e. 
with vj = 0) being susceptible, asymptomatically infected, or recovered from asymptomatic infection at time t = 0 can then be found by calculating the probability of avoiding infection in every month from their birth to the start of the study, summing over the probabilities of being infected in one of the months between their birth and the start of the study and recovering after the start of the study, and summing over the 6 of 37 probabilities of being infected in a month before the start of the study and recovering before the start of the study, respectively: ps 0 (aj) := p(aj > 0, rj > 0) = e ≠⁄ 0 a j [13] pa 0 (aj) := p(aj ae 0, where aj is the age of individual j in months at t = 0. since we assume that non-symptomatic individuals who are born, or 183 who immigrate into the study area, after the start of the study (with vj > 0) are susceptible, for notational convenience we 184 define the probabilities for these individuals as ps 0 (aj) = 1, pa 0 (aj) = pr 0 (aj) = 0. we estimate the historical asymptomatic infection rate, ⁄0, by fitting the model to age-prevalence data on leishmanin skin 186 test (lst) positivity amongst non-symptomatic individuals from a cross-sectional survey of three of the study paras conducted 187 in 2002 (28) (see figure s4 ). we assume that entering state r corresponds to becoming lst-positive, as lst positivity is 188 a marker for durable, protective cell-mediated immunity against vl (28, 29), and estimate ⁄0 by maximising the binomial with these definitions, the complete data likelihood for the augmented data z = (y, x) given the model parameters ◊ = (-, -, ', ", p) is composed of the products of the probabilities of all the di erent individual-level events over all months: the joint posterior distribution of the model parameters ◊ = (-, -, ', ", p) and the missing data x given the observed data y [17] 223 we do this using a metropolis-within-gibbs mcmc data augmentation algorithm in which we iterate between sampling from 224 the conditional posterior distribution of the parameters given the observed data and the current value of the missing data, t=v j qj(t) + qj(t + 1) is a normalising constant to account for the fact that we know that j was not preand ", which are non-negative, since there is little information available with which to construct informative priors (table s3) . the mean of the prior distribution foris chosen as 100m based on our previous findings (1) . a beta distribution, beta(a, b) , 266 is chosen as a conjugate prior for the incubation period parameter p, since it is a probability (p oe where -= (-, -, ', "), so p can be updated e ciently in the mcmc by drawing from this full conditional distribution rather than using a random walk metropolis-hastings update. 1 ≠ aj,0) for rj,0 = t + 1. by repeating the following steps. note that throughout the following we suppress notation of conditional dependencies in the 320 likelihood terms where they are obvious to maintain legibility. the algorithm also accounts for the fact that some individuals 321 were born or migrated or died during the study when updating the unknown pre-symptomatic infection times and asymptomatic 322 infection and recovery times (using the birth/migration/death times as bounds on the proposed unobserved times), but we 323 omit these details from the following description for simplicity. (b) accept the infection time move with probability where (c) i. if aj = 0: if rj = 0, if rj > 0. ii. if aj = t + 1: a. 
if a õ j = t + 1, then r õ j = t + 1, so accept immediately as the likelihood does not change. step 4(c)ic, except with . 365 iii. if aj oe [1, t ]: a. if a õ j = 0, follow step 4(c)ia, but with q replaced by step 4(c)ib except with step 4(c)ic but with 6. update missing treatment times of vl cases during the study: 382 update the treatment time of the vl case whose treatment time is missing but whose onset time is known, conditional 383 on the treatment time being before their pkdl onset: 384 (a) propose a new treatment time as update the onset and treatment times of all cases who potentially had active vl at the start of the study (t = 1) who n(0, 4) ) " = 0 p1) . update the treatment times of cases who potentially had active vl at the start of the study whose treatment times were 390 not recorded but whose onset times are known, one by one. for each case j: p1) . 9. update whole relapse period of cases missing both relapse and relapse treatment times: 392 update the relapse and relapse treatment times of all vl cases who su ered relapse during the study who are missing step 4 in the above algorithm may appear complicated, but essentially consists of proposing a new asymptomatic infection is the empirical covariance of the last k ≠ f (k) + 1 samples offrom the chain, with the mean of the last k ≠ f (k) + 1 samples; 0 is the initial guess for the covariance matrix, and k0 determines the rate at 442 which the influence of 0 on k+1 decreases (the weight of 0 halves after the first 2k0 iterations). we use k0 = 1000 here. if f (k) = f (k ≠ 1) (i.e. if k is odd with f (k) chosen as above), an additional observation is added to the estimate of the 444 covariance matrix if k is even), the new observation replaces the oldest it has been shown that n (k , 2.38 2 /n -), where is the covariance matrix of the posterior distribution, is the optimal 453 proposal distribution for rapid convergence and e cient mixing of the mcmc chain for symmetric product-form posterior 454 distributions as nae oe, and leads to an acceptance rate of 23.4% (38, 39) . this corresponds to a scaling of c k = 1 in eq. (23). however, we are in a context with a large amount of missing data, which is strongly correlated with some of the transmission 456 parameters (see parameter estimates below), so the posterior distribution is not symmetric, and this scaling is not optimal. 457 we therefore follow (36) and scale c k adaptively as the algorithm progresses to target an acceptance rate of approximately 458 23.4% for updates to -. we do this by rescaling c k by a factor of x k > 1 every time an acceptance occurs and by a factor of 459 x ‹/(‹≠1) k < 1 every time a rejection occurs such that the acceptance rate ‹ approaches 23.4% in the long run, if proposal is rejected. in order to satisfy the 'diminishing adaptation' condition (40), which is necessary to ensure the markov chain is ergodic and converges to the correct posterior distribution, it is required that c k tends to a constant as k ae oe. so that the adaptation 463 diminishes as k increases, we use the sequence 465 where m0 is the number of iterations over which the scaling factor x k decreases from 2 to 1.5. here, we use m0 = 100. 
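The acceptance-rate-targeting adaptation described above can be sketched in a few lines. The toy example below runs a random-walk Metropolis sampler on a one-dimensional target and rescales the proposal standard deviation by x_k on acceptance and by x_k^{ν/(ν−1)} on rejection, so that the long-run acceptance rate drifts towards ν = 0.234. The particular decreasing form chosen for x_k (equal to 2 at the start and 1.5 after m0 iterations) is an assumption standing in for the sequence lost in extraction, and the toy target density is of course not the study's transmission model.

```python
import numpy as np

def adaptive_rw_metropolis(log_post, x0, n_iter=5000, target_acc=0.234, m0=100, seed=0):
    """Random-walk Metropolis with acceptance-rate-targeting scale adaptation.

    On acceptance the proposal scale is multiplied by x_k > 1, on rejection by
    x_k ** (target_acc / (target_acc - 1)) < 1, so the long-run acceptance rate
    tends towards target_acc; x_k shrinks towards 1 so adaptation diminishes.
    """
    rng = np.random.default_rng(seed)
    x, lp, scale = x0, log_post(x0), 1.0
    samples = np.empty(n_iter)
    for k in range(n_iter):
        x_k = 1.0 + 1.0 / (1.0 + k / m0)          # 2 at k=0, 1.5 at k=m0 (assumed form)
        prop = x + scale * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept
            x, lp = prop, lp_prop
            scale *= x_k
        else:                                      # reject
            scale *= x_k ** (target_acc / (target_acc - 1.0))
        samples[k] = x
    return samples, scale

# toy target: a standard normal density (up to an additive constant)
samples, final_scale = adaptive_rw_metropolis(lambda z: -0.5 * z * z, x0=0.0)
print(final_scale, samples.mean(), samples.std())
```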
model comparison 467 we compare the goodness of fit of models with di erent asymptomatic and pre-symptomatic relative infectiousness (between 0% and 2% of that of vl cases), with and without additional within-household transmission, to test di erent assumptions about how infectious asymptomatic and pre-symptomatic individuals are, using dic (41). dic measures the trade-o between model fit and complexity and lower values indicate better fit. since some variables were not observed, we use a version of dic appropriate for missing data from (42), which is based on the complete data likelihood l(◊; z) = p(y, x|◊). this is equivalent to the standard version of dic for fully observed data except that it is averaged over the missing data: where d(◊) is the deviance (the measure of model fit), given (up to an additive constant dependent only on the data) by . [26] the relative contribution of state x to the infection pressure on the ith vl case at their infection time, i.e. the probability 483 that i's infection source is x (fig. 2b in the main text), is: , x oe {a, e, i, p}. [27] the probability that the ith vl case is infected from the background transmission is ' ⁄i(ei ≠ 1) . reconstructing the epidemic reconstructing the transmission tree. we reconstruct the transmission tree following the 'sequential approach' described 488 in (44). we draw n samples (◊ k , x k ) (k = 1, . . . , n) from the joint posterior distribution from the mcmc, calculate the 489 probability that infectee i was infected by individual j conditional on their infection time ei and uncertainty in the parameter values and missing data (over the posterior distribution). we use n = 1000 here. calculating transmission distances and times. the mean infector-to-vl-infectee distance and mean infector-onset-to-vlinfectee-infection time for each vl and pkdl infector ( figures 4a and 4b in the main text) are calculated from the sample of n transmission trees by averaging the distances and times from each infector to their vl infectees within each tree, and then averaging these quantities over all the trees in which that vl/pkdl case is an infector: where · [33] the absolute contribution of each infectious state to the e ective reproduction number at time t is: where x oe {a, i} denotes the infectious state, and, as described above, in the main text we split the numbers of secondary 517 infections (rj) arising from vl and pkdl for cases that had both. to assess the fit of the model and simulate hypothetical interventions against pkdl, we create a stochastic simulation version of 520 the individual-level spatiotemporal transmission model described above. we follow standard stochastic simulation methodology 521 for discrete-time individual-level transmission models (47), converting infection event rates into probabilities in order to 522 determine who gets infected in each month. we assume that an individual's progression through di erent infection states 523 following infection occurs independently of the rest of the epidemic (i.e. is either governed by internal biological processes or 524 random external processes of detection), which enables the simulation of an individual's full infection history from the point of 525 infection. 
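The "sequential" tree reconstruction just described reduces, for each posterior draw, to normalising each possible infector's contribution to the infectee's hazard in the month of infection and then averaging those weights over draws. The sketch below does this bookkeeping for a handful of synthetic cases with a deliberately simplified hazard (exponential spatial kernel only, no household or background terms); the synthetic onset, recovery and infection months and the kernel-scale draws are invented stand-ins for MCMC output, so this is a schematic of the procedure rather than the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cases, n_draws = 6, 1000
coords = rng.uniform(0, 500, size=(n_cases, 2))                  # metres
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

def pressure(i, j, infect_month_i, onset, recovery, beta):
    """Simplified infection pressure from j on i in i's month of infection."""
    if j != i and onset[j] <= infect_month_i <= recovery[j]:
        return np.exp(-dist[i, j] / beta)
    return 0.0

infector_prob = np.zeros((n_cases, n_cases))
for _ in range(n_draws):
    # in the real analysis these come from the joint posterior; here they are synthetic
    onset = np.array([3, 5, 8, 9, 12, 15]) + rng.integers(-1, 2, n_cases)
    recovery = onset + rng.integers(2, 6, n_cases)
    infect = onset - rng.integers(1, 4, n_cases)                  # latent infection months
    beta = rng.normal(120.0, 10.0)                                # kernel scale draw (m)
    for i in range(1, n_cases):                                   # case 0 is the index case
        w = np.array([pressure(i, j, infect[i], onset, recovery, beta)
                      for j in range(n_cases)])
        if w.sum() > 0:
            infector_prob[i] += w / w.sum()

infector_prob /= n_draws
print(infector_prob.argmax(axis=1)[1:])   # most likely infector of each non-index case
```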
so that we can simulate durations of pkdl infectiousness, we fit a negative binomial distribution nb(r5, p5) to the observed 527 pkdl onset-to-treatment times and onset-to-resolution times for self-resolving pkdl cases in the data: given these pieces of information, the simulation algorithm proceeds as follows: negative binomially distributed (48). the pmf of a size-biased negative binomial random variable x ú corresponding to 542 x ≥ nb is: and assign a pkdl infectious by drawing from cat({h1, h2, h3, hu}, p) . 562 ii. else the individual recovers without developing pkdl, so draw a recovery time: figure 575 s5 and table s4 respectively. based on the deviance distributions and dic values, the best-fitting model is the model with 576 additional within-household transmission and the highest level of relative pre-symptomatic and asymptomatic infectiousness 577 (both 2% as infectious as vl). hence, we focus on the output of this model in the main text and below. and ") and incubation period distribution parameter p for the di erent models are shown in table s4 . the parameter estimates 580 are very similar across the di erent models and vary in the way expected -the spatial transmission rate constantand 581 background transmission rate ' are lower for models with additional within-household transmission (" > 0) and decrease with 582 increasing relative asymptomatic infectiousness h4, and the mode foris slightly larger for models with " > 0 (since a flatter 583 kernel shape compensates for the extra within-household transmission). the posterior distributions for the incubation period 584 distribution parameter p correspond to a mean incubation period of 5.7-6.9 months (95% hpdis (4.8,6.6)-(6.0,7.8) months). the log-likelihood trace and posterior distributions for the parameters for the best-fitting model are shown in figure s6 . the parameters are clearly well defined by the data, as the posterior distributions di er significantly from the weak prior 587 distributions. the corresponding autocorrelation plots are shown in figure s7 . the high degree of autocorrelation evident for all the 589 parameters is due to strong correlation between the transmission parameters and the missing data, in particular between the 590 spatial transmission rate constantand the asymptomatic infection times. figure s8 shows thatis strongly negatively 591 correlated with the mean asymptomatic infection timeā. this is expected since a higher overall transmission rate leads to show that there is some negative correlation betweenand ', -and ', and " and p. these correlations are not surprising: the 603 more transmission that is explained by proximity to infectious individuals (the higher -), the less needs to be explained by the 604 background transmission (the lower '); the flatter the spatial kernel (the larger -), the fewer infections need to be explained by 605 the background transmission; and the more infections are accounted for by transmission within the same household (the higher 606 "), the longer the incubation period (the lower p) needs to be (due to long times between onsets of cases in the same household). the acceptance rate for the transmission parameter updates (step 1 in the mcmc algorithm) was 23. demonstrate that the data augmentation algorithm works as expected. figure s10 shows the incidence curve of vl and pkdl 615 cases for the whole study area and the inferred incidence curve of asymptomatic infections (averaged over the mcmc chain). 
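One detail of the simulation worth making explicit is the size-biased negative binomial used when initialising infection histories: an episode "caught in progress" at the start of observation is more likely to be a long one, so each duration x is re-weighted in proportion to x. A minimal sketch, with illustrative parameters rather than the fitted values, is given below.

```python
import numpy as np
from scipy.stats import nbinom

r, p = 2, 0.3                        # illustrative parameters, not the fitted values
x = np.arange(0, 200)
pmf = nbinom.pmf(x, r, p)

# size-biased pmf: P*(x) proportional to x * P(x)
sb_pmf = x * pmf
sb_pmf /= sb_pmf.sum()

rng = np.random.default_rng(2)
ordinary = nbinom.rvs(r, p, size=10_000, random_state=42)
size_biased = rng.choice(x, size=10_000, p=sb_pmf)

# durations drawn from the size-biased distribution are longer on average,
# as expected for episodes already ongoing at the start of observation
print(ordinary.mean(), size_biased.mean())
```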
the number of asymptomatic infections increases and decreases with the number of vl cases as expected given the assumption 617 that the incidence ratio of asymptomatic to symptomatic infection is fixed. the posterior probabilities that individuals were asymptomatically infected during the study (shown in figure s11 , with is higher). this is as expected given the structure of the model (the decrease in the risk of infection with distance from an 622 infectious individual encoded in the spatial kernel) and the estimates of the transmission parameters. the examples shown in figure s12 demonstrate that non-symptomatic individuals' asymptomatic "infection" time posterior distributions (red). note that asymptomatic "infection" in months 0 and t + 1 = 109, represent asymptomatic infection before the study and no asymptomatic infection before the end of the study, respectively. (a) individual who migrated into a house with an active vl case from outside the study area in month 53 and therefore had a high initial probability of asymptomatic infection, followed by further peaks in asymptomatic infection risk in months 64 and 69 with the pkdl and vl onsets of two other household members in months 63 and 68 respectively. (b) individual born in month 2 with a high probability of having avoided asymptomatic infection for the duration of the study. (c) individual who was 23-years-old at the start of the study with a moderately high risk of having been asymptomatically infected before the study and a small peak in asymptomatic infection risk when a fellow household member had vl onset in month 45. the role of case proximity in transmission of visceral leishmaniasis in a highly endemic village in 649 visceral leishmaniasis in the indian subcontinent: modelling epidemiology and control feasibility of eliminating visceral leishmaniasis from the indian subcontinent: explorations with a set 653 of deterministic age-structured transmission models elimination of visceral leishmaniasis in the indian subcontinent: a comparison of predictions from 655 three transmission models risk factors for kala-azar in bangladesh increasing failure of miltefosine in the treatment of kala-azar in nepal and the potential role of parasite 658 drug resistance, reinfection, or noncompliance single locus genotyping to track leishmania donovani in the indian subcontinent: application in nepal seasonal and nocturnal landing/biting behaviour of phlebotomus 662 argentipes (diptera: psychodidae) studies on seasonal man sandfly (phlebotomus argentipes) contact at night biting rhythm & biting activity of phlebotomid sandflies seasonal distribution of phlebotomine sand flies-vector of visceral leishmaniasis use of rk39 for diagnosis of post kala-azar dermal leishmaniasis in nepal bait preference of phlebotomus argentipes (ann. & 672 brunn.) the importance of systems ecology: implications of vector-based field studies and quantitative analysis of 674 vector control programs spatial distribution of phlebotomus argentipes (diptera: psychodidae) in eastern 677 case study evaluating multispatial resolution remotely sensed environmental evidence and microclimatic 678 relative abundance of phlebotominae sandflies with emphasis on vectors of kala-azar national vector borne disease control programme, guidelines on vector control in kala-azar elimination (year?) 
policy recommendations from transmission modelling for the elimination of visceral leishmaniasis in 683 the indian subcontinent infectiousness of visceral leishmaniasis, post kala-azar dermal leishmaniasis and asymptomatic subjects 685 with their capacity to infect sand fly vectors by xenodiagnosis quantifying the infectiousness of post-kala-azar dermal leishmaniasis towards sandflies clinical and immunological aspects of post-kala-azar dermal leishmaniasis in bangladesh quantification of parasite load in clinical samples of leishmaniasis patients: il-10 level correlates with 692 parasite load in visceral leishmaniasis study of parasite kinetics with antileishmanial drugs using real-time 694 quantitative pcr in indian visceral leishmaniasis relapse after treatment with miltefosine for visceral leishmaniasis is associated with increased infectivity of 696 the infecting leishmania donovani strain monitoring of parasite kinetics in indian post-kala-azar dermal leishmaniasis quantification of the natural history of visceral leishmaniasis and consequences for control. parasites 700 & vectors characterization and identification of suspected counterfeit miltefosine 702 capsules loss of leishmanin skin test antigen sensitivity and potency in a longitudinal study of visceral leishmaniasis 704 in bangladesh age trends in asymptomatic and symptomatic leishmania donovani infection in the indian 706 subcontinent: a review of data from diagnostic and epidemiological studies estimating parameters in stochastic compartmental models using markov chain methods bayesian inference for partially observed stochastic epidemics bayesian analysis for emerging infectious diseases linking time-varying symptomatology and intensity of 715 infectiousness to patterns of norovirus transmission coupling and ergodicity of adaptive markov chain monte carlo algorithms accelerating adaptation in the adaptive metropolis hastings random walk algorithm weak convergence and optimal scaling of random walk metropolis algorithms. the 722 annals appl optimal scaling for various metropolis-hastings algorithms examples of adaptive mcmc deviance information criteria for missing data models the deviance information criterion: 12 years on (with discussion) methods to infer transmission risk factors in complex outbreak data di erent epidemic curves for severe acute respiratory syndrome reveal similar impacts of control 733 measures a bayesian approach to quantifying the e ects of mass poultry vaccination upon the spatial and 735 temporal dynamics of h5n1 in northern vietnam dynamics of the 2001 uk foot and mouth epidemic: stochastic dispersal in a heterogeneous 737 landscape the mathworks, inc julia: a fresh approach to numerical computing julia v1.0.5. the julia project 3 37 2 3 nw 998 830 18 3 24 4 4 nw 2185 1768 197 35 124 22 5 nw 934 774 15 2 22 3 6 nw 1640 1363 94 22 77 18 7 se 604 493 42 7 95 16 8 se 585 496 8 0 18 0 9 se 969 809 33 6 45 8 10 se 701 595 17 3 32 6 11 se 1300 1080 28 9 29 9 12 se 2807 2388 102 25 47 12 13 se 933 816 36 7 49 10 14 se 446 391 23 3 65 9 15 se 660 574 15 3 29 6 16 nw 905 762 26 9 38 13 17 nw 2080 1764 75 13 47 8 18 nw 3212 2653 61 11 26 5 19 nw 774 647 62 18 107 31 total 24781 20798 1018 190 54 key: cord-102850-0kiypige authors: huang, c.-c.; lai, j.; cho, d.-y.; yu, j. 
title: a machine learning study to improve surgical case duration prediction date: 2020-06-12 journal: nan doi: 10.1101/2020.06.10.20127910 sha: doc_id: 102850 cord_uid: 0kiypige predictive accuracy of surgical case duration plays a critical role in reducing cost of operation room (or) utilization. the most common approaches used by hospitals rely on historic averages based on a specific surgeon or a specific procedure type obtained from the electronic medical record (emr) scheduling systems. however, low predictive accuracy of emr leads to negative impacts on patients and hospitals, such as rescheduling of surgeries and cancellation. in this study, we aim to improve prediction of operation case duration with advanced machine learning (ml) algorithms. we obtained a large data set containing 170,748 operation cases (from jan 2017 to dec 2019) from a hospital. the data covered a broad variety of details on patients, operations, specialties and surgical teams. meanwhile, a more recent data with 8,672 cases (from mar to apr 2020) was also available to be used for external evaluation. we computed historic averages from emr for surgeonor procedure-specific and they were used as baseline models for comparison. subsequently, we developed our models using linear regression, random forest and extreme gradient boosting (xgb) algorithms. all models were evaluated with r-squre (r^2), mean absolute error (mae), and percentage overage (case duration > prediction + 10 % & 15 mins), underage (case duration < prediction 10 % & 15 mins) and within (otherwise). the xgb model was superior to the other models by having higher r^2 (85 %) and percentage within (48 %) as well as lower mae (30.2 mins). the total prediction errors computed for all the models showed that the xgb model had the lowest inaccurate percent (23.7 %). as a whole, this study applied ml techniques in the field of or scheduling to reduce medical and financial burden for healthcare management. it revealed the importance of operation and surgeon factors in operation case duration prediction. this study also demonstrated the importance of performing an external evaluation to better validate performance of ml models. it becomes more and more important for clinics and hospitals in managing resources for 2 critical cares during the covid-19 pandemic. statistics show that approximately 60 % 3 of patients admitted to the hospital will need to be treated in the operation room 4 (or) [11] , and the average cost of or is up to 2,190 dollars per hour in the united 5 states [1, 6] . hence, the or is considered as one of the highest hospital revenue 6 generators and accounts for as much as 42 % of a hospital's revenue [6, 10] . based on 7 these statistics, a good or schedule and management is not only critical to patients 8 who are in need of elective, urgent and emergent operations, but is also important for 9 surgical teams to be prepared. owing to the importance of or, improvement of or 10 efficiency has high priority so that the cost and time spent on or is minimized while the 11 utilization of or is maximized to increase surgical case number and patient access [15] . 12 in a healthcare system, numerous factors are involved in affecting or efficiency, for 13 example patient expectation and satisfaction, interactions between different professional 14 specialties, unpredictability during operations, surgical case scheduling and etc [20] . 
15 although the process of or is complex and involves multiple parties, one way to 16 enhance or efficiency is by increasing the accuracy of predicted surgical case duration. 17 over-or under-utilization of or time often leads to undesirable consequences such as 18 idle time, overtime, cancellation or rescheduling of surgeries, which may implement 19 negative impact on the patient, staffs and hospital [21] . in contrast, high efficiency in 20 or scheduling not only contribute to better arrangement for the usage of operating 21 room and resources, it can also lead to cost reduction and revenue increment since more 22 surgeries can be performed. 23 currently, most hospitals schedule surgical case duration by employing estimations 24 from surgeon and/or averages of historical case durations, and studies show that both of 25 these methods have limited accuracy [14, 17] . for case length estimated by surgeons, 26 factors including patient conditions, anesthetic issues might not be taken into 27 consideration. moreover, underestimation of case duration often occurs as surgeon 28 estimations were usually made by leaning towards maximizing block scheduling to 29 account for potential cancellations and cost reduction. furthermore, operations with 30 higher uncertainty and unexpected findings during operation add difficulties and 31 challenges into case length estimation [14] . historic averages of case duration for a 32 specific surgeon or a specific type of operation obtained from electronic medical record 33 (emr) scheduling systems have also been used in hospitals. however, these methods 34 have been shown to produce low accuracy due to large variability and lack of same 35 combination in the preoperative data available on the case that is being performed [25] . 36 in order to improve the predictability, researchers utilized linear statistical models, 37 such as regression, or simulation for surgical duration prediction and evaluation of the 38 importance of input variables [8, 12, 13] . however, a common shortcoming of these 39 studies is that relatively lesser input variables or features were used in their models due 40 to the limitation of statistical techniques in handling too many input variables. similarly, we combined categories for primary surgeon's id, specialty, anesthesia type 101 and room number which had case numbers less than 50 into the category of 'others'. in addition, since operation case duration can be related to the performance of 103 surgeons and surgeons' performance is affected by their working time, we also analysed 104 . cc-by-nc-nd 4.0 international license it is made available under a is the author/funder, who has granted medrxiv a license to display the preprint in perpetuity. (which was not certified by peer review) the copyright holder for this preprint this version posted june 12, 2020. . figure 1 . the workflow of model training for this study. the data set used for model training fall within the time range of jan 1, 2017 to dec 31, 2019. from this data set, about 17 % of the cases were excluded based on these criteria: patients with two or more surgical procedures performed at the same time, emergent and urgent cases, surgeons with age under 28, patients with age younger than 20, pregnant patients, procedure duration longer than 10 hours or less than 10 minutes and cases with missing value. the total number of cases included in the data set for model building was 142,448. 
together, 24 predictor variables were included for predictive model building in this study. these predictors can be categorised into 5 groups: patient, surgical team, operation, facility and primary surgeon's prior events (see table 1).

model development and training: we applied multiple ml methods for operation case duration prediction. operation case duration (in minutes) is the total period from the time the patient enters the or to the time of exiting the or. historic averages of case durations, either surgeon-specific or procedure-specific, from the emr system were used as baseline models for comparison in case duration prediction. at the beginning, we performed multivariate linear regression (reg) to predict operation case duration. however, the distribution of operation case duration was observed to be skewed to the right (fig. 2), so we performed a logarithmic transformation on operation case duration to reduce the skewness. the model built from the log-transformed multivariate linear regression (logreg) outperformed reg on all evaluation indexes. the subsequent ml algorithms were also trained using the log-transformed case duration as the target.

the first ml algorithm that we tested was random forest (rf), a tree-based supervised learning algorithm. rf uses the bootstrap aggregation, or bagging, technique for regression by constructing a multitude of decision trees on the training data and outputting the mean predicted value of the individual trees [19]. the bagging technique is unlikely to over-fit; in other words, it reduces the variance without increasing the bias. tree-based techniques were suitable for our data since they include a large number of categorical variables, e.g. icd code and procedure type, most of which were sparse. the number of trees set in this study is 50. the extreme gradient boosting (xgb) algorithm is the other supervised ml algorithm that was tested for comparison with rf. recently, the xgb algorithm has gained popularity within the data science community due to its ability to overcome the curse of dimensionality as well as to capture interactions between variables [18]. xgb is also a decision tree-based algorithm but is more computationally efficient for real-time implementation than rf. the xgb and rf algorithms differ in how the trees are built.
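assuming data frames train and test that already contain the prepared predictors and a duration_min column, the sketch below shows how the log-transformed regression, a 50-tree random forest and an xgboost model could be fitted in r; the xgboost hyperparameters (nrounds, max_depth, eta) are illustrative guesses rather than the tuned values from the study.

```r
library(randomForest)
library(xgboost)

# hedged sketch: fit logreg, rf (50 trees) and xgb on the log-transformed
# case duration; hyperparameters and data-frame layout are assumptions.
train$log_dur <- log(train$duration_min)
predictors    <- setdiff(names(train), c("duration_min", "log_dur"))

fit_logreg <- lm(log_dur ~ ., data = train[, c("log_dur", predictors)])

# note: randomForest caps categorical predictors at 53 levels, so very
# high-cardinality ids may need further grouping or encoding
fit_rf <- randomForest(log_dur ~ ., data = train[, c("log_dur", predictors)],
                       ntree = 50)

# xgboost expects a numeric matrix, so categorical predictors are one-hot
# encoded; in a real pipeline the factor levels of train and test must match
x_train <- model.matrix(~ . - 1, data = train[, predictors])
fit_xgb <- xgboost(data = x_train, label = train$log_dur,
                   nrounds = 200, max_depth = 6, eta = 0.1,
                   objective = "reg:squarederror", verbose = 0)

# predictions are back-transformed from log-minutes to minutes
x_test   <- model.matrix(~ . - 1, data = test[, predictors])
pred_xgb <- exp(predict(fit_xgb, x_test))
```

in practice the boosting parameters would be chosen by cross-validation, for example with xgb.cv(..., nfold = 5), which matches the 5-fold cross-validation strategy the authors describe below.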
it has been shown that xgb performs better than rf if the parameters are tuned carefully; otherwise it is more likely to over-fit if the data are noisy [3, 9]. we adopted a 5-fold cross-validation strategy to tune the model parameters. a data-splitting strategy was used in the training of all the models to prevent over-fitting. we randomly separated the data into training and testing subsets at a ratio of 4:1. the training data were used to build the different predictive models as well as to extract important predictor variables. the testing data were used for internal evaluation of the models. in addition to internal evaluation, external evaluation of all the models was performed using data from mar 1 to apr 30, 2020. the surgeon- or procedure-specific averages calculated from the emr were also evaluated on the same internal and external testing sets to ensure a fair and uniform comparison across all models. data processing and cleaning as well as model development in this study were performed using r software. the packages "xgboost" and "randomforest" were used to implement the xgb and rf algorithms in r [4, 5].

r^2 is the coefficient of determination; it represents the proportion of the variance of the actual case duration that is explained by the predictor variables in our models. mean absolute error (mae) measures the average error between the actual and predicted case durations.

both baseline average models performed poorly in predicting case duration (table 2). the average model based on a specific procedure, however, had lower percentage underage and overage than the surgeon-specific model. these differences were due to the more extensive procedure classification in the procedure-specific model; even so, the percentage underage was still quite high. since no information other than the durations of past operation cases is taken into consideration in the average models, prediction bias and low accuracy usually result from them. we first fitted the reg model by including all the input variables shown in table 1. a model is considered to be over-fitting when its performance is better on the training set but poor on the testing set. when we log-transformed operation case duration and re-ran the regression model (i.e. logreg), the performance of the logreg model improved and it outperformed the reg model [12, 23]. again, in the logreg model the results of all the evaluation metrics were close for the training, internal and external testing sets, so the model was not over-fitting. although the performance of the logreg model was not bad, its assumption of a linear relationship between the predictors and case duration limits how well it can capture more complex effects. the performance of the xgb model was better than the rf model on the training set but did not improve much over the rf model on the internal and external testing sets. since xgb was more computationally efficient than rf, the xgb model was chosen as the best model and was used in the subsequent analysis.
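a minimal sketch of the evaluation metrics defined above (r-square, mae, and the percentages overage, underage and within); how the 10 % and 15-minute tolerances are combined is our reading of the definitions in the abstract and may differ in detail from the authors' code.

```r
# hedged sketch of the evaluation metrics; the way the 10 % and 15-minute
# tolerances are combined is an assumption based on the definitions in the text.
evaluate <- function(actual, predicted, pct_tol = 0.10, min_tol = 15) {
  r2  <- 1 - sum((actual - predicted)^2) / sum((actual - mean(actual))^2)
  mae <- mean(abs(actual - predicted))

  over   <- actual > predicted * (1 + pct_tol) & actual > predicted + min_tol
  under  <- actual < predicted * (1 - pct_tol) & actual < predicted - min_tol
  within <- !over & !under

  c(r2 = r2, mae = mae,
    pct_overage  = 100 * mean(over),
    pct_underage = 100 * mean(under),
    pct_within   = 100 * mean(within))
}

# example use on the external testing set:
# evaluate(actual = test$duration_min, predicted = pred_xgb)
```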
in addition to the three key metrics, we studied the inaccuracy of the different models using the external testing set. we calculated the total prediction error (in minutes) and the corresponding inaccurate percentage for all the models; the results are reported in table 3.

in fig. 3, we plotted scatter plots of actual versus predicted duration on the external testing set for the surgeon- and procedure-specific average models and the xgb model. a straight line indicating the theoretical perfect relationship, i.e. identical predicted and actual procedure duration, was added as a reference in each scatter plot. the data points of the xgb model were aligned closer to this straight line; therefore, the xgb model showed a higher correlation between predicted and actual duration compared with the two average models. fig. 4 shows the density plots of differences between actual and predicted case durations for the two average models and the xgb model. it clearly demonstrates that the error distribution of the xgb model was narrower and closer to 0. as a result, the xgb model is more accurate than the other models in predicting operation case duration.

figure 4. the density plot of differences between the actual operation case durations and the predicted case durations obtained from the xgb model (light blue) was narrower and centered closer to 0 than the density plots of those obtained from the average models (pink and cyan). in the average models, previous operation case durations, averaged either for a specific surgeon (cyan) or a specific procedure (pink), were used as predictions.

feature importance in the xgb model was assessed with a wfg percentage, a measure of how important a variable is in making a branch of a decision tree purer [5, 22]. a higher wfg percentage indicates that the variable is more important. the top 15 important variables are shown in table 4. one thing worth noting is that 3 of the top 4 important variables are attributed to operation information. moreover, three of the features that we computed from surgeons' data (i.e. the total surgical minutes performed by the surgeon within the last 7 days and on the same day, and the number of urgent and emergent operations prior to the case) were also among the top important variables.
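the study ranks predictors by a wfg percentage; as an approximation, the sketch below inspects variable importance of the fitted xgboost model with xgb.importance(), which reports per-feature gain, cover and frequency and is not necessarily the exact statistic the authors computed.

```r
library(xgboost)

# hedged sketch: per-feature importance from the fitted xgb model; gain is
# used here only as a stand-in for the wfg percentage reported in the paper.
imp <- xgb.importance(feature_names = colnames(x_train), model = fit_xgb)
head(imp[order(-imp$Gain), ], 15)   # top 15 features
```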
accurate prediction of operation case duration is vital in elevating or efficiency and reducing cost. this study not only helps to improve the accuracy of or case prediction, it also has novelty in the following aspects. first, the data set used in this study contained more than 140,000 cases and more than 400 different types of surgical procedures, which sets a new benchmark for volume and diversity; the maximal number of cases used in other studies was in the range of 40,000 to 60,000 [2, 21]. second, or events were modeled as dependent events instead of independent ones. to this end, we extracted additional information from surgeons' data, e.g. previous working time and the number of prior operations performed by the same surgeon. third, we used the most recent surgical cases from mar to april 2020 as external testing data for model evaluation. fourth, though urgent and emergent surgeries were excluded from the data, the number of urgent and emergent operations performed by the same surgeon prior to the case being predicted was included as an input variable to account for its effect on operation case duration.

currently, surgical cases at cmuh are scheduled according to estimates made by primary surgeons. however, surgeon estimates rely heavily on the prior experience of the surgeons, and many factors beyond expectation will not be taken into consideration. since there is no formal record of surgeon estimates, we used averages calculated for a specific surgeon or procedure type on the testing set as our baseline models. the performance of these two average models, as reported in table 2, clearly showed that they were poor at predicting operation case duration. they also tended to under-predict operation case duration, according to the scatter plots of actual versus predicted duration and the density plots of differences between actual and predicted values (see figs. 3 and 4). when the 24 feature variables (table 1) were included in our model development, r^2, mae, and the percentages underage, overage and within improved greatly compared with the baseline models. we applied 15 minutes as the tolerance threshold for percentage underage, overage and within because ± 15 minutes is an acceptable range at cmuh for a booking to be considered accurate. to avoid too stringent a standard and to better compare our outcomes with other studies [2, 24], a tolerance threshold of 10 % was also applied.

by using regression and ml approaches, we were able to decrease the total prediction error (table 3) of operation case durations at cmuh. among all the models, the performance of the xgb model was considered the best because it was more computationally efficient and had the lowest inaccuracy. moreover, even though the evaluation metrics of the rf model were similar to those of the xgb model, the xgb model was still able to reduce the total prediction error from 223,686 to 218,415 minutes; in other words, the xgb model saved more than 5,000 minutes of idle or delay time compared with the rf model. since most ors usually have multiple cases scheduled per day, the total prediction error represents the cumulative effect of the total or cases in the 2-month period of mar to apr 2020. this cumulative effect may eventually reflect a significant financial advantage in scheduling additional operation cases [7]. this would also lead to a significant cost reduction and an increment in revenue because ors are utilized appropriately and efficiently.

it has been reported in past studies that primary surgeons contributed the largest variability in operation case duration prediction compared with other factors attributed to patients [2, 16, 23]. these studies provide evidence and a rationale that more factors relating to the primary surgeon should be added as input variables in the training of ml models. moreover, extensive feature engineering usually improves the quality of a ml model, independent of the modeling technique itself.
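as one hedged way to realize the surgeon-related feature engineering argued for above (and detailed in the next paragraph), the sketch below derives, for each case, the minutes the same primary surgeon operated on the same day and in the preceding 7 days, plus the count of prior urgent or emergent cases that day; the column names, the same-day window for the urgent/emergent count and the quadratic loop are all simplifying assumptions.

```r
# hedged sketch of surgeon-workload features; column names and the same-day
# window for prior urgent/emergent cases are assumptions, and the O(n^2) loop
# is written for clarity rather than speed. these features would be computed
# on the full case log (before elective-only filtering).
surgeon_workload <- function(cases) {
  cases <- cases[order(cases$start_time), ]
  cases$op_date <- as.Date(cases$start_time)
  n <- nrow(cases)
  mins_7d <- mins_day <- prior_urgent <- numeric(n)

  for (i in seq_len(n)) {
    prior <- cases$surgeon_id == cases$surgeon_id[i] &
             cases$start_time < cases$start_time[i]
    last7 <- prior & cases$op_date >= cases$op_date[i] - 7
    today <- prior & cases$op_date == cases$op_date[i]

    mins_7d[i]      <- sum(cases$duration_min[last7])
    mins_day[i]     <- sum(cases$duration_min[today])
    prior_urgent[i] <- sum(cases$priority[today] %in% c("urgent", "emergent"))
  }
  cases$surgeon_mins_7d      <- mins_7d
  cases$surgeon_mins_day     <- mins_day
  cases$surgeon_prior_urgent <- prior_urgent
  cases
}
```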
as a result, in addition to the primary surgeon's identifier, gender and age, we computed the previous working time and number of previous surgeries performed by the same primary surgeon within the last 7 days and on the same day. we also counted the number of urgent and emergent operations performed by the same primary surgeon prior to the case. these variables extracted from the primary surgeon's data were significantly (p < 0.05) correlated with operation case duration (see table 5 in the appendix). the correlation coefficients of these variables also revealed that an operation case duration performed by a primary surgeon may decrease as he or she becomes more familiar with the surgical procedure, but may increase if his or her total surgical minutes are too long. although performing a surgery multiple times on different patients may help a primary surgeon to be more efficient in his or her next operation, long working time may also lead to lethargy and affect the primary surgeon's performance.

in the methodology of data processing, for predictor variables which contained many categories, we grouped categories that had fewer than 50 cases into a category named 'others'. in addition to reducing the data dimensionality of categorical features, this may aid the generalization of our model; it indicates that our model will still be able to predict case duration even for operations that are rare. moreover, our model can be applied to new primary surgeons, who were not included in the training set during model development, by setting their id to 'others' for case duration prediction. however, there is still a need to update our model after a while, for example when the number of operation cases performed by a new primary surgeon has increased beyond a certain threshold. in terms of timing, we recommend updating the model annually by using operation cases performed in the most recent 3 years as training data.
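a small hedged sketch of the point above about scoring operations by new primary surgeons (or any unseen category level) by mapping them to 'others' before prediction; map_to_known() and the new_cases data frame are illustrative, not part of the study's code.

```r
# hedged sketch: at prediction time, map category levels not seen during
# training (e.g. a new primary surgeon's id) to "others".
map_to_known <- function(x, training_levels) {
  ifelse(as.character(x) %in% training_levels, as.character(x), "others")
}

new_cases$surgeon_id <- map_to_known(new_cases$surgeon_id,
                                     levels(train$surgeon_id))
# the same mapping would be applied to specialty, anesthesia type and room
# before building the model matrix and calling predict() on the fitted model
```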
one limitation of this study is that we selected predictor variables which could only be extracted from preoperative data. our ml model still needs to be improved in order to predict surgical case duration dynamically. for example, blood loss during the operation may affect case duration, as an unexpected increase in blood loss may cause surgeons to take longer to complete the surgery. therefore, it would be better if intra-operative data were incorporated during ml model development so that the prediction made by the ml model can be updated during the operation. one common issue in all ml studies predicting operation case duration, including our study, is that the ml models were developed using data from a single site. these ml models have difficulties in generalization, since the surgical teams, facilities and patient populations differ across entities. a model has to be custom made for a given organization using training data containing its patients, procedures, surgeons, medical staff, and the facility itself. as a result, the exact same ml model is not meant to, and will not, perform well when applied to another organization or hospital. the other interesting issue of applying ml or artificial intelligence to operation estimation is that medical technologies evolve quickly; hence, how frequently an ml or artificial intelligence model needs to be updated remains to be answered.

the xgb model was superior in predictive performance compared with the average, reg and logreg models. the total inaccuracy of the predicted outcomes of the xgb model was the lowest among the models developed in this study. although the performance of the rf model was close to that of the xgb model, the xgb model was more computationally efficient than the rf model, taking a shorter time to complete the training process. the coefficient of determination (r^2) of the xgb model built in this study was higher, and its percentages of under- and over-prediction were lower, than those of other ml studies [2, 21, 24]. moreover, this model improves the current or scheduling method, which is based on estimates made by surgeons at cmuh. we propose extracting additional information from operation and surgeons' data to be used as predictor variables for ml algorithm training, since their importance was high in the xgb model. moreover, we validated the model types using an external testing set in addition to the internal testing set split from the original data used in model training. this helped us to validate and test the models in a more stringent and rigorous way. therefore, we suggest that external evaluation should be used as a tool to better validate the predictive power of ml models in the future.

appendix. table 5. correlation coefficients, standard errors, t-values and p-values of the predictor variables extracted from primary surgeons' data. this information was obtained from the log-transformed multivariate regression (logreg) model.
optimization and planning of operating theatre activities: an original definition of pathways and process modeling
improving operating room efficiency: machine learning approach to predict case-time duration
a comparative analysis of xgboost
package 'randomforest': breiman and cutler's random forests for classification and regression
package 'xgboost': extreme gradient boosting
understanding costs of care in the operating room
decrease in case duration required to complete an additional case during regularly scheduled hours in an operating room suite
predicting the unpredictable: a new prediction model for operating room times using individual characteristics and the surgeon's estimate
greedy function approximation: a gradient boosting machine
factors that influence the expected length of operation: results of a prospective study
surgical unit time utilization review: resource utilization and management implications
surgical duration estimation via data mining and predictive modeling: a case study
use of simulation to assess a statistically driven surgical scheduling system
improving predictions of pediatric surgical durations with supervised learning
the surgical scheduling problem: current research and future opportunities
tree boosting with xgboost - why does xgboost win "every" machine learning competition?
newer classification and regression tree techniques: bagging and random forests for ecological prediction
operating room efficiency
improved prediction of procedure duration for elective surgery
decision tree methods: applications for classification and prediction. shanghai archives of psychiatry
surgeon and type of anesthesia predict variability in surgical procedure times
a machine learning approach to predicting case duration for robot-assisted surgery
relying solely on historical surgical times to estimate accurately future surgical times is unlikely to reduce the average length of time cases finish late

the authors would like to thank shu-cheng liu, jhao-yu huang and min-hsuan lu for providing feedback during the progress of this study.

key: cord-022034-o27mh4wz authors: olano, juan p.; peters, c.j.; walker, david h. title: distinguishing tropical infectious diseases from bioterrorism date: 2009-05-15 journal: tropical infectious diseases doi: 10.1016/b978-0-443-06668-9.50124-1 sha: doc_id: 22034 cord_uid: o27mh4wz nan bioterrorism can be defined as the intentional use of infectious agents or microbial toxins with the purpose of causing illness and death leading to fear in human populations. the dissemination of infectious agents with the purpose of attacking livestock and agricultural resources has similar motives. many of the agents that could potentially be used in bioterror (bt) attacks are also responsible for naturally occurring infectious diseases in the tropics. as such, naturally occurring outbreaks must be differentiated from bt attacks for public health, forensic, and security reasons. if a bt attack occurs in tropical underdeveloped countries, owing to their weak public health infrastructure, the public health implications would be even more dramatic than in developed countries. an outbreak of smallpox due to a bt attack would probably require vaccination and mandatory quarantine of millions of people in order to control the outbreak and quell global public unrest.
this chapter will concentrate on selected infectious agents that have the potential to be used as bioterror agents in human populations. the first step in managing the damage from a covert biological dissemination is recognition of the attack and the organism(s). as in most emerging infections, we predict that in bioterrorist attacks the etiological diagnosis will be made by a clinician or pathologist and the recognition of a bioterrorist event will be through geographical and epidemiological anomalies. we have very limited environmental detection capability at this time, and there are no comprehensive pointof-care diagnostics for most of the high-impact bt agents. some diseases such as inhalational anthrax or smallpox may be relatively readily recognized by an alert clinician because of their very distinctive presentation in many cases. however, the leading edge of a bt epidemic may arrive on a pathologist' s doorstep without prior suspicion. for example, individual cases of pneumonic plague as the earliest harbingers of an attack will presumably present as community-acquired pneumonia and probably die without clinical diagnosis. given the short window available for successful treatment, the recognition of these earliest cases is paramount. sartwell 1 has demonstrated empirically that incubation periods follow a log-normal distribution, which results in "front-loading" of cases ( fig. 119-1 ). delay in recognizing the epidemic through reliance on syndromic surveillance or other surrogates will likely result in most of the cases of diseases such as plague and tularemia being well into their disease course and perhaps unsalvageable. 2 bioterrorist events will enlarge our knowledge of tropical diseases. for example, inhalational anthrax and several viral hemorrhagic fevers (vhf) thought to be transmitted mainly by aerosol 3 are under-represented in naturally occurring case series, and a bt attack would provide an opportunity to answer questions about the underlying host factors and pathogenesis. indeed, the extension of the risk population to include children, the elderly, and the immunosuppressed is likely to provide considerable insight into these oftenunderstudied groups. it is also likely that our lack of information about them will challenge our current diagnostic algorithms. in october 2001, anthrax spores were distributed covertly in the u.s. postal service, leading to 22 cases of human anthrax and billions of dollars spent on controlling the potentially devastating effects of a small inhalational anthrax epidemic. 4, 5 this attack was by no means the first intentional attempt to use infectious agents as weapons of terror. ever since the times of the ancient greeks and romans, humans have tried to inflict damage by the use of contagion on other populations. 6, 7 less than 4% of the people or groups responsible for terrorist attacks on human populations take responsibility for their actions. 8 therefore, the use of biological weapons is ideal to conduct covert attacks. in addition, it has been estimated that to kill the same number of human beings with biological weapons as compared to chemical or nuclear weapons, the cost is far less with biological weapons ($2/human casualty) compared with chemical ($2000/ human casualty) and nuclear ($2,000,000/human casualty) weapons. 
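as a brief, hedged illustration of the incubation-period point made earlier in this section (sartwell's log-normal observation and the resulting "front-loading" of cases), the r sketch below simulates incubation times with purely illustrative parameters; the 4-day median and the dispersion factor are arbitrary choices, not estimates for any specific agent.

```r
# hedged sketch: a right-skewed log-normal incubation distribution means most
# cases present early relative to the long tail of a point-source epidemic curve.
set.seed(1)
incubation <- rlnorm(1e5, meanlog = log(4), sdlog = log(2))  # illustrative only
quantile(incubation, c(0.25, 0.50, 0.75, 0.95))
mean(incubation <= mean(incubation))  # more than half of cases occur before the mean
hist(incubation, breaks = 100, main = "simulated incubation periods (days)")
```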
6 hypothetical bt attacks would range from an overt attack of a large city with a bomb containing several kilograms of an agent (weaponized bacteria, viruses, or toxins) to discrete or covert intentional release of the infectious agent through a delivery system, such as spray devices, postal service, ventilation ducts, water supplies, and food supply. based on transmissibility, severity of morbidity and mortality, and likelihood of use (availability, stability, weaponization), potential bt agents are divided into three categories (table 119-1) . this chapter will concentrate on selected agents from categories a and b and on the diagnostic challenges posed by illnesses caused by such agents. table 119 -1 are capable of producing illness under natural circumstances. therefore, the first challenge is to identify the infectious agent responsible for a certain disease correctly, followed by a thorough epidemiologic and microbiologic analysis of the epidemic or outbreak. in some circumstances, the identification of a bt attack would be obvious. a case of smallpox in any human population is an international emergency that would trigger a massive response of the public health systems around the world. sophisticated epidemiological investigations would follow in order to characterize the outbreak, identify the source, and possibly label it "intentional." in other cases, the identification of the outbreak as secondary to intentional dissemination of an infectious agent will require the use of sophisticated epidemiological and molecular tools, especially for diseases endemic to the area where the outbreak occurs. the need to use genetic sequences as markers has spawned a new discipline referred to as microbial forensics, sister to phylogenetics and "molecular epidemiology." differentiation between natural infections and a biological warfare attack rests firstly on disease patterns given by several epidemiological clues. they include presence of disease outbreaks of the same illness in noncontiguous areas, disease outbreaks with zoonotic impact, different attack rates in different environments (indoor versus outdoor), presence of large epidemics in small populations, increased number of unexplained deaths, unusually high severity of a disease for a particular pathogen, unusual clinical manifestations owing to route of transmission for a given pathogen, presence of a disease (vector-borne or not) in an area not endemic for that particular disease, multiple epidemics with different diseases in the same population, a case of a disease by an uncommon agent (smallpox, viral hemorrhagic fevers, inhalational anthrax), unusual strains of microorganisms when compared to conventional strains circulating in the same affected areas, and genetically homogenous organisms isolated from different locations. 9, 10 these are a few guidelines that could prove helpful when investigating an outbreak, but it has to be kept in mind that the deduction will not be based on any single finding but rather the pattern seen in its totality. first and foremost, the possibility of an attack must be ever in mind, or differentiation of a covert bt attack and a natural outbreak of an infectious disease may not be made. in fact, the outbreak of salmonellosis in oregon in 1984 was due to a covert attack planned by the rajneeshee leadership and accompanied by distinctive epidemiological clues. 
it was not labeled as intentional until somebody came forward with the information leading to the responsible group; as in most of medicine, the unsuspected diagnosis is the easiest to miss. 11 an increasing number of public health departments are now acquiring the technology necessary to perform syndromic surveillance. this new method of surveillance is based on syndromic disease rates such as respiratory, gastrointestinal, and neurological syndromes or analysis of other health-related activities such as laboratory test requests and results, purchasing rates for certain pharmaceutical agents, unexplained death rates, and veterinary surveillance. 2, 10, 11 the purpose of syndromic surveillance is to detect a bt attack as early as possible by analyzing the previously mentioned variables by extracting and analyzing data through computer networks. the rationale behind syndromic surveillance is the nonspecific nature of early signs and symptoms of many of the illnesses caused by bt agents. examples of proposed syndromes are as follows: gastroenteritis of any apparent infectious etiology, pneumonia with the sudden death of a previously healthy adult, widened mediastinum in a febrile patient, acute neurologic illness with fever, and advancing cranial nerve impairment with weakness. 12 a key component factors affecting syndromic surveillance include selection of data sources, definition of syndrome categories, selection of statistical detection thresholds, availability of resources for follow-up, recent experiences with false alarms, and criteria for initiating investigations. it must be emphasized that these systems are experimental and not yet of proven value in managing bt attacks. they are expensive, require follow-up confirmation, have unproven sensitivity and specificity, and ultimately depend on the clinician. 2 they may prove to be more useful in managing an event than in expeditiously detecting one. conventional epidemiological investigations are by no means obsolete with the availability of more sophisticated methods to study possible bt attacks. they include the confirmation of an outbreak once it is suspected. confirmation is based in many cases on laboratory analysis of patients' samples or autopsy material. a case definition is constructed to increase objectivity of the data analyzed and to enable determination of the attack rate. other variables are included in the analysis, such as time and place, and an epidemiological curve can be constructed. 10 epidemiological curves are an important tool to analyze epidemics and suggest the mode of transmission and propagation. a point source epidemic curve is classically log-normal in distribution 1 and would suggest a common exposure of a population to an infectious agent. of course, there can be variations depending on the presence of susceptible subpopulations (e.g., children, immunosuppressed, aged) and on varying doses of the agent. propagative curves are more characteristic of highly communicable agents such as smallpox. a short description of selected category a and b agents follows. all these pathogens are addressed as naturally occurring disease agents in other chapters of this book. bacillus anthracis (anthrax) b. anthracis (see chapter 39) is without a doubt the microorganism that has received the most attention as a bt agent due to its high lethality (inhalational form), ease of propagation, and high environmental stability. fortunately, the disease is not transmitted from person to person. 
however, the first three characteristics make it one of the ideal bioweapons. anthrax presents in humans as four different clinical syndromes, depending on the portal of entry: cutaneous (the most common form of the disease resulting from contact with infectious animal products), gastrointestinal and oral/oropharyngeal (both secondary to ingestion of contaminated meat), and inhalational (woolsorter' s disease), secondary to inhalation of spores from the environment. in the event of a bioterror attack, either overt or covert, the clinical presentation of the patients affected by the attack would be that of inhalational anthrax. this form of anthrax is so rare that a single case of inhalational anthrax should raise immediate suspicion, as dramatically demonstrated during the bt attacks in the fall of 2001. [13] [14] [15] during those attacks, 50% of cases were cutaneous anthrax thought to be secondary to handling of anthraxlaced mail envelopes or environmental surface contamination in the presence of minor cutaneous lesions, providing a portal of entry for the spores. 5 an outbreak of inhalational anthrax also took place in sverdlovsk (former soviet union) as a result of an accidental release into the air of b. anthracis spores from a facility producing anthrax for the bioweapons program in the ussr. 5, [16] [17] [18] inhalational anthrax should be suspected clinically in any individual presenting with fever and a widened mediastinum on chest radiograph (due to hemorrhagic mediastinitis). 19, 20 the incubation period is normally 3 to 5 days, but in some cases it can be as short as 2 days and as long as 60 days depending on inoculum and the time of germination of the spore. 17 based on research performed on rhesus monkeys, the ld 50 is estimated to be 8000 to 10,000 spores. [21] [22] [23] however, as few as 1 to 3 spores may be capable of producing a fatal outcome in approximately 1% of those exposed to these quantities. 24 the initial symptoms are nonspecific and consist of fever, malaise, anorexia, fatigue, and dry cough. these symptoms are followed in 3 to 4 days by an abrupt onset of respiratory insufficiency, stridor, diaphoresis, and cyanosis. the subsequent clinical course is rapid, and patients usually die within 24 to 36 hours after clinical deterioration. mortality is 100% without antibiotic therapy. 20, [25] [26] [27] early diagnosis, aggressive treatment with antimicrobial agents to which the bacteria are susceptible, and aggressive supportive therapy decreased the mortality to 40% in the 2001 attacks. 5 pathologic studies performed on the sverdlovsk victims confirmed some of the findings in animal models of inhalational anthrax, such as hemorrhagic lymphadenitis and mediastinitis. however, many patients also developed hematogenous hemorrhagic pneumonia. pleural effusions were usually large and frequently led to severe lung atelectasis. in about half of cases, hemorrhagic meningitis developed, leading rapidly to central nervous system (cns) manifestations terminating in coma and death. 16, 28, 29 yersinia pestis (plague) y. pestis (see chapter 42) is a gram-negative, aerobic, nonsporulating coccobacillus, member of the enterobacteriaceae with a wide host range, including rodents, felines, and humans. 30 the most important reservoirs are urban rats, and its main vector is the rat flea. in rural epizootics, reservoirs include prairie dogs and squirrels in the united states. 31 y. 
pestis has been responsible for some of the most devastating pandemics in human history in the preantibiotic era (6th, 14th, and 19th centuries). 32 public health measures have made this disease a rarity in the united states (around 20 cases/year) and around the world, although approximately 1000 cases are reported to the world health organization (who) every year (countries reporting plague include madagascar, tanzania, and peru, among others). clinical presentation in naturally acquired infections takes five forms, namely bubonic, septicemic, pneumonic, cutaneous, and meningeal. the pneumonic form is the most likely presentation in a case of plague due to a bt attack. it is worth mentioning that plague has already been used as a bt agent when japan dropped thousands of y. pestis-infected fleas over china leading to small outbreaks of bubonic plague in continental china during world war ii. 33, 34 the incubation period for pneumonic plague is short, ranging from 2 to 3 days. it is the rarest form in natural infections (1% or less) but has the highest mortality, reaching 100% in untreated patients. the initial presentation is nonspecific and consists of cough, fever, and dyspnea. cough may be productive (bloody, purulent, or watery in the initial phases). this is followed by a rapid clinical course leading to respiratory failure and the patient' s demise if not treated with antibiotics early in the course of the disease. 30, 31, 35 the factors that led to the severe manchurian pneumonic plague outbreaks in the early 20th century are unknown, but weather, hygiene, and crowding were important factors. more recent outbreaks worldwide and particularly in the united states have been much smaller and readily controlled. pneumonic cases are common in the united states, but secondary transmission has been rare in the last 50 years. modeling of pneumonic transmission using eight small outbreaks to derive the parameters find average of secondary cases per primary case (ro) to be approximately 1.3 prior to any control measures. 36 this is one of the most scientifically neglected microorganisms with bt potential. tularemia is a zoonotic infection caused by a strictly aerobic, gram-negative, nonsporulating small coccobacillus. two subspecies are recognized, namely f. tularensis subspecies holarctica (jellison type b) and f. tularensis subspecies tularensis (jellison type a). 37 type a is by far the more virulent and is present only in north america. of the bacteria with potential as bt agents, f. tularensis has by far the widest host range, including wild and domestic animals, humans, fish, reptiles, and birds. vectors are also numerous and include ticks, fleas, mosquitoes, and biting flies. 37, 38 this is an impressive range for any human pathogen. in contrast to other diseases described in this chapter, tularemia does not have the remarkable history that some of the other pathogens have. in europe, tularemia was first described in 1532; in the united states, it was first described in 1911 in california in the aftermath of the san francisco earthquake. 38 in natural infections, the most common source of infection is a tick bite and manipulation of infected animals such as wild rabbits. six different clinical syndromes have been described as follows: ulceroglandular, glandular, oculoglandular, pharyngeal, pneumonic, and typhoidal. marked overlap exists among all these forms, and for practical purposes two syndromes (ulceroglandular and typhoidal) have been proposed. [39] [40] [41] as a bt agent, f. 
tularensis will most likely cause a disease with a primary pulmonary component with secondary dissemination (typhoidal/systemic). in natural infections, both ulceroglandular and typhoidal forms can have a hematogenous pulmonary component, although it is more common in typhoidal forms. pulmonary features include cough, pleural effusions, and multifocal bronchopneumonic infiltrates. if not treated promptly, patients usually develop adult respiratory distress syndrome leading to respiratory insufficiency and the patient' s demise. case-fatality rate approaches 30% if not treated with appropriate antibiotics. 41 smallpox eradication remains the single most important victory in the war against infectious diseases. smallpox (see chapter 58) is the only disease so far eradicated from the face of the earth due to human intervention. the who declared smallpox eradicated in 1980 after the last case of natural disease was diagnosed in somalia in 1977, 42 and vaccination ceased around the world, rendering humankind vulnerable to reintroduction of the virus. [43] [44] [45] a laboratory accident was responsible for two more cases in 1978 in england. this accident prompted the who to restrict the frozen virus to two places in the world: the cdc in atlanta, georgia, and the institute for polyomyelitis and viral encephalitides in moscow, later moved to npo vector, novosibirisk, russia. however, it is suspected that secret military repositories exist after the fragmentation of the soviet union and the subsequent exodus of scientists involved in its bioweapons program (biopreparat). 46, 47 the agent responsible for this disease is an orthopox virus with no known animal reservoir, but high aerosol infectivity, stability, and mortality. although not a category a agent, monkeypox is responsible for outbreaks in africa and is the only other member of the orthopox genus capable of producing systemic disease in humans. the clinical disease is potentially indistinguishable from smallpox, where mortality rates in tropical africa are around 10% to 15%. in may and june 2003, an outbreak of monkeypox occurred in the united states. 48 thirty-seven infections were laboratory-documented and involved humans exposed to infected prairie dogs that had become infected because of contact with infected gambian rats and dormice, two animal species shipped from africa earlier that year. infected humans included veterinarians, exotic pet dealers, and pet owners. the clinical spectrum in this outbreak ranged from asymptomatic seroconversions to febrile illness with papulovesicular rash. no deaths were associated with this outbreak. however, phylogenetic analysis of the virus placed it in the west africa clade as opposed to the central africa clade which carries the previously mentioned case-fatality rate of 10% to 15%. a single case of smallpox would trigger a massive public health response in order to contain the outbreak. an outbreak in germany in 1970 resulted in 19 cases with 100,000 people vaccinated to contain the infection. in 1972, yugoslavia underwent an epidemic with a total of 175 cases (35 deaths) and a vaccination program that included 20 million people in order to contain the outbreak and obtain international confidence. vaccination with the vaccinia virus (a related orthopox virus) is the most effective way to prevent the disease and can be administered up to 4 days after contact with ill patients. strict quarantine with respiratory isolation for 17 days is also mandatory. 
the newer generation of antivirals that have been developed after the disease was eradicated has never been tested in human populations, but in vitro data and experiments in animal models of poxvirus disease suggest some antiviral activity for the acyclic nucleoside phosphonates such as cidofovir. 49 the only vaccine available in the united states is dryvax, and sufficient doses have been manufactured to cover the entire u.s. population. however, newer vaccines that may have fewer side effects are being developed. the clinical presentation is characteristic. the incubation period ranges from 10 to 12 days. the initial phase is nonspecific, common to other viral syndromes, and is characterized by abrupt onset of fever, fatigue, malaise, and headaches. during this prodromal phase in 10% of patients with fair complexion, a discrete erythematous rash appears on the face, forearms, and hands. the typical smallpox rash has a centrifugal distribution (that is, more abundant on the face and extremities than on the trunk and abdomen). an enanthem also develops with presence of oral ulcerations by the time the exanthem appears. systemic manisfestations begin to subside once the rash appears and can reappear with superinfection of skin lesions or superimposed bacterial bronchopneumonia. progression of the lesions is synchronous (maculopapules, vesicles, pustules). after pustules rupture, scabs form and detach in 2 to 3 weeks, leaving depigmented, scarred areas. this form of the disease, called variola major, is fatal in up to 30% of unvaccinated patients and 3% of vaccinated individuals. various hemorrhagic forms exist. in some cases, the rash progresses very slowly and hemorrhage develops into the base of the lesions, which remain flat and soft instead of tense, carrying a bad prognosis. in some other cases, the disease is hemorrhagic from the beginning, leading to death 5 to 7 days after the initial symptoms appear (case-fatality rate: 100%). finally, in some cases, a severe and overwhelming illness is followed by dusky skin lesions; these patients have a large quantity of virus and are extremely dangerous epidemiologically. previously vaccinated individuals usually develop a milder disease that consists of a mild pre-eruptive phase followed by few skin lesions that appear more superficial, evolve more rapidly, and are not as synchronous as the classical type. 50 viral hemorrhagic fever (vhf; see chapter 65) is caused by a heterogenous group of rna viruses that belong to several different families. the cdc identified filoviruses (ebola and marburg viruses), arenaviruses (lassa, junin, machupo, guanarito, and sabia), and bunyaviruses (crimean-congo hemorrhagic fever [cchf] and rift valley fever [rvf]). [51] [52] [53] the common denominator in these infections is the increased vascular permeability in the microcirculation leading to hemorrhagic diathesis and systemic manifestations such as pulmonary edema and cerebral edema related to leaky capillaries. 54 these viruses usually have a very narrow geographic range determined by their natural reservoirs and vectors. humans are accidental hosts. these diseases have caught great public attention due to their high mortality. this, combined with their aerosol infectivity, has led to the use of biosafety level 4 laboratories in their study. clinical presentation is usually nonspecific and consists of fever and malaise, followed by signs of increased vascular permeability and circulatory compromise. 
vhf usually terminates in shock, generalized mucocutaneous hemorrhages, and multiorgan failure. differences exist among the clinical details and pathogenesis of the different viruses (see chapter 65 for an overview and the individual chapters for details). for example, vhf due to filoviruses usually have prominent hemorrhagic manifestations and disseminated intravascular coagulation (dic) as a terminal event. rvf virus leads to liver damage, dic, and hemorrhagic manifestations in approximately 1% of patients with severe disease. cchf also behaves like the filoviral infections with prominent hemorrhagic manifestations. lassa fever has few neurologic or hemorrhagic manifestations. the south american arenaviral hemorrhagic fevers usually have hemorrhagic and neurologic components. toxins in the context of bt agents are substances of biologic origin that are capable of producing human illness. toxins are usually proteins synthesized by living bacteria, fungi, or plants. toxins are generally less dangerous than infectious agents. the most potent biological toxin is that from clostridium botulinum and it is 10-fold or more less lethal than anthrax on a weight basis. other toxins such as ricin are more than a 1000-fold less toxic than botulinum toxin and sarin is 30-fold less toxic than ricin. there are seven similar toxins produced by seven different serotypes of c. botulinum (a to g), all leading to the same clinical manifestations and with the same lethality. the toxins have a molecular weight of approximately 150 kda and block neurotransmission at the presynaptic level in cholinergic neurons including the neuromuscular junction, leading to progressive palsies of cranial nerves and skeletal muscle. botulinal toxins are among the most lethal substances known to mankind with ld 50 of 0.001 î¼g/g of body weight when administered parenterally. 25, 55, 56 the aerosol route decreases its lethality 80 to 100 times. both aerosol attacks and contamination of food supplies are potential bt scenarios. clinical manifestations consist of progressive bulbar and skeletal paralysis in the absence of fever, including diplopia, dysphagia, blurred vision, ptosis, dysarthria, dysphonia, mydriasis, dry mucosae, and descending paralysis. 25, 56 the cause of death in lethal cases is respiratory insufficiency due to paralysis of respiratory muscles. onset of symptoms is variable and depends on the inoculum, ranging from 24 hours to several days after exposure. most cases of naturally occurring intoxication are related to consumption of improperly sterilized canned food or ingestion of preserved fish. rare cases of inhalational botulism were documented in germany in the early 1960s due to accidental laboratory exposure. the rapid absorption through the respiratory tract may offer a different pathogenesis and it is not known if antitoxin is useful in therapy, although animal models show efficacy in prophylaxis. all the agents in category a are generally recognized as serious threats for causing extensive casualties. categories b and c are much more heterogeneous. they were considered to provide significant threat potential but there are continuing reassessments. these conditions are caused by the genus alphavirus, family togaviridae (eastern, western, and venezuelan equine encephalitis [vee] viruses; see chapter 74). natural infections are usually transmitted by mosquitoes, but aerosol transmission is the notorious cause of numerous laboratory infections and is the basis of its historic weaponization. 
52, 57 most of these viruses cause systemic illness characterized by fever, myalgias, and prostration. clinically apparent involvement of the central nervous system is present in some cases and varies among the different viruses. eastern equine encephalitis (eee) is by far the most virulent, leading to case-fatality rates of 50% to 75%, and survivors usually have severe neurologic sequelae. 58, 59 vee, in contrast, leads to cns manifestations in no more than 4% of cases and almost all vee infections are symptomatic even in the absence of cns involvement. [60] [61] [62] rickettsia prowazekii (epidemic typhus) and r. rickettsii (rocky mountain spotted fever) typhus (see chapter 51) is another disease that has played a historic role in human populations. [63] [64] [65] [66] millions of people perished in world war i and world war ii due to epidemic, louse-borne typhus. large outbreaks of the disease still occur in tropical regions around the world in areas stricken by war, famine, and poverty. rocky mountain spotted fever (rmsf), on the other hand, is transmitted by tick bites and occurs endemically in south and central america as well as north america. rickettsiae target the microvascular endothelium leading to leaky capillaries systemically. 67 the main causes of morbidity and mortality are noncardiogenic pulmonary edema and cerebral edema leading to diffuse alveolar damage and meningoencephalitis. clinical manifestations are nonspecific and include fever, malaise, headache, myalgias/arthralgias, cough, nausea, vomiting, confusion, stupor, and coma in severe cases. skin rash ranges from maculopapular to petechial, depending on the severity, and is observed in around 90% of patients with rmsf and 2% to 100% of cases of epidemic typhus, depending on the darkness of cutaneous pigmentation. rickettsiae are remarkably underestimated biothreats as they are highly infectious by low-dose aerosol exposure, possess a stable extracellular form, and are resistant to most empirically administered antibiotics, including î²-lactams, aminoglycosides, and macrolides, and are exacerbated by sulfonamides. case-fatality rates can be as high as 40% to 50% without antibiotic therapy and 3% to 5% with adequate antibiotic coverage. lethal cases are usually due to delayed diagnosis. 64, 65, 68 these rickettsiae are highly infectious by aerosol and are potent bt agents. they are often discounted because of their susceptibility to tetracycline and chloramphenicol. however, the severity of the illness, the exhaustion of antibiotics in the face of a mass attack, and the existence of antibiotic-resistant organisms suggest they are still formidable players. this gram-negative, obligately intracellular bacterium has a high degree of infectivity (one organism is capable of causing infection by inhalation) and low lethality. [69] [70] [71] [72] the distribution of q fever is worldwide and results from exposure to animals such as sheep, cattle, goats, cats, rabbits, and others. c. burnetii has spore-like characteristics that can withstand harsh environmental conditions and be transported by wind to other places. in natural infections, 60% of cases are asymptomatic and are diagnosed by seroconversion. in symptomatic cases, the presentation is nonspecific and includes malaise, fever, myalgias, cough, chills, headaches, anorexia, weight loss, and in some cases pleuritic chest pain. hepatomegaly and splenomegaly are sometimes observed, although not frequently. 
transmission occurs by exposure to infected animal products (meat, milk). less common routes of infection are inhalational and cutaneous. the clinical presentation of brucellosis is highly variable, even after inhalational exposure. the clinical spectrum ranges from asymptomatic seroconversion to severe acute systemic disease. intermediate forms include undulant fever or chronic disease, characterized by presence of brucella in virtually any organ. acute systemic disease is highly incapacitating with high fever, headache, nausea, vomiting, chills, severe sweating, and, in very severe cases, delirium, coma, and death. undulant fever is characterized by relapses of fever, weakness, generalized aching, and headache. chronic infections may have manifestations related to several organ systems such as the gastrointestinal and genitourinary tracts, cns, joints, and bones. [73] [74] [75] developing countries with insufficient water treatment and food security are more vulnerable to enteric bt attack. these agents include shigella dysenteriae, salmonella spp., enterohemorrhagic e. coli, vibrio cholerae, and cryptosporidium parvum. shigella and salmonella have in fact already been used as agents of biorevenge or biopolitics in small-scale attacks: one (shigella) in an office setting by a disgruntled employee and one in oregon by a religious sect that led to nearly 1000 cases of salmonella-related gastroenteritis. 11, 76 these agents are indeed ideal for small-scale attacks since large-scale attacks would require contamination of large water supplies which, because of enormous dilution factors and susceptibility of all these agents (except for c. parvum) to standard chlorinating procedures, would decrease the number of bacteria to below that required to infect large numbers of people. 69 occasional outbreaks of nontyphoidal salmonella and shigella infections occur in the united states. shigella is a highly infectious organism that requires very low numbers (10 2 -10 3 organisms) to provoke clinical disease. the illness caused by shigella and enterohemorrhagic e. coli is explosive and starts with fever, vomiting, severe abdominal cramping, bloody diarrhea, and systemic manifestations such as hypotension, and circulatory collapse if not treated rapidly. both microorganisms produce an exotoxin responsible for most of the systemic manifestations associated. a distinct complication, hemolytic uremic syndrome, occurs in a small percentage of cases, being more common in children younger than 10 years of age, leading to renal failure and hemolysis. salmonella is less infectious and less explosive than shigella, and leads to fever, vomiting, diarrhea, abdominal cramping, and in some cases to typhoidal manifestations. imported cases of v. cholerae have been diagnosed in the united states in the past. however, the disease occurs in southern asia and latin america as large outbreaks. the clinical illness is characterized by explosive watery diarrhea that leads to rapid dehydration and circulatory collapse. c. parvum infections are characterized by watery diarrhea and abdominal cramping for 2 to 3 weeks. the disease is self-limited except in patients with acquired immunodeficiency syndrome (aids) or other conditions of compromise, in whom illness can last for months or years if immune function is not restored. c. parvum is resistant to standard chlorine concentrations in water supplies. 
77 the largest outbreak in this country occurred in milwaukee in the early 1990s and was responsible for thousands of cases and increased mortality among those with aids. 69, 78, 79 this section addresses other toxins considered of potential bt use, such as staphylococcal enterotoxin b (seb) and ricin toxin (derived from castor beans, which in turn are the fruit of the ricinus communis plant). the ricin toxin is composed of two glycoprotein chains with a combined mass of approximately 66 kda. 80 the toxin inhibits protein synthesis by blocking elongation factor 2 (ef2) at the ribosomal level. ricin toxin is not a weapon of mass destruction since its lethal dose in humans is much higher than previously believed. however, the use of the toxin in small bt attacks is possible in the tropics because of its ready availability and relatively easy extraction from the beans. the clinical presentation depends on the route of administration, as does the ld50. in cases where large amounts of the toxin are ingested, the manifestations include nausea, vomiting, severe abdominal cramping, rectal hemorrhage, and diarrhea. as the clinical course progresses, anuria, mydriasis, severe headaches, and shock supervene, leading to the patient's demise in 2 to 3 days. clinical manifestations usually appear within 10 hours after ingestion of the toxin. inhalational exposure leads to prominent pulmonary manifestations 8 to 28 hours after exposure, with fever, dyspnea, progressive cough, cyanosis, and death. histologically, there is widespread necrosis of pulmonary parenchyma and pulmonary edema. a single case of parenteral intoxication was documented: a defector from bulgaria was injected with a pellet containing ricin from a weapon disguised in an umbrella, resulting in local necrosis, regional lymphadenopathy, gastrointestinal hemorrhage, liver necrosis, nephritis, and dic. 81 staphylococcus aureus enterotoxin b (seb) is a 28-kda, heat-stable exotoxin produced by certain strains of s. aureus and is responsible for food poisoning after ingestion of the preformed exotoxin in improperly handled food. in bt scenarios, exposure can occur either by inhalation or ingestion, leading to seb food poisoning or seb respiratory syndrome. the toxin is highly incapacitating but not very lethal: the dose that causes symptoms in half of exposed persons and the ld50 differ by 5 log scales for inhalational exposure. 82 thus, it is thought of as an incapacitating agent. incubation time after ingestion is short (4-12 hours), followed by explosive vomiting that persists for several hours. weaponization of the toxin as an aerosol is possible due to its high stability. manifestations after inhalation of seb are related to the respiratory system and consist of fever, cough, chills, myalgias, chest pain, and pulmonary insufficiency due to alveolar edema. general symptoms and signs are universal and consist of multiorgan failure secondary to a cytokine storm. 25 these toxins are superantigens due to their ability to bind to major histocompatibility complex (mhc) class ii molecules on large numbers of lymphocytes and macrophages, leading to hyperactivation of the immune system and massive cytokine release, including interferon-gamma (ifn-γ), tumor necrosis factor-alpha (tnf-α), interleukin 6 (il-6), and other mediators such as leukotrienes and histamine. 82 the role of the clinical laboratory in the diagnosis of possible cases related to a bt attack is of utmost importance.
83, 84 on the one hand, standard clinical microbiology laboratories will be receiving specimens for diagnostic purposes, and communication with clinicians regarding their suspicions is critical. certain isolates in the laboratory are not pursued further (bacillus spp. is a classic example) unless specifically requested, due to the frequent isolation of contaminants with similar characteristics. in addition, handling of certain specimens will require added biosafety level requirements due to their infectivity (table 119-2). certain samples will have to be shipped to highly specialized laboratories for initial or further work-up. environmental testing is challenging due to the complexity of the samples to be analyzed. 85, 86 this type of testing takes place in highly specialized laboratories and is not undertaken by the standard clinical microbiology laboratory. the bacterial diseases caused by the bt agents outlined in this chapter, with the exception of c. burnetii and rickettsia spp., can be diagnosed by standard isolation techniques in clinical microbiology laboratories. isolation of rickettsiae and the bt viruses requires specialized laboratories with bsl-3 or bsl-4 biocontainment. 87 serological assays are available for detection of antibodies against all bt agents. however, for many organisms serological assays require the presence of rising antibody titers, and therefore the serologic diagnosis is usually retrospective in nature. for some viral diseases, a reliable diagnosis can be established based on elevation of immunoglobulin m (igm) titers in the acute phase of the disease. with the advent of molecular techniques, rapid and sensitive diagnostic tests are becoming available for bt agents during the acute phase of the disease. [88] [89] [90] this is of utmost importance in a bt event since identification of the first cases would be critical for a rapid and effective public health response. in addition, treatment and prophylactic measures can also be initiated as quickly as possible. molecular diagnostic techniques can be applied to potential bt agents in an additional setting: as part of the epidemiological and forensic investigations that a bt attack would immediately trigger. postmortem diagnosis is also possible by analysis of frozen or paraffin-embedded tissues by immunohistology or nucleic acid-based amplification techniques. rapid diagnosis of the initial case (cases) in a bt event requires a high degree of clinical suspicion from the physicians having contact with such patients in the emergency room or outpatient setting. the clinical laboratories would then play a critical role in detecting the suspected agent and/or referring the appropriate specimens to higher level laboratories for specialized testing (table 119-3). 83, 85, 91 several of the agents discussed in this chapter are zoonotic diseases. therefore, diagnosis of certain zoonotic diseases in animals may be important in identifying some bt attacks. in such situations, animals could be seen as either direct victims of the attack or as sentinel events in a human outbreak. there are currently efforts to establish a network of laboratories dedicated to diagnosis of veterinary agents. 85
table 119-2. biosafety levels and representative agents:
bsl-1: suitable for work involving well-characterized agents not known to cause disease in healthy adult humans and of minimal potential hazard to laboratory personnel and the environment. representative agents: bacillus subtilis, naegleria gruberi, canine hepatitis virus.
bsl-2: suitable for work involving agents of moderate potential hazard to personnel and the environment. laboratory personnel have specific training in handling pathogenic agents and are directed by competent scientists; access to the laboratory is limited when work is being conducted; extreme precautions are taken with contaminated sharp items; and certain procedures in which infectious aerosols or splashes may be created are conducted in biological safety cabinets or other physical containment equipment. representative agents: measles virus, salmonella spp., toxoplasma spp., hepatitis b virus.
bsl-3: suitable for work with infectious agents which may cause serious or potentially lethal disease as a result of exposure by the inhalation route. in addition to the requirements described for work in a bsl-2 environment, all procedures are conducted within biological safety cabinets or other physical containment devices, and by personnel wearing appropriate personal protective clothing and equipment. the laboratory should be located in a separate building or an isolated zone within a building. laboratories are equipped with double-door entry, directional inward airflow, and single-pass air. representative agents: coxiella burnetii, rickettsia spp., m. tuberculosis, alphaviruses.
bsl-4: required for work with dangerous and exotic agents that pose a high individual risk of aerosol-transmitted laboratory infections and life-threatening disease. members of the laboratory staff have specific and thorough training in handling extremely hazardous infectious agents. they are supervised by competent scientists who are trained and experienced in working with these agents. access to the laboratory is strictly controlled by the laboratory director. the facility is either in a separate building or in a controlled area within a building, which is completely isolated from all other areas of the building. all activities are confined to class iii biological safety cabinets, or class ii biological safety cabinets used with one-piece positive pressure personnel suits ventilated by a life support system. the biosafety level 4 laboratory has special engineering and design features to prevent microorganisms from being disseminated into the environment. representative agents: filoviruses, arenaviruses.
the diagnosis of inhalational anthrax is based on isolation and identification of b. anthracis from a clinical specimen collected from an ill patient. in cases of inhalational anthrax, samples of sputum, blood, or cerebrospinal fluid (csf) may yield growth of the agent. demonstration of b. anthracis from nasal swabs has more epidemiological and prophylactic implications than clinical importance. standard diagnostic techniques are based on visualization and isolation in the clinical microbiology laboratory and serological demonstration of antibodies against b. anthracis. [92] [93] [94] [95] [96] visualization of b. anthracis from clinical specimens (blood cultures, csf, and cutaneous lesions) by gram stains is not difficult. b. anthracis appears as large gram-positive, spore-forming rods with a bamboo appearance. isolation is achieved by inoculating standard sheep blood agar plates, and colonies appear as small, gray-white, nonhemolytic colonies. a selective medium (polymyxin-lysozyme-edta-thallous acetate agar) is available mostly for environmental samples and inhibits the growth of other bacillus spp., such as b. cereus. growth is rapid (24-48 hours).
93 confirmatory tests include γ-phage lysis, detection of specific cell wall and capsular antigens, and polymerase chain reaction (pcr) amplification of dna followed by sequencing. 90 serological tests available for clinical diagnosis are based on detection of antibodies directed against protective antigen (pa). cross-reactive antibodies decrease the specificity of this test. assays based on toxin detection are available in specialized centers and are based on capture of anthrax toxins by using antibodies. antibody-coated immunomagnetic beads are then analyzed by electrochemiluminescence technology. the analytical sensitivity of this technique for detection of anthrax toxin is at the picogram to femtogram level (10^-12 to 10^-15 g). 97, 98 immunoliposomal technology combined with real-time pcr (for a dna reporter sequence) is also in the early stages of development for several toxins (ricin, cholera, and botulinum) and appears promising, with analytical sensitivity in the attomolar to zeptomolar (10^-18 to 10^-21 mol/l) range for cholera toxin. 99 the specificity of this assay is given by the toxin-capturing antibody. nucleic acid amplification techniques (pcr) are also available in both standard and real-time formats. extraction of dna from spores is challenging and requires modification of dna extraction protocols in order to facilitate release of dna from spores or induction of germination prior to dna extraction. 90 real-time pcr tests have been developed by applied biosystems (taqman 5' nuclease assay) and roche applied science (lightcycler). [100] [101] [102] the analytical sensitivity of both techniques is extremely high, and testing times have been decreased to 1 to 2 hours. portable pcr instruments are being developed for rapid deployment to the field. 103 examples include the rugged advanced pathogen identification device (rapid), 100 the smartcycler (cepheid, ca), 101 and the miniature analytical thermal cycler instrument (matci) developed by the department of energy's lawrence livermore national laboratory. 104 this instrument later evolved into the advanced nucleic acid analyzer (anaa) and the handheld advanced nucleic acid analyzer (hanaa). 105 molecular subtyping of b. anthracis is also possible by using the 16s ribosomal rna (rrna) subunit gene, multiple-locus variable number tandem repeat analysis of eight genetic loci, and amplified fragment length polymorphism (aflp) techniques. 106, 107 environmental testing also plays a role in the investigation of a bt event. in this setting, detection of b. anthracis relies heavily on molecular techniques for confirmation of potentially contaminated samples (e.g., surfaces, air). 108, 109 postmortem diagnosis is also possible by using gram stains on paraffin-based tissues or immunohistochemical procedures using polyclonal or monoclonal antibodies against various anthrax antigens. diagnosis of y. pestis is based on demonstration of the bacillus in blood or sputa from patients. standard diagnostic techniques in the laboratory include visualization of gram-negative coccobacilli, which by giemsa, wright, or wayson stains reveal a "safety pin" appearance. isolation is performed on blood and macconkey agar plates, on which colonies appear as non-lactose fermenters. the organisms are identified preliminarily by direct immunofluorescent assay with y. pestis-specific antibodies, with final identification based on biochemical profiles in clinical microbiology laboratories.
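to put the attomolar-to-zeptomolar detection limits quoted above into perspective, the short python sketch below converts a molar concentration into the approximate number of toxin molecules present in a given assay volume; the 100 µl sample volume is an illustrative assumption and not a figure taken from the chapter.

```python
# rough illustration of what attomolar/zeptomolar sensitivity means in molecule counts;
# the 100 microliter assay volume is an assumed, illustrative value.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_in_sample(concentration_mol_per_l, volume_l):
    """approximate number of molecules at a given molar concentration in a given volume."""
    return concentration_mol_per_l * volume_l * AVOGADRO

sample_volume_l = 100e-6  # 100 microliters, assumed
for label, conc in [("attomolar (1e-18 mol/l)", 1e-18), ("zeptomolar (1e-21 mol/l)", 1e-21)]:
    n = molecules_in_sample(conc, sample_volume_l)
    print(f"{label}: about {n:.2f} molecules in a 100 microliter sample")
```

run as written, the attomolar case works out to roughly 60 molecules per 100 µl and the zeptomolar case to well under one molecule per 100 µl, which is why such limits are effectively single-molecule detection.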
110 molecular diagnostic techniques based on real-time pcr have become available in recent years and involve detection of y. pestis genes such as plasminogen activator (pla), genes coding for the yop proteins and the capsular f1 antigen, and the 23s rrna gene, which allows distinction from other yersinia spp. [111] [112] [113] assays have been developed to detect resistance to particular antibiotics. the importance of these diagnostic techniques in a disease such as plague is evident. the log-normal epidemic curve with a narrow dispersion of the incubation periods (see fig. 119 -1) and the short interval for successful antibiotic therapy mandate recognition of the earliest cases if the bulk of the exposed are to be saved. molecular subtyping of y. pestis is also possible by analyzing polymorphic sites in order to identify the origin of strains in the event of a bt attack. diagnosis is made in the clinical laboratory by demonstration of the microorganisms in secretions (sputa, exudates) by direct immunofluorescence or immunohistochemically in biopsy specimens. isolation in the clinical laboratory may be achieved by using regular blood agar plates, posing a risk to laboratory personnel not employing bsl-3 facilities and procedures. the procedure for isolation of f. tularensis in the laboratory is very similar to that described for y. pestis. final identification in the clinical laboratory is based on the biochemical profile. 114 molecular diagnostic techniques are based on pcr detection of f. tularensis by using primers for different genes such as outer membrane protein (fop) or tul4 and real-time detection systems. 90, 115, 116 smallpox virus diagnosis of variola major is suggested by its clinical presentation and the visualization of guarnieri bodies in skin biopsy samples. preliminary confirmation requires visualization of the typical brick-shaped orthopox virus by electron microscopy, followed by isolation from clinical specimens and accurate molecular identification to differentiate it from the morphologically (and sometimes clinically) similar monkeypox virus. confirmation of this diagnosis is performed only under bsl-4 containment facilities at the cdc. 47 molecular techniques are based on pcr amplification using real-time or standard technology followed by sequencing or use of restriction fragment length polymorphism (rflp) for accurate identification. 117 technologies so far developed for smallpox molecular testing include taqmanand lightcycler-based assays with primers designed for the hemagglutinin gene and a-type inclusion body proteins. [118] [119] [120] [121] sequencing of the smallpox genome has been completed for some asian strains of variola major and one of variola minor. other strains are being sequenced and will provide more information for probe design and treatment targets. 90 diagnosis of these diseases is performed in highly specialized centers in the united states because special isolation procedures and highly contained laboratories are required. initial diagnosis of these diseases is suspected on clinical and epidemiologic grounds. laboratory diagnosis involves isolation, electron microscopy, and serological assays. immunohistochemical detection of hemorrhagic fever viral antigens in paraffin-embedded tissues is also performed in highly specialized centers such as the cdc. [122] [123] [124] [125] [126] molecular diagnostic techniques have also improved dramatically during the last few years. 
serum or blood is the most common specimen used for reverse transcriptase-pcr amplification of viral nucleic acids. both standard and realtime techniques are available. design of primers for this heterogenous group of rna viruses that are highly variable is one of the limitations. 90 therefore, multiplex pcr techniques are required to detect as many targets as possible in a single assay. 127, 128 real-time pcr based on detection of the target sequence using fluorescent probes therefore limits the number of targets that can be identified because of the limited wavelength range for fluorescent applications (usually only four different wavelengths can be detected at the same time). [128] [129] [130] the use of microchips containing several thousands of oligonucleotides from all viruses known to be pathogenic to humans is an encouraging development. in fact, the rapid identification and characterization of the novel human coronavirus responsible for the sars outbreak in 2003 is an excellent example of the power of hybridization-based microchips. the creation of an automated and easily deployable instrument capable of detecting all possible potential bt agents based on highly sensitive techniques such as electrochemoluminescence (ecl) or pcr would be ideal. the nonspecific nature of presenting symptoms is a major problem with several of the agents. the rapid recruitment of cases into the infected cohort requires that an early diagnosis of the epidemic be established, particularly for organisms such as y. pestis in which there is only a short window for successful treatment. in fact, such projects are already in the making. an example of this system is the automated biological agent testing system (abats) that combines the techniques mentioned previously. 86 the system is the result of integrating several commercially available technologies into a single automated and robotized instrument for detection of viruses, bacteria, and parasites considered potential bt agents. the technologies incorporated into this "super system" include automated specimen preparation (both nucleic acid-based and protein-based such as immunodiagnostics), thermocyclers for pcr detection, chemiluminescent detectors for immunobased assays, sequencers, and software programs for sequence analysis. rickettsia prowazekii (epidemic typhus) and r. rickettsii (rocky mountain spotted fever) diagnosis of these infections in the clinical microbiology laboratory currently rests on the identification of antibodies in serum during the acute and convalescent period in order to demonstrate seroconversion or rising titers. the diagnosis is therefore retrospective. 131, 132 detection of rickettsial dna from blood or skin samples during the acute phase of the disease is possible via pcr assays. however, these assays are not standardized and are not commercially available. primers have been designed for amplification of several rickettsial genes including citrate synthase, 17-kda protein gene, ompa, and ompb. [132] [133] [134] [135] [136] the clinical sensitivity and specificity of standard or real-time pcr techniques have not been determined. most likely real-time pcr is superior due to the higher analytical sensitivity of this technique and low risk of sample contamination with dna amplicons when compared to standard pcr amplification methods. isolation of rickettsiae from clinical specimens is performed in very few specialized laboratories in the nation and requires the use of cell monolayers, embryonated eggs, or animals. 
detection of rickettsial antigens or whole bacteria in blood specimens is theoretically possible by using ultrasensitive methods, but such assays are currently only in the early phases of development. immunohistochemical detection of rickettsiae in paraffin-embedded tissue has also been applied to tissue samples obtained pre-or postmortem. [137] [138] [139] salmonella spp., shigella dysenteriae, vibrio cholerae, and cryptosporidium parvum (acute enteric syndromes) diagnosis of salmonella, shigella, and vibrio infections is based on isolation of the offending agent on standard microbiological media in the clinical laboratory, followed by specialized confirmatory tests to identify the specific serotype involved. 140 diagnosis of c. parvum is based on visual identification of the protozoan in fecal specimens by using modified trichrome stain. 140 the diagnosis rests on serological demonstration of antibodies by immunofluorescent assay (ifa) or enzyme-linked immunosorbent assay (elisa). antibodies remain elevated for years after the acute infection, and therefore a fourfold rise in titers is the gold standard for diagnosis. pcr detection of c. burnetii dna from blood or tissues also yields a diagnosis of q fever. 88 brucella spp. diagnosis of brucellosis requires a high degree of clinical suspicion due to the protean manifestations related to this disease. laboratory diagnosis is based on isolation of the microorganism from blood, bone marrow, or other tissue samples. isolation is not easy due to the slow-growth of brucella spp. colonies usually appear after 4 to 6 weeks, and therefore communication with the clinical laboratory is important so that appropriate media will be used and the cultures will be held long enough for colonies to be detected. 90 serologic assays for demonstration of rising antibody titers are available, although the diagnosis is retrospective. pcr detection is promising, but it is not standardized. [141] [142] [143] alphaviruses (encephalitic syndromes: venezuelan, eastern, and western equine encephalomyelitis) diagnosis is based on isolation of the virus from serum or brain (postmortem specimens) in a bsl-3 environment. pcr detection of viral sequences is also possible. serologic diagnosis is based on demonstration of antibodies in acute and convalescent sera. [144] [145] [146] botulinum toxins the diagnosis of botulism relies heavily on clinical parameters. an afebrile patient with signs and symptoms of progressive bulbar palsies and descending neuromuscular paralysis is highly suspected of having botulism. demonstration of the toxin in cases of botulism due to ingestion of contaminated food is made from gastric samples, feces, blood, and urine. however, detection of minute amounts of toxin (and contacts with samples from cases may prove fatal due to the toxin' s potency) would be difficult by current immunoassay systems such as elisa platforms. 146 detection techniques based on electrochemiluminescence and immunoliposomes are currently under development. 99, 147 pcr assays can be performed in cases of ingestion of contaminated food in order to detect the genetic material present in c. botulinum. if weaponized toxin is used in the absence of c. botulinum organisms, detection of the genetic material would be difficult and would rely on the presence of residual dna after toxin purification procedures. if inhalational botulism is suspected, respiratory secretions and nasal swabs should be obtained as early as possible. 
postmortem samples of liver and spleen can be used for detection of botulinum toxins. diagnosis is also based on clinical presentation and requires a high index of suspicion due to the nonspecific nature of the signs and symptoms. laboratory diagnosis rests on detection of the toxin in body fluids by immunoassays (capture elisa and igg elisa). 146 a new generation of tests using more sensitive detection methods is under development (see preceding discussion). diagnosis is also suspected on clinical grounds and confirmed by demonstration of the toxin in nasal swabs early in the disease process, feces, and, in fatal cases, from kidney and lung tissue. serum can be analyzed by elisa, and pcr can be performed for detection of toxin genes of s. aureus if present. 146 the distribution and incubation periods of infectious diseases syndromic surveillance and bioterrorism-related epidemics viral hemorrhagic fevers anthrax bioterrorism: lessons learned and future directions investigation of bioterrorism-related anthrax history of the development and use of biological weapons biological warfare. a historical perspective [see comment laboratory aspects of biowarfare. philadelphia, wb saunders mass casualty management of a large-scale bioterrorist event: an epidemiological approach that shapes triage decisions a large community outbreak of salmonellosis caused by intentional contamination of restaurant salad bars a systems overview of the electronic surveillance system for the early notification of community-based epidemics (essence ii) epidemiologic response to anthrax outbreaks: field investigations bioterrorism-related inhalational anthrax: the first 10 cases reported in the united states first case of bioterrorism-related inhalational anthrax in the united states quantitative pathology of inhalational anthrax i: quantitative microscopic findings the sverdlovsk anthrax outbreak of 1979 death at sverdlovsk: what have we learned? anthrax as a biological weapon: medical and public health management. working group on civilian biodefense updated recommendations for management pathology of experimental inhalation anthrax in the rhesus monkey the pathology of experimental anthrax in rabbits exposed by inhalation and subcutaneous inoculation pathology of inhalation anthrax in cynomolgus monkeys (macaca fascicularis) fatal inhalational anthrax in a 94-year-old connecticut woman [see comment clinical recognition and management of patients exposed to biological warfare agents clinical presentation of inhalational anthrax following bioterrorism exposure: report of 2 surviving patients [see comment recognition and treatment of anthrax pathology of inhalational anthrax in 42 cases from the sverdlovsk outbreak of 1979 pathology and pathogenesis of bioterrorism-related inhalational anthrax yersinia species, including plague yersinia infections: centennial of the discovery of the plague bacillus the bubonic plague factories of death clinical recognition and management of patients exposed to biological warfare agents plague as a biological weapon: medical and public health management. 
working group on civilian biodefense [see comment epidemiologic determinants for modeling pneumonic plague outbreaks manual of clinical microbiology clinicopathologic aspects of bacterial agents tularemia as a biological weapon: medical and public health management [see comment francisella tularensis tularemia: a 30-year experience with 88 cases the world' s last endemic case of smallpox: surveillance and containment measures virological evidence for the success of the smallpox eradication programme eradication of infectious diseases: its concept, then and now smallpox and its eradication. geneva, world health organization smallpox: an attack scenario smallpox as a biological weapon: medical and public health management. working group on civilian biodefense monkeypox transmission and pathogenesis in prairie dogs potential antiviral therapeutics for smallpox, monkeypox, and other orthopox virus infections viral hemorrhagic fevers including hantavirus pulmonary syndrome in the americas viral agents as biological weapons and agents of bioterrorism hemorrhagic fever viruses as biological weapons: medical and public health management [see comment role of the endothelium in viral hemorrhagic fevers the action of botulinum toxin at the neuromuscular junction botulinum toxins venezuelan equine encephalitis an outbreak of eastern equine encephalomyelitis in jamaica, west indies. i: description of human cases eastern and western equine encephalitis an outbreak of venezuelan equine encephalomeylitis in central america. evidence for exogenous source of a virulent virus subtype an epidemiologic study of venezuelan equine encephalomyelitis in costa rica venezuelan equine encephalitis zinsser h: rats, lice and history rickettsial infections. in lack ee rocky mountain spotted fever and other rickettsioses typhus and its control in russia rickettsia conorii infection of c3h/hen mice. a model of endothelial-target rickettsiosis rickettsioses as paradigms of new or emerging infectious diseases threats in bioterrorism ii: cdc category b and c agents q fever 1985-1998. 
clinical and epidemiologic features of 1,383 infections epidemiologic features and clinical presentation of acute q fever in hospitalized patients: 323 french cases human brucellosis an overview of human brucellosis medical aspects of chemical and biological warfare an outbreak of shigella dysenteriae type 2 among laboratory workers due to intentional food contamination viability of cryptosporidium parvum oocysts in natural waters surveillance for waterborne-disease outbreaks-united states cryptosporidiosis in children during a massive waterborne outbreak in milwaukee, wisconsin: clinical, laboratory and epidemiologic findings medical aspects of chemical and biological warfare georgi markor--death in a pellet staphylococcal enterotoxin b and related pyrogenic toxins bioterrorism: implications for the clinical microbiologist the role of the clinical laboratory in managing chemical or biological terrorism diagnostic analyses of biological agent-caused syndromes: laboratory and technical assistance automated biological agent testing systems department of health and human services applying molecular biological techniques to detecting biological agents current laboratory methods for biological threat agent identification molecular diagnostic techniques for use in response to bioterrorism a national laboratory network for bioterrorism: evolution from a prototype network of laboratories performing routine surveillance definitive identification of bacillus anthracis-a review pc bacillus and other aerobic endospore-forming bacteria mabs to bacillus anthracis capsular antigen for immunoprotection in anthrax and detection of antigenemia specific, sensitive, and quantitative enzyme-linked immunosorbent assay for human immunoglobulin g antibodies to anthrax toxin protective antigen comparison of a multiplexed fluorescent covalent microsphere immunoassay and an enzyme-linked immunosorbent assay for measurement of human immunoglobulin g antibodies to anthrax toxins an enzymatic electrochemiluminescence assay for the lethal factor of anthrax comparative studies of magnetic particle-based solid phase fluorogenic and electrochemiluminescent immunoassay high-sensitivity detection of biological toxins a field investigation of bacillus anthracis contamination of u.s. 
department of agriculture and other washington, dc, buildings during the anthrax attack of sensitive and rapid identification of biological threat agents detection of bacillus anthracis dna by lightcycler pcr a handheld real time thermal cycler for bacterial pathogen detection real-time microchip pcr for detecting single-base differences in viral and human dna rapid pathogen detection using a microchip pcr array instrument genetic sleuths rush to identify anthrax strains in mail attacks sequencing of 16s rrna gene: a rapid tool for identification of bacillus anthracis environmental sampling for spores of bacillus anthracis bacillus anthracis contamination and inhalational anthrax in a mail processing and distribution center manual of clinical microbiology development of rrnatargeted pcr and in situ hybridization with fluorescently labelled oligonucleotides for detection of yersinia species 5' nuclease pcr assay to detect yersinia pestis yersinia pestis-etiologic agent of plague francisella and brucella detection of francisella tularensis in infected mammals and vectors using a probe-based polymerase chain reaction detection of francisella tularensis within infected mouse tissues by using a hand-held pcr thermocycler pcr strategy for identification and differentiation of small pox and other orthopoxviruses real-time pcr system for detection of orthopoxviruses and simultaneous identification of smallpox virus gene for a-type inclusion body protein is useful for a polymerase chain reaction assay to differentiate orthopoxviruses the potential of 5â�² nuclease pcr for detecting a single-base polymorphism in orthopoxvirus detection of smallpox virus dna by lightcycler pcr comparative pathology of the diseases caused by hendra and nipah viruses a novel immunohistochemical assay for the detection of ebola virus in skin: implications for diagnosis, spread, and surveillance of ebola hemorrhagic fever. commission de lutte contre les epidemies a kikwit immunohistochemical and in situ localization of crimean-congo hemorrhagic fever (cchf) virus in human tissues and implications for cchf pathogenesis retrospective diagnosis of hantavirus pulmonary syndrome, 1978-1993: implications for emerging infectious diseases hantavirus pulmonary syndrome. pathogenesis of an emerging infectious disease molecular diagnostics of viral hemorrhagic fevers rapid detection and quantification of rna of ebola and marburg viruses, lassa virus, crimean-congo hemorrhagic fever virus, rift valley fever virus, dengue virus, and yellow fever virus by real-time reverse transcription-pcr quantitative real-time pcr detection of rift valley fever virus and its application to evaluation of antiviral compounds development and evaluation of a fluorogenic 5' nuclease assay to detect and differentiate between ebola virus subtypes zaire and sudan rickettsioses as paradigms of new or emerging infectious diseases laboratory diagnosis of rickettsioses: current approaches to diagnosis of old and new rickettsial diseases citrate synthase gene comparison, a new tool for phylogenetic analysis, and its application for the rickettsiae diagnosis of mediterranean spotted fever by cultivation of rickettsia conorii from blood and skin samples using the centrifugation-shell vial technique and by detection of r. 
conorii in circulating endothelial cells: a 6-year follow-up differentiation of spotted fever group rickettsiae by sequencing and analysis of restriction fragment length polymorphism of pcr-amplified dna of the gene encoding the protein rompa differentiation among spotted fever group rickettsiae species by analysis of restriction fragment length polymorphism of pcr-amplified dna diagnostic tests for rocky mountain spotted fever and other rickettsial diseases immunohistochemical diagnosis of typhus rickettsioses using an anti-lipopolysaccharide monoclonal antibody monoclonal antibody-based immunohistochemical diagnosis of rickettsialpox: the macrophage is the principal target investigation of foodborne and waterborne disease outbreaks rapid laboratory confirmation of human brucellosis by pcr analysis of a target sequence on the 31-kilodalton brucella antigen dna the 18-kda cytoplasmic protein of brucella species-an antigen useful for diagnosis-is a lumazine synthase characterization of an 18-kilodalton brucella cytoplasmic protein which appears to be a serological marker of active infection of both human and bovine brucellosis genus-specific detection of alphaviruses by a semi-nested reverse transcription-polymerase chain reaction standardization of immunoglobulin m capture enzyme-linked immunosorbent assays for routine diagnosis of arboviral infections toxins as weapons of mass destruction: a comparison and contrast with biological warfare and chemical warfare agents sensitive detection of biotoxoids and bacterial spores using an immunomagnetic electrochemiluminescence sensor key: cord-006172-ndmf5ekp authors: akins, paul taylor; belko, john; uyeki, timothy m.; axelrod, yekaterina; lee, kenneth k.; silverthorn, james title: h1n1 encephalitis with malignant edema and review of neurologic complications from influenza date: 2010-09-02 journal: neurocrit care doi: 10.1007/s12028-010-9436-0 sha: doc_id: 6172 cord_uid: ndmf5ekp background: influenza virus infection of the respiratory tract is associated with a range of neurologic complications. the emergence of 2009 pandemic influenza a (h1n1) virus has been linked to neurological complications, including encephalopathy and encephalitis. methods: case report and literature review. results: we reviewed case management of a 20-year old hispanic male who developed febrile upper respiratory tract signs and symptoms followed by a confusional state. he had rapid neurologic decline and his clinical course was complicated by refractory seizures and malignant brain edema. he was managed with oseltamavir and peramavir, corticosteroids, intravenous gamma globulin treatment, anticonvulsants, intracranial pressure management with external ventricular drain placement, hyperosmolar therapy, sedation, and mechanical ventilation. reverse transcriptase polymerase chain reaction analysis of nasal secretions confirmed 2009 h1n1 virus infection; cerebrospinal fluid (csf) was negative for 2009 h1n1 viral rna. follow-up imaging demonstrated improvement in brain edema but restricted diffusion in the basal ganglia. we provide a review of the clinical spectrum of neurologic complications of seasonal influenza and 2009 h1n1, and current approaches towards managing these complications. conclusions: 2009 h1n1-associated acute encephalitis and encephalopathy appear to be variable in severity, including a subset of patients with a malignant clinical course complicated by high morbidity and mortality. 
since the h1n1 influenza virus has not been detected in the csf or brain tissue in patients with this diagnosis, the emerging view is that the host immune response plays a key role in pathogenesis. the current pandemic of 2009 influenza a (h1n1) (2009 h1n1) virus has presented challenges for clinicians worldwide. (the findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the centers for disease control and prevention.) neurologic complications of seasonal influenza are likely under-recognized by neurologists, and the frequency of acute or post-infectious neurologic complications with 2009 h1n1 virus infection is unknown. it is worth noting the historical relationship between h1n1 and neurology: following the 1918-1919 h1n1 pandemic, an increase was observed in encephalitis lethargica cases [1]. what have neurologists learned about complications of 2009 h1n1 virus infections worldwide? we present a case report of 2009 h1n1-associated encephalopathy and review neurologic complications associated with seasonal influenza and 2009 h1n1 virus infection. the kaiser permanente inpatient neurosurgery service maintains ongoing institutional review board approval for a prospective database registry for clinical research purposes. we identified a case of acute encephalopathy associated with 2009 h1n1 virus infection of the upper respiratory tract referred from an outside kaiser community hospital for management. we conducted a detailed review of the electronic medical records. we also conducted a literature review using pubmed. mesh search terms included influenza, encephalitis, encephalopathy, h1n1, acute necrotizing encephalopathy, and meningitis. a previously healthy 20-year-old male college student had 5 days of non-productive cough, rhinorrhea, myalgias, and fever but no headaches or neck stiffness. on the 6th illness day, he presented to the emergency department of a community hospital with lethargy and confusion. he was electively intubated for airway protection. his chest x-ray (cxr) was normal. routine admission laboratory tests, including hepatic transaminases, were within normal range. a non-contrast head computed tomography (ct) did not reveal any abnormalities (fig. 1, top row), and he underwent lumbar puncture. cerebrospinal fluid (csf) analysis showed 53 wbc/µl with 91% lymphocytes, 6 rbc/µl, protein 113 mg/dl, and glucose 59 mg/dl. he was diagnosed with meningoencephalitis and started on vancomycin, ceftriaxone, acyclovir, and oseltamivir (150 mg twice daily per nasogastric tube). on the morning of the third day of hospitalization, he experienced tonic-clonic seizures and remained comatose with extensor posturing afterwards. repeat head ct (fig. 1, bottom row) demonstrated diffuse brain edema and effaced basal cisterns. he received fosphenytoin, mannitol, and propofol. the treating physicians contacted the neuro-intensive care unit at kaiser sacramento for additional assistance. he was emergently transferred to the kaiser permanente sacramento neuro-intensive care facility (nicu). on arrival, his initial examination demonstrated a glasgow coma scale of 3 (e1v1m1). his repeat cxr did not demonstrate any infiltrates or signs of acute respiratory distress syndrome (ards). (fig. 1 caption, bottom row: the small arrow points to effacement of the basal cisterns (left) and the larger arrows to subcortical brain edema (left and right); this subcortical edema is confirmed on mr imaging, fig. 2.)
an external ventricular drain was placed by the neurosurgeon at the bedside. he reported that the csf pressure noted at the time of initial catheter placement was elevated. the first recorded intracranial pressure (icp) was 10 mm hg, and this reading was taken after the expected loss of csf during the procedure. on the second day of nicu hospitalization, his glasgow coma scale (gcs) score was 4 (e1v1m2) and average icp was 7 mm hg. throughout the remainder of the hospitalization, the recorded icp remained below 20 mm hg. initial icp was maintained with external ventricular drainage at 0 cm relative to the external auditory canal and a midazolam infusion (5 mg/h). electroencephalogram (eeg) monitoring demonstrated diffuse, severe slowing in the delta range and no electrographic seizures. on hospital day 3, mri of the brain was obtained (see fig. 2). he received 20 days of dual neuraminidase inhibitor treatment (oseltamivir 150 mg twice daily per nasogastric tube, peramivir 600 mg iv daily); intravenous gamma globulin (1 g/kg × 2 days); dexamethasone (10 mg iv load, 6 mg iv every 6 h with taper over 4 weeks); icp monitoring and management; ventilator support; and anticonvulsants (fosphenytoin, levetiracetam). his weekly glasgow coma scale scores showed delayed improvement: 3 (e1v1m1, admission), 5 (e2v1m2, week 1), 5 (e2v1m2, week 2), 5 (e2v1m2, week 3), 9 (e3v2m4, week 4). the midazolam infusion was discontinued on hospital day 4, after clinical observation and eeg confirmation that he was not having electrographic seizures. thereafter he received intermittent doses of lorazepam as needed for sedation while on the ventilator. over 3 weeks, neuroimaging demonstrated improvement in his brain edema with restoration of his basal cisterns, and the external ventricular drain was successfully weaned and removed. more rapid weaning of his external ventricular drain was not attempted due to severe neurologic impairments with gcs less than eight and the radiographic appearance of diffuse brain edema and effaced basal cisterns. his nicu course was complicated by ventilator-associated klebsiella pneumoniae pneumonia and spontaneous pneumomediastinum on day 6 of intensive care. chest ct demonstrated subcutaneous emphysema, mediastinal emphysema, bilateral lower lobe atelectasis, and no pulmonary interstitial emphysema or pneumothorax. he did not develop adult respiratory distress syndrome or suffer periods of hypoxemia. rt-pcr of an admission nasopharyngeal swab was positive for 2009 h1n1 virus at the california department of public health virology laboratory. rt-pcr analysis of csf samples was negative for influenza a and b viruses, herpes viruses types 1, 2, and 6, varicella, enterovirus, and epstein-barr virus. nasopharyngeal samples were negative for enterovirus and mycoplasma by pcr. bacterial and viral cultures of csf were negative. test results from clinical specimens (blood, endotracheal aspirate, serum, and csf) sent to the california encephalitis project did not reveal an alternative cause. follow-up mri brain imaging (fig. 2b, d) was repeated at 1 month. after 6 weeks, he transitioned to acute rehabilitation, and 1 month later returned home. because he had improved upper extremity use without recovery in his legs, the physiatry staff performed spine mr imaging and no specific cause was identified. at the time of this case report, the patient has returned home with his family. he is talking and interacting with his family normally. he has not returned to college. his gastrostomy tube has been removed.
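the serial neurologic scores above follow the standard glasgow coma scale arithmetic (total = eye + verbal + motor sub-scores), and icp management in this case targeted values below 20 mm hg. the following minimal python sketch, written only to illustrate that bookkeeping, recomputes the reported gcs totals and flags icp readings at or above the 20 mm hg threshold; the function names and the example data are our own illustrative choices, not part of the published case report.

```python
# minimal sketch of the gcs and icp bookkeeping described in the case report;
# function names and the example records are illustrative assumptions.

def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """glasgow coma scale total is the sum of the eye, verbal, and motor sub-scores."""
    return eye + verbal + motor

def icp_elevated(icp_mm_hg: float, threshold: float = 20.0) -> bool:
    """flag an intracranial pressure reading at or above the commonly used 20 mm hg threshold."""
    return icp_mm_hg >= threshold

# weekly exam components reported for this patient (admission through week 4)
weekly_exams = [
    ("admission", 1, 1, 1),
    ("week 1", 2, 1, 2),
    ("week 2", 2, 1, 2),
    ("week 3", 2, 1, 2),
    ("week 4", 3, 2, 4),
]

for label, e, v, m in weekly_exams:
    print(f"{label}: gcs e{e}v{v}m{m} = {gcs_total(e, v, m)}")

print("icp 7 mm hg elevated?", icp_elevated(7))    # false, as in this patient's recorded course
print("icp 22 mm hg elevated?", icp_elevated(22))  # true (hypothetical reading)
```

running the sketch reproduces the totals reported in the text (3, 5, 5, 5, 9), which is simply a check that the published sub-scores and sums are internally consistent.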
he has generalized rigidity without tremor or dyskinesia. he is ambulatory but requires a walker due to reduced endurance and leg weakness. fig. 2 magnetic resonance imaging was done at the time of patient transfer a, c to the neuro-intensive care center and at 1 month of treatment b, d with influenza-specific antiviral therapy, corticosteroids, and intravenous gamma globulin therapy. a coronal flair image shows diffuse brain edema with sulcal effacement and symmetric hyperintensities selectively affecting the white matter and sparing cortex and subcortical nuclei such as basal ganglia and thalami. b coronal flair image at 1 month shows resolution of sulcal effacement, marked reduction in white matter hyperintensity, and relative brain atrophy (20 year old patient). c diffusion-weighted imaging on admission showed some increased signal in the periventricular zones that were also bright on t2 and flair sequences consistent with t2 shine-through. d diffusionweighted imaging at 1 month revealed hyperintensity in the caudate and putamen with corresponding decreased signal in adc map and lack of hyperintensities on t2 and flair sequences (see fig 1b) we present a case of a patient with acute encephalitis associated with febrile upper respiratory tract illness due to 2009 h1n1 complicated by seizures and malignant cerebral edema. few adult cases of 2009 h1n1 influenzaassociated acute encephalitis or encephalopathy have been reported to date. descriptions of 2009 h1n1-associated neurologic complications are limited to case reports and small case series, and have been more commonly reported among young children. given the current influenza pandemic, we provide an overview of neurologic complications associated with seasonal influenza and h1n1 (fig. 3 ) and review clinical management and rationale. influenza virus infections can cause human respiratory disease and have been associated with a variety of central nervous system disorders [2] . influenza virus has been rarely detected in csf of patients that developed acute encephalitis/encephalopathy [3] [4] [5] . the systemic inflammatory response syndrome (sirs) to influenza virus infection of the upper respiratory tract is hypothesized to play a prominent role in the more severe stages leading to cytokine dysregulation (''cytokine storm'') in influenzaassociated encephalopathy or encephalitis (iae) patients [6] . elevated cytokines in serum and csf have been reported in patients with seasonal influenza-associated encephalopathy [4, [7] [8] [9] [10] . elevated csf to plasma ratios suggest activation of cytokine production within the cns may have occurred along with the respiratory tract and systemic cytokines [7, 11, 12] . microglia and astrocytes are capable of producing cytokines in the cns [13, 14] . it is known that influenza virus infects and replicates at the nasopharyngeal epithelium leading to extensive damage during infection. below the mucosa, the free nerve endings of the olfactory nerves may also become infected. as seen with herpes simplex viruses, some postulate that influenza virus could penetrate and replicate at the olfactory mucosa and the free nerve endings with resultant axonal transport of virions to the olfactory bulbs, to the olfactory tract, and finally to the brain [15] . there is some literature to support this mechanism when one looks at h5n1, or avian influenza, where mice inoculated intranasally with h5n1 developed cns lesions in the pons, medulla oblongata, and cerebellar nuclei. 
astrocytes and glial cells were positive for viral antigen but viral replication ceased before 7 days [16, 17]. further study is needed to elucidate the pathogenesis of cns disease complicating influenza a infection. neurologic symptoms associated with influenza can arise at different intervals after the initial influenza illness (fig. 3, table 1). when assessing patients clinically, it is important to determine if the patient has active or recent symptoms (within days) of influenza or if the neurologic symptoms have appeared in a subacute manner. we will first discuss neurologic complications in the setting of recent influenza virus infection and then proceed to complications that present in a delayed manner. the development of a confusional state in the setting of influenza illness symptoms and fever raises the possibility of influenza-associated encephalitis or encephalopathy. the degree of encephalopathy varies from a confusional state to obtundation. it is important to recognize that a small proportion of cases can rapidly deteriorate to coma and subsequent brain death due to diffuse, malignant cerebral edema. focal and generalized seizures often occur and can be present in either mild or severe cases. the presence of fever and altered mental state should prompt clinicians to pursue csf analysis unless neuroimaging or laboratory studies reveal a contraindication. influenza illness may include upper respiratory symptoms, pneumonia, or diarrhea (more commonly in young children with seasonal influenza). a thorough medical assessment to exclude other causes such as sepsis, metabolic or toxic disorders, structural cns diseases, and other cns infections is warranted. we define encephalitis by the presence of inflammation in the csf or demonstration of viral infection in brain biopsy or autopsy specimens. we define encephalopathy when csf is acellular and brain biopsy or autopsy specimens have failed to demonstrate viral infection. in some cases, this distinction is arbitrary and the case has borderline csf pleocytosis, or csf analysis was not performed due to malignant brain edema. a consistent observation is that patients with seasonal influenza-associated encephalopathy rarely have evidence of influenza viral rna in csf based on rt-pcr analysis. furthermore, there is no evidence of seasonal influenza virus infection of brain specimens. in one case series, only one out of 18 patients with acute seasonal influenza-associated encephalitis had influenza viral rna detected [5]. terminology for post-infectious encephalitis can be confusing. for example, the international pediatric multiple sclerosis study group [18] listed ten terms that have been used to describe acute disseminated encephalomyelitis (adem). some terms focus on the triggering event, such as post-infectious encephalomyelitis; others on pathologic or pathophysiologic features, such as acute demyelinating encephalomyelitis or hyperergic encephalomyelitis. these authors also classify acute hemorrhagic leukoencephalitis, acute necrotizing hemorrhagic leucoencephalitis (also known as acute necrotizing encephalitis, ane), and acute hemorrhagic encephalomyelitis as hyperacute forms of adem. these diagnostic terms are of great historical interest. they generally preceded modern neuroimaging and relied more on the clinical and pathologic details.
the study group also lumps a diversity of neuroimaging findings under the diagnosis of adem including: ring-enhancing lesions; diffuse and multi-focal regions of t2 hyperintensity with and without associated hemorrhage; multi-focal lesions with associated mass effect (tumefactive lesions); and images with symmetric, bithalamic edema. while we prefer one term (adem) rather than ten terms to describe post-infectious encephalitis, we are concerned that the pathophysiology and outcome of a process leading to the formation of ring-enhancing lesions (demyelinating, for example, acute demyelinating encephalomyelitis) must be radically different than that causing bithalamic edema (necrotizing, for example, ane). in reality, iae presents along a spectrum ranging from milder cases with normal neuroimaging to more malignant cases with abnormal neuroimaging and less favorable outcomes. for the sake of discussion and literature review, we present a simplified classification scheme based on clinical and imaging findings. the iae benign variant can present with fever, confusional state, and seizures but neuroimaging with ct brain or mri brain does not demonstrate any acute abnormalities. csf analysis is within normal limits or has borderline findings. rt-pcr testing for 2009 h1n1 influenza viral rna is positive in upper respiratory secretions but negative when csf is tested [19] [20] [21] . these patients typically recover within 1 week, and most cases have received oseltamavir and anticonvulsants. the initial reports of pediatric cases of 2009 h1n1 encephalopathy in the us were not severe [19] . similarly, other reported adult cases of 2009 h1n1 iae without ards have not been severe with complete recovery [20, 21] . a more recent pediatric case series of 2009 h1n1 iae reported that 2/4 patients had imaging abnormalities and neurologic sequelae [22] , so the treating physicians need to be aware that full recovery is not a certainty. the iae with splenial sign presents with acute febrile respiratory illness and additional neurologic symptoms with a characteristic mri abnormality. we found case reports associated with seasonal influenza but not with h1n1. it has been reported in children, but rarely in adults [23] [24] [25] [26] [27] [28] . encephalopathy is always present and can be severe. seizures are often present. mri imaging demonstrates increased t2 and flair signal and restricted diffusion in the splenium of the corpus callosum. this finding is reversible. the mri finding is not specific and has been reported with other infections, high-altitude brain edema, and certain metabolic states such as hypernatremia [29] . csf analysis is unremarkable. these patients have been treated with oseltamavir and anticonvulsants, and typically recover within 1 month. the iae with posterior reversible leucoencephalopathy syndrome (pres) presents as moderate to severe febrile encephalopathy. this subtype has been reported with seasonal influenza but not specifically with h1n1. the mri imaging appears radiographically identical to pres caused by more typical causes such as pregnancy or malignant hypertension [30, 31] . vascular caliber changes have been observed in these cases; this is non-specific and can be related to infectious vasculitis or pres. given the diverse causes of pres including malignant hypertension, pregnancy, metabolic disorders, and certain medications such as chemotherapeutics and immunosuppressants; it is often difficult to distinguish the pathophysiology of iae in this clinical setting. 
therapy is focused upon antiviral treatment, corticosteroid administration, and supportive care. iae with malignant brain edema is one of the most challenging subtypes to diagnose and treat. both seasonal influenza and h1n1 can be complicated by severe forms of acute encephalopathy and malignant brain edema [32] [33] [34] [35] . survival in some cases has been achieved with aggressive neuro-intensive case management with other therapies, including administration of antivirals, corticosteroids, immunoglobulin (2 gm/kg in adult patients), hyperosmolar therapy, plasmapheresis, and hypothermia in some cases. one of the goals of treatment is to reduce viral expression with early antiviral treatment and thereby to reduce stimulation of the host inflammatory response. our case presentation illustrates the rapid time course for this complication (see fig. 1 ) and neurocritical care treatment approaches. because of diffuse brain edema, a broad treatment approach using hyperosmolar therapy, intubation, fever control, and sedation were important. to the best of our knowledge, this is the only case description of iae in which an external ventricular drain was utilized, probably because it is difficult to place a catheter into the small, compressed ventricles of patients with diffuse brain edema associated with influenza. another adult case of h1n1 encephalitis has been reported with radiographic findings similar to ours. fugate et al. [35] described an adult with h1n1 influenza-associated acute hemorrhagic leukoencephalitis. like our patient, their case also showed confluent areas of increased t2 signal in the periventricular white matter and centrum semiovale. because of the additional finding of microhemorrhages demonstrated on gradient echo mri sequences, they diagnosed acute hemorrhagic leukoencephalitis or hurst disease. their patient also had restricted diffusion in the basal ganglia (see fig. 2 ). because their patient had severe adult respiratory distress syndrome (ards) with oxygen saturation readings in the range of 70-80%, the authors attributed the basal ganglia findings to hypoxic brain injury. our patient did not have advanced pulmonary disease, hypoxia, or hypotension. care should be taken to distinguish iae with malignant edema from reyes' syndrome in which patients may present with lethargy, confusion, seizures, or coma accompanied by brain edema. reyes' syndrome most commonly occurs in children but has been reported in adults following influenza and aspirin ingestion [36] . it can be distinguished based on the accompanying hepatic abnormalities, hyperammonemia, and hypoglycemia. caution should be taken with any neurosurgical procedures in reyes' syndrome due to increased risk of perioperative bleeding. one of the most devastating complications of seasonal and pandemic influenza is ane [37] [38] [39] . patients develop rapid neurologic deterioration to coma. seizures are often present. initial brain ct may show decreased density in the thalami, and mri of brain demonstrates the characteristic bilateral thalami lesions. this finding may be initially mistaken for ischemic strokes (top-of-basilar syndrome) or venous infarction secondary to thrombosed internal cerebral veins, vein of galen, or straight sinus. it is interesting that there have been case reports for recurrent ane and also familial ane. this suggests that there may be a genetic susceptibility and a gene associated with familial seasonal influenza ane cases has been reported (nuclear pore gene, ranbp2; [40] ). 
this condition is often fatal or accompanied by permanent neurologic sequelae in surviving cases. it is intriguing that the neuroanatomical changes found in the thalami, midbrain, and cerebellum on neuroimaging correlated with the clinical symptoms reported for encephalitis lethargica, specifically ''sleeping sickness'', ophthalmoparesis, quadriparesis, and delayed parkinsonism (see below). it is conceivable that survivors with less fulminant involvement could manifest a clinical syndrome with symptoms and signs that localize to brainstem structures. a pediatric case of 2009 h1n1-associated ane with bilateral thalamic imaging findings without associated malignant brain edema has been published [41] , but detailed clinical follow-up was not reported. during the subacute period, additional classic neurologic syndromes associated with influenza have been described. post-influenzal cerebellitis is quite uncommon and has been reported rarely in adults [42] [43] [44] . this syndrome was diagnosed in a 31-year old woman who developed ataxia, dysarthria, and truncal titubation 1 month after influenza b virus infection, with neurologic symptoms that resolved gradually after an additional month. ct and mri brain imaging were unrevealing. csf studies detected evidence of the persistence of the np gene of influenza b virus in the csf from samples taken 7 and 9 weeks after the onset of initial influenza illness. a 25-year old woman gradually developed gait and speech problems after influenza a illness that was treated with oseltamavir. csf showed pleocytosis. the cerebellar cortex had increased t2 signal which resolved over an 80 day period. she received pulse intravenous corticosteroid therapy. her symptoms resolved [42] . plasmapheresis [45] and ivig [46] have also been used for this condition. some cases of cerebellitis following viral and mycoplasma illness have developed fulminant cerebellar swelling with secondary brainstem compression, obstructive hydrocephalus, with fatal outcome [47] . interventions with posterior fossa decompression and external ventricular drain placement may lead to a favorable outcome in a child with this severe condition. antibodies to the glutamate receptor have been reported in patients with post-infectious influenza viral cerebellitis [44] . guillain-barre syndrome (gbs) is a subacute, immunemediated disease predominantly affecting the peripheral nervous system. the diagnosis and treatment are wellknown to most neurologists and this condition has been extensively described and reviewed. gbs has been rarely reported in association with seasonal influenza virus infection [48] , but it should be noted that influenza testing is rarely pursued in gbs cases and may be unrevealing. treatment for influenza-related gbs is identical to treatment for other gbs due to other associated causes. monitoring for respiratory compromise due to neuromuscular weakness with timely respiratory support if needed is critical. plasmapheresis or gammaglobulin treatments are also helpful. the precise pathophysiology is uncertain, but molecular mimicry of the infectious agent is presumed to stimulate autoimmune responses. this has been demonstrated to occur in campylobacter jejuni-associated gbs [49] . influenza-associated myositis has been reported with seasonal influenza [50] and h1n1 variant [51] . myalgias are a common symptom of influenza, but some patients develop frank weakness and have elevated serum levels of creatine phosphokinase (cpk). 
it is more common in children but has been seen in all age groups. the calf muscles are most suspectible, and patients may walk with a stiff gait or toe walk. onset is usually within the first week of infection and spontaneous improvement typically occurs within 2 weeks in most cases. rarely, severe cases can result in myoglobinuria-associated renal failure and compartment syndromes requiring fasciotomies. influenza can also selectively attack specific muscle groups such as the heart (myocarditis). muscle biopsy shows necrosis, regenerating fibers, and occasionally inflammation. post-viral parkinsonism has been reported after an assortment of infections including influenza virus [52] . an outbreak of these cases was temporally noted following the great influenza (h1n1) pandemic of 1918-1919 [53] . patients with this condition respond poorly to medical therapy, and it has an unfavorable prognosis. encephalitis lethargica is also known as von economo encephalitis and sleeping sickness [53] . a wave of such cases was reported following the 1918-1919 influenza a (h1n1) virus pandemic. the cardinal features of this condition are altered consciousness with prolonged somnolence and ophthalmoplegia. after intervals of months to years, survivors are at risk of developing parkinsonism. pathological findings include nerve cell destruction primarily in the midbrain, subthalamus, and hypothalamus [53, 54] . using modern laboratory techniques, formalin-preserved autopsy brain specimens of encephalitis lethargica cases analysed for influenza viral rna were negative [54] . scientists have proposed a ''hit-and-run'' model of early viral-mediated injury with late sequelae [54] . the neurologist oliver sacks [55] drew attention to this mysterious disorder and the discovery of l-dopa, in his book, awakenings later converted to a feature-length movie. the delayed appearance of restricted diffusion in the basal ganglia in our patient and others [35] is concerning for this condition (fig. 2) . we do not know if this indicates that our patient with 2009 h1n1 is at risk of developing postviral parkinsonism, but long-term clinical follow-up will be important. a delayed diffusion neuroimaging abnormality was also reported in the dentate nucleus of a patient with seasonal influenza encephalopathy/splenial sign [42] . we present a case of acute encephalitis associated with 2009 pandemic influenza a (h1n1) virus infection, complicated by malignant brain edema. the emerging hypothesis about acute neurologic complications of seasonal influenza is that the immune response triggered by influenza virus infection of the respiratory tract plays a prominent role in the pathogenesis of neurological manifestations. this hypothesis regarding the development of acute encephalopathy and brain edema is analogous to current theories about the role of the immune system and cytokines in the development of ards with 2009 h1n1 virus infection. we have also provided an overview of the spectrum of acute and post-infectious neurologic complications reported in association with seasonal and pandemic influenza virus infection of the upper respiratory tract. neurologists should be aware of the potential for a wide range of neurologic complications in association with the current 2009 h1n1 pandemic and seasonal influenza. 
1918 influenza, encephalitis lethargica, parkinsonism neuropathogenesis of influenza virus infection in mice pcr on cerebrospinal fluid to show influenza associated acute encephalopathy or encephalitis detection of influenza virus rna by reverse-transcription-pcr and proinflammatory cytokines in influenza-associated encephalopathy acute encephalopathy associated with influenza a virus infection th1 and th17 hypercytokinemia as early host response signature in severe pandemic influenza acute encephalopathy associated with influenza a infection in adults systemic cytokine responses in patients with influenza-associated encephalopathy cytokine profiles induced by the novel swine origin influenza a/h1n1 virus: implications for treatment strategies tumor necrosis factor-a, interleukin-1b, and interleukin-6 in cerebrospinal fluid from children with prolonged febrile seizures. comparison with acute encephalitis/encephalopathy sepsis causes neuroinflammation and concomitant decrease of cerebral metabolism microglia in health and disease microglia in diseases of the central nervous system astrocytes are active players in cerebral innate immunity hypothetical pathophysiology of acute encephalopathy and encephalitis related to influenza virus infection and hypothermia therapy highly pathogenic h5n1 influneza virus can enter the cns and induce neuroinflammation and neurodegeneration encephalitis in mice inoculated intranasally with an influenza virus strain originated from a water bird for the international pediatric ms study group. acute disseminated encephalomyelitis neurologic complications associated with novel influenza a (h1n1) virus infection in children novel influenza a (h1n1) presenting as an acute febrile encephalopathy in a mother and daughter surveillance of h1n1-related neurological complications neurological sequelae of 2009 influenza a (h1n1) in children: a case series observed during a pandemic influenza-associated encephalitis-encephalopathy with a reversible lesion in the splenium of the corpus callosum: case report and literature review influenzaassocaited encephalitis/encephalopathy with a reversible lesion in the splenium of the corpus callosum: a case report and literature review transient splenial lesion of the corpus callosum in clinically mild influenza-associated encephalitis/encephalopathy reversible splenial lesionin influenza virus encephalopathy a reversible lesion of the corpus callosum with adult influenza-associated encephalitis/encephalopathy: a case report mild influenza-associated encephalopathy/encephalitis with a reversible splenial lesion in a caucasian child with additional cerebellar features isolated and reversible lesions of the corpus callosum: a distinct entity posterior reversible encephalopathy syndrome and cerebral vasculopathy associated with influenza a infection: report of a case and review of the literature influenza a encephalopathy, cerebral vasculopathy, and posterior reversible encephalopathy syndrome: combined occurrence in a 3 year-old child case of adult influenza type a virus-associated encephalopathy successfully treated with primary multidisciplinary treatments elderly autopsy case of influenza-associated encephalopathy an adult autopsy case of acute encephalopathy associated with influenza a virus acute hemorrhagic leukoencephalitis and hypoxic brain injury associated with h1n1 influenza influenza a virus and reye's syndrome in adults acute necrotizing encephalopathy in a child with h1n1 influenza infection acute necrotizing 
encephalopathy in a child during the 2009 influenza a (h1n1) pandemia: mr imaging in diagnosis and follow-up infection-triggered familial or recurrent cases of acute necrotizing encephalopathy caused by mutations in a component of the nuclear pore, ran-bp2 mr imaging in novel influenza a (h1n1)-associated meningoencephalitis an adult case of acute cerebellitis after influenza a infection with a cerebellar cortical lesion on mri probable post-influenza cerebellitis acute cerebellar ataxia and consecutive cerebellitis produced by glutamate receptor delta2 autoantibody plasmapheresis improves outcome in postinfectious cerebellitis induced by epstein-barr virus brain spect imaging and treatment with ivig in acute post-infectious cerebellar ataxia: case report acute near-fatal parainfectious cerebellar swelling with favourable outcome guillain barre syndrome and influenza virus infection carbohydrate mimicry between human ganglioside gm1 and campylobacter jejuni lipooligosaccharide causes guillain-barre syndrome benign acute childhood myositis: laboratory and clinical features melting muscles: novel h1n1 influenza a associated rhabdomyolysis viral parkinsonism lack of detection of influenza genes in archived formalin-fixed, paraffin waxembedded brain samples of encephalitis lethargica patients from 1916 to 1920 new york: random house, inc key: cord-001512-u3u2k8hj authors: ding, hua; chen, yin; yu, zhao; horby, peter w; wang, fenjuan; hu, jingfeng; yang, xuhui; mao, haiyan; qin, shuwen; chai, chengliang; liu, shelan; chen, enfu; yu, hongjie title: a family cluster of three confirmed cases infected with avian influenza a (h7n9) virus in zhejiang province of china date: 2014-12-31 journal: bmc infect dis doi: 10.1186/s12879-014-0698-6 sha: doc_id: 1512 cord_uid: u3u2k8hj background: a total of 453 laboratory-confirmed cases infected with avian influenza a (h7n9) virus (including 175 deaths) have been reported till october 2,2014, of which 30.68% (139/453) of the cases were identified from zhejiang province. we describe the largest reported cluster of virologically confirmed h7n9 cases, comprised by a fatal index case and two mild secondary cases. methods: a retrospective investigation was conducted in january of 2014. three confirmed cases, their close contacts, and relevant environments samples were tested by real-time reverse transcriptase-polymerase chain reaction (rt-pcr), viral culture, and sequencing. serum samples were tested by haemagglutination inhibition (hi) assay. results: the index case, a 49-year-old farmer with type ii diabetes, who lived with his daughter (case 2, aged 24) and wife (case 3, aged 43) and his son-in-law (h7n9 negative). the index case and case 3 worked daily in a live bird market. onset of illness in index case occurred in january 13, 2014 and subsequently, he died of multi-organ failure on january 20. case 2 presented with mild symptoms on january 20 following frequent unprotected bed-side care of the index case between january 14 to 19, and exposed to live bird market on january 17. case 3 became unwell on january 23 after providing bedside care to the index case on january 17 to 18, and following the contact with case 2 during january 21 to 22 at the funeral of the index case. the two secondary cases were discharged on february 2 and 5 separately after early treatment with antiviral medication. four virus strains were isolated and genome analyses showed 99.6 ~100% genetic homology, with two amino mutations (v192i in ns and v280a in np). 
42% (11/26) of environmental samples collected in january were h7n9 positive. twenty-five close contacts remained well and were negative for h7n9 infection by rt-pcr and hi assay. conclusions: in the present study, the index case was infected from a live bird market while the two secondary cases were infected by the index case during unprotected exposure. this family cluster is, therefore, compatible with non-sustained person-to-person transmission of avian influenza a/h7n9. electronic supplementary material: the online version of this article (doi:10.1186/s12879-014-0698-6) contains supplementary material, which is available to authorized users. human infection with avian influenza a/h7n9 virus was first identified in march 31 of 2013, in china, a total of 453 confirmed cases were found in the world up to date [1] . the seasonal epidemiology is characterized to occur from november through april in china, coinciding well with both seasonal human influenza and h5n1 in birds [2] . almost all cases were hospitalized, and 1/3 of cases died. the fatality is much higher than that for seasonal influenza in the china (0.04%), but it is lower than for cases of h5n1 (60%) [3, 4] . current evidence suggests that human infection appears to be associated with exposure to infected live poultry or contaminated environments, including markets where live poultry are sold [5] [6] [7] . in the light of this opinion, the closure of live bird markets (lbm) has been associated with a reduction in the incidence of human infections [8] . despite the fact that h7n9 remains to be a zoonotic infection of avian origin, there are concerns that the virus show genotypic and phenotypic evidence of partial adaptation to mammals [9] . compared to other subtypes of avian influenza virus, h7n9 virus show increased binding affinity to mammalian-type receptors, and their amount grow up rapidly at the temperatures that are close to the normal body temperature in mammals (although it is lower than that of birds). in addition, they possess pb2 gene mutations that are associated with adaptation to mammals [10] [11] [12] . whilst sequence analyses had shown that the haemagglutinin (ha) and neuraminidase (na) genes of h7n9 virus detected in china show very high homology, whereas the genes for coding internal proteins are diversified [13] . ferret and mouse models confirm that strains isolated from humans could replicate efficiently in both mammalian and human airway cells, with efficient transmissibility by direct contact and modest transmissibility by respiratory droplets [14, 15] . given these signatures of partial adaptation to mammals, it is imperative to closely monitor and investigate all clusters of human h7n9 virus to determine the transmissibility and severity of virus infection, as well as its potential host and pathogen determinants. a few of family clusters of h7n9 infections (in shanghai, jiangsu, shandong, guangdong and beijing) have been described involving two family members. it was concluded that limited person-to-person transmission may occur following close, prolonged, and unprotected contact with the symptomatic index case, while sustained transmission was not found [16] [17] [18] . here we describe an additional cluster, comprised of three laboratory-confirmed cases of human infection with h7n9 virus reported in zhejiang province in january 2014. 
this is the largest reported cluster of virological confirmed h7n9 cases, and the full genome data of the virus were isolated from all cases and associated with clinical and epidemiological data and their close contacts. all three h7n9 confirmed cases and 25 adult contacts and surveillance cases had provided written consent for the participation in this study and the publication of their individual details. data collection for h7n9 cases was determined by the national for h7n9 cases was determined y f man of china, as a part of the continuing public health outbreak investigation; therefore, it was exempt from assessment by institutional review board. the protocol for collecting epidemiological data and conducting serological test of close contacts were approved by the institutional review board of the china cdc. suspected cases of human infection with h7n9 virus are identified through the chinese surveillance systems for influenza-like illness, severe acute respiratory illness (sari), pneumonia of unexplained origin, and clinical diagnostics of cases of pneumonia. based on the chinese guidance, an individual could be considered as a confirmed case of h7n9 virus infection if the presence of the h7n9 virus is verified by real-time reverse transcriptase polymerase chain reaction assay (rt-pcr), virus isolation, or serologic testing [19] . epidemiological and clinical data were collected through interviews and reviews of medical records between january 13 and 25, 2014. all three cases and their relatives were interviewed by public health staff to record their exposure history during the two weeks before the onset of symptoms, to validate the timeline of events and to identify close contacts. respiratory tract samples were collected from the index case, case 2, and case 3, on january 18, 22, and 23, respectively. environmental samples were collected from the lbm (a1 market) and the secondary wholesale markets (b1, c1, and d1 markets) and from a neighboring household where several chickens were bred. all samples were placed in sterile viral transport medium and shipped within 24 hours to the laboratory of zhejiang cdc at 4°c for h7n9 testing. viral rna was extracted using qiagen rneasy mini kit. real-time rt-pcr was used to detect influenza type a, subtype h7 and n9 using the protocol, specific primer and probe sets provided by china cdc [20] . specimens were also tested by rt-pcr for the presence of seasonal influenza virus (h1, h3, and b) and h5n1 virus. complete genomic fragments of the h7n9 virus were amplified directly from clinical samples, and sequencing was performed using an abi 3730xl automatic dna analyzer. the nucleotide sequences were determined by dideoxy sequencing using an abi prism bigdye terminator cycle sequencing kit as previously described [21] . nucleotide sequences were analyzed with the dnastar package (lasergene, madison, wi, usa). phylogenetic analysis was done by neighbor-joining method with mega (version 5.2). close contacts were placed under daily active surveillance for fever and respiratory symptoms, which was last for seven days after their last exposure to the h7n9infected case. close contacts were defined as individuals who had close contact (<1 meter) with any case without the use of personal protective equipment at any time before illnesses onset to the time of isolation of the case in hospital. antiviral chemoprophylaxis was neither recommended nor provided to contacts. 
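the phylogenetic step described above (neighbour-joining trees built in mega version 5.2 from the sequenced segments) can be sketched with any distance-based tree builder; the minimal example below uses biopython instead of mega, and the alignment file name and the identifiers inside it are hypothetical placeholders rather than the study's actual data.

```python
# Minimal neighbour-joining sketch (Biopython), standing in for the authors' MEGA 5.2 workflow.
# The alignment file name and the sequence identifiers inside it are hypothetical placeholders.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# A pre-computed multiple sequence alignment of one genome segment (e.g. HA) for the
# four human isolates plus reference strains retrieved from GenBank/GISAID.
alignment = AlignIO.read("ha_segment_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")           # pairwise identity distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)              # neighbour-joining tree

Phylo.write(nj_tree, "ha_segment_nj_tree.nwk", "newick")
Phylo.draw_ascii(nj_tree)                              # quick text rendering for inspection
```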
following written informed consent, a structured questionnaire was used to gather demographic information and data on use of personal protective equipment, antiviral chemoprophylaxis, symptoms, and potential risk factors for h7n9 infection during the two weeks starting from their last exposure to h7n9-infected cases. respiratory specimens for h7n9 testing were taken from close contacts with a febrile respiratory illness occurred during the 7-day observation period. contacts were asked to provide a single convalescent serum collected ≥ 3-4 weeks after their last exposure to a case with h7n9. h7n9 serological testing was done by hi assay using a modified horse red-blood-cell assay, recommended by the who. the antigen used for the hi assays was the a/zhejiang/1/2013(h7n9) strain. a hi titer ≥ 1:40 in single serum sample and a four-fold or greater rise in titer in paired sera was defined as seropositive. the index case, a 49-year-old farmer with type ii diabetes, taking antidiabetic drugs for one year, had been unwell since january 13, 2014, with fever (39.6°c) and flank pain. after consulting a health care clinic (a1 clinic) on january 14 and 15, he was treated as an outpatient with ciprofloxacin and intravenous amoxicillin/ clavulanate potassium. on january 16, he made a further consultation at a local hospital (b1 hospital) owing to persistent fever. chest radiography showed a leftlower-lobe pneumonia; meanwhile, treatment with ciprofloxacin was continued. peripheral blood cell count was normal. on january 17, index case's condition was worsened and again medical advice was sought; therefore, index case was admitted to a different hospital (hospital c1). upon admission in hospital c1, he had severe leucopenia, lymphopenia and thrombocytopenia (table 1) . he was diagnosed with community acquired pneumonia with a left pleural effusion. on january 18, he consulted at hospital d1 (a more advanced hospital) where a sputum sample was collected and sent to zhejiang cdc for microbiologic testing. on january 19, h7n9 virus-specific rna was detected by rt-pcr (ct value 29) in the sputum sample. once the h7n9 virus infection was confirmed, the patient was transferred from hospital c1 to d1 immediately. at hospital d1, he was isolated in a single room, where he was intubated, mechanically ventilated and commenced on oseltamivir (75 mg, twice daily by nasogastric tube) and peramivir (600 mg, once daily, intravenously). on january 20, the patient died of acute respiratory distress syndrome (ards) and multi-organ failure (table 1, figure 1 , figure 2 , and additional 1: figure s1 ). case 2 (index case's daughter, figure 1 ), a 24-year-old female with no underlying diseases, developed a throat sore and cough on january 20, the day her father died. she initially consulted the healthcare clinic in hospital a1 due to constant fever on january 22, where she was treated with antibiotics (amoxicillin) and then transferred to hospital d1 for further examination, where sputum and throat swab samples were taken. rt-pcr was conducted on the samples of sputum and throat swabs on the january 23, and the influenza a/h7n9 specific rna (ct value 34) was positive. afterwards, case 2 was admitted directly to an isolation room at the hospital d1 and commenced on oral oseltamivir (75 mg, twice daily), and intravenous peramivir (300 mg, once daily). on admission her peripheral blood count, serum blood biochemistry, and chest ct scan were normal (table 1) . 
she was given supplemental oxygen via nasal cannula with a flow rate of 1-3 l/min, whereas her oxygen saturation was 99%. her condition remained stable, and symptoms were improved during hospitalization. later, she was completely recovered and was discharged on february 2 after sputum samples tested negative for h7n9 rna by rt-pcr on january 30 and february 1 (table 1, figure 1 ). case 3 (index case's wife and case 2's mother), a 43year-old female farmer, with no underlying diseases, developed an acute cough with expectoration on january 23. she attended the hospital d1 where a throat swab was collected and an rt-pcr assessment was conducted on the throat swab sample, which was positive for h7n9 (ct value of 37). she was admitted to the hospital d1 on january 23. although chest radiography was normal, she was treated empirically with oral oseltamivir (75 mg, twice daily) and intravenous peramivir (300 mg, once daily). results of peripheral blood cell count, serum electrolytes, renal and liver function, and coagulation profiles were normal. arterial blood gas results were normal while the patient was breathing room air. the case remained stable during her admission and then she was discharged on february 5 after sputum tested h7n9 negative by rt-pcr on february 4 (table 1, figure 1 ). the husband of case 2, who had been in close contact with the index case and case 2, had no respiratory symptoms, and throat swabs and paired serum samples were negative ( figure 1 ). rt-pcr-positive throats swabs or sputum samples were obtained on 5 days, 2 days and 1 day of illness for the index case, case 2, and case 3, respectively. from these samples, four complete full genome sequences were amplified. sequence analyses indicated that the four isolates were highly homologous the other h7n9 strains previously identified in shanghai, jiangsu, anhui province, and with candidate vaccine strains (sharing 99.61 00% identity in amino acid sequences of all 8 segments). the four sequences from the three confirmed h7n9 case shared 96.4~99.6% homology with the animal isolates (a/chicken/zhejiang/sd019/2013), and phylogenetic analysis showed that the four isolates were almost genetically identical to other h7n9 virus isolated from the other provinces and chickens. furthermore, amino acid analyses showed that the ha gene of all four strains possessed the mutation 226 l, indicating high affinity to human receptor alpha 2-6 sialic acid receptors. it showed that the four isolates were entirely of human origin, and na protein possessed amino acid sequences associated with susceptibility to neuraminidase inhibitor antiviral drugs (h294 and e120 and h276 in na). the 8 fragments isolated from the index case, and two secondary cases were identical except for three non-synonymous amino substitutions identified in the index case. these were g574a nucleic acid substitution (aa mutation v192i) in the ns gene, g1095a nucleic acid substitution in the pb1 gene (nonsense amino mutation) and c839t nucleic acid substitution (v280a) in the np gene. (figure 3 , additional file 1: figure s2 , table 2 and table 3 ). the index case, his daughter (case 2), his wife (case 3), and his son-in-law (the husband of case 2) lived in separate rooms of one large house with three floors. there were no domestic animals and birds within the home or in the immediate vicinity of the home. 
however, two neighboring families located 100 meters and 500 meters respectively from the cases' homebred ducks and chickens, and there were a several free-range domestic poultry in the village. the index case and case 3 worked in the lbm (a1 market), selling vegetables and bird eggs between 4 am and 7 pm during the two weeks prior to the illness onset in the index case. furthermore, two weeks before the illness onset, the index case had visited a wholesale lbm (d1 market) to buy vegetables twice per week (each time he stayed there for 3 hours). the last known exposure date of the index case in a1 market was january 12 (15 hours), and the last exposure date of case 2 to a1 market was on january 17 for around three hours. in total. case 3 had been exposed to the live bird market on three occasions for a total of 22 hours from january 17-20 as follows: case 3 visited d1 market for a total of three hours between january 16-18 and she worked in a1 market for 8, 10 and, 1 hours on january 17, 18, and 20, respectively. the index case became ill on january 13 and was admitted to hospital on january 17. between january 13 and 17, cases 2 and 3 lived together with the index case in one house. furthermore, case 3 and index case had very close contact between january 13 and 17, sleeping together in one room. case 2 had three hours faceto-face contact with the index case on january 14. on january 17, case 2 provided bedside care in the hospital for the index case for approximately 12 hours. between january 17 and 19, cases 2 and 3 provided bedside care to the index case in hospital without any personal protective equipment for approximately 30 hours and 7 hours, respectively, including washing, cleaning his body, change his clothes and disposing urine and feces of the index case. during this period, the index case had high fevers (39.9°c), frequent coughing, and extensive sputum production. after the index case had been confirmed h7n9 infection on january 19, he moved to icu for treatment and isolation. cases 2 and 3 visited the index case for four hours on january 19, wearing facemasks. case 3 had frequent close contact with case 2 during the funeral ceremony of the index case on january 21-22. case 3 visited case 2 on the january 23 when case 2 was hospitalized with mild symptoms; case 3 wore a facemask during this visit. a summary of the cases' exposure to each other was shown in table 4 . the results of rt-pcr assay of environment samples were listed as follows: 4 swabs of chicken and duck eggs from the index case working site (a1 market) were h7n9 negative; 3 of 5 environmental samples from secondary live wet market (b1 market, a wholesale for a1 market) were positive for h7n9; 1 of 2 sewage samples from c1 market (located nearby b1 market) were positive for h7n9; 10 of 20 environmental samples taken from d1 market through routine avian influenza surveillance were h7n9 positive; 12 environmental samples from the area where neighbors were breeding poultry were all h7n9 negative; 11 of 26 environmental samples from different live birds markets under routine surveillance in xiaoshan district were h7n9 positive during january 2014 (source: unpublished data from the zhejiang avian surveillance system, additional file 1: figure s3 ). none of the 25 close contacts developed acute respiratory symptoms during the seven days surveillance period. 
throat swabs collected from all twenty-five close contacts on january 24 were negative for influenza a/ h7n9 virus by rrt-pcr, and all serum samples tested negative for h7n9 antibodies (titer < 1:40) by microneutralization and horse red-blood-cell hi assays (see table 5 ). no close contacts were reported taking oseltamivir chemoprophylaxis. here we describe a family cluster of three confirmed cases of h7n9 virus infection, involving a fatal index case, his wife and daughter (both survived). the index case presented with severe pneumonia and died of ards and multi-organ failure. the presence of chronic diseases has been associated with an increased risk of hospitalization with h7n9 virus infection [22] , and the index case had pre-existing diabetes, which requires oral anti-diabetic medication. another factor that may have played a role in the severity of disease was the late diagnosis of h7n9 virus infection and the late commencement of anti-viral therapy. the efficacy of neuraminidase inhibitors (nais) in reducing the risk of mild influenza infection progressed to severe illness has not been fully assessed in randomized controlled trials; however, observational data suggest that early treatment with nais of hospitalized patients with influenza infection is associated with better outcomes [23] . the other two cases i308v 308 i i i v i i i i i t618k 618 t t t t t t t t t single letters refer to the amino acid (aa) found in the noted protein at a specific site. *the numbering starts with the first condon of methionine for these proteins. were previously reported healthy, and presented with lower viral loads and mild symptoms that did not progress. both patients received early antiviral treatment, but it is not possible to determine whether the lack of clinical progression was result from antiviral treatment or as a consequence of a naturally indolent course [24] [25] [26] . since there were no functionally important differences in the genotype of the virus infecting the three cases, viral virulence is not likely to contribute the differential severity. who evaluates all clusters of human cases of nonseasonal influenza virus to determine whether humanto-human transmission or common exposure to infected animals or contaminated environments may have occurred [19] . the homology of all eight gene segments was between 99.6~100%, suggesting it was either a common source exposure or a person-to-person transmission. whilst all three individuals were exposed to potentially contaminated market environments within a putative maximum incubation period of 7 days, case 2 and case 3 had extensive unprotected exposure to the index cases when he was ill. we believe that most likely explanation for this family cluster is that the index case was infected from the live bird market, and the virus was transmitted directly from the index case to his daughter and his wife. several reasons could explain for this conclusion, as follows: (1) 7 days prior to illness onset in the index case, he had not been in contact with any people with a febrile illness and other confirmed cases, but was frequently exposed to the a1 live bird market for 9 hours daily and to the d1 secondary live bird market. although the a1 market was h7n9negative based on the environments samples collected on january 24, 2014, the samples from wholesale market b1 that supplied a1 market were h7n9 positive. 
furthermore, 42.30% (11/26) of environments samples from different live bird markets under routine surveillance in xiaoshan district during the same period were h7n9 positive (source: unpublished data from the zhejiang avian surveillance system); (2) case 2 stayed with the index case and provided beside bed medical care frequently on the january 14, 16, and 17-19. she had close unprotected contact with the index case for cleaning and washing his body on january 18 without any personal protection when the index case had severe symptoms such as high fever and cough. although case 2 had visited the a1 live bird market for three hours in three days prior to her illness onset, she reported no direct contact with live birds or poultry products. (3) there were multiple potential sources of infection for case 3, including the index case, the live market a1, and case 2. however, the index case and case 3 shared the same room every day and worked closely together after the illness onset in the index case. most importantly, case 3 provided beside bed care to the index case including washing his body, dealing with his secretions, and changing his clothes for him, without any personal protective equipment. the day numbers between the onset of illness in the index case and the onset of illness in the secondary cases (the serial interval) was 7 and 10 days, [27, 28] . furthermore, sequence analysis showed that four strains isolated from the three cases were genetically similar to each other. all four isolates possessing amino acids q226l and g228s in the ha segment were associated with increased affinity for human receptors (α-2, 6linked sialo-saccharides) [29] . virus from all three cases possessed p42s in ns and e627k and d701n in pb2 (which were associated with increased virulence in mice) and i368v and h99y in pb1 (which was associated with aerosol transmission of avian virus between ferrets) [7, 8] . there were only two amino differences (v192i in ns and v280a in np) between the virus infecting the index case and the secondary cases. those two mutations are not associated with any known functional change. therefore, field investigation and h7n9 full genomics analyses supported the secondary cases acquired infection most likely from the index case. person-toperson transmission of h7n9 has been reported [30, 31] . previous animal experiments (ferrets, mice, and pigs) also indicate that h7n9 virus possess the capability to bind to both avian and human receptors and it might be transmissible by respiratory droplets under certain conditions [12, 15] . our findings indicate that the virus has not gained the ability for efficient sustained transmission from person to person [12] . in this study, four close contacts and 21 frequent contacts were negative for h7n9 infection by hi testing and rt-pcr. although the husband of case 2 had close contact with the index case, case 2, and case 3 without any personal protective equipment, he showed no evidence of infection with the h7n9 virus. there were several limitations in this paper. firstly, h7n9 positive samples in environmental or bird samples were not found from a1 live bird market where the index case and case 3 were working. secondly, the full genetic sequence of h7n9 virus detected in the environment and live birds could not obtained. thus it is not able to compare human, avian, and environmental strains. on the basis of experiences of controlling of h5n1 and h7n9 virus, continued risk assessment, surveillance, and vigilance are required. 
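the serial intervals quoted above (7 and 10 days) follow directly from the onset dates reported for the three cases (january 13, 20 and 23, 2014); a minimal sketch of that date arithmetic is shown below.

```python
# Serial intervals for the family cluster, from the symptom-onset dates stated in the text
# (index case: 13 January; case 2: 20 January; case 3: 23 January 2014).
from datetime import date

onset = {
    "index case": date(2014, 1, 13),
    "case 2": date(2014, 1, 20),
    "case 3": date(2014, 1, 23),
}

for secondary in ("case 2", "case 3"):
    interval = (onset[secondary] - onset["index case"]).days
    print(f"serial interval, index case -> {secondary}: {interval} days")

# prints 7 days for case 2 and 10 days for case 3, matching the values quoted above
```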
a high degree of clinical awareness is necessary for people with possible h7n9 infection, especially for health workers who are occupationally exposed to poultry and for people with respiratory illness following recent contact with live poultry or live bird markets [32, 33]. here we report the largest family cluster of confirmed h7n9 in china to date, in which the index case was fatal while the secondary cases were mild. regarding the source of infection, the index case was infected at a live bird market, and the index case then infected the two secondary cases during frequent unprotected exposure. this family cluster supports the view that transmission of avian influenza a/h7n9 was limited and not sustained. all h7n9 reference isolates in the additional materials were downloaded from genbank (http://www.ncbi.nlm.nih.gov/nuccore/?term=h7n9++and+china) and the global initiative on sharing all influenza data (gisaid) (http://platform.gisaid.org/epi3/frontend#50dda5). additional file 1: figure s1. family pedigree showing the three h7n9-affected individuals and their close contacts. figure s2. phylogenetic analysis of six segments (mp, np, ns, pa, pb1, and pb2) from the four h7n9 isolates in three confirmed cases of a family cluster in hangzhou, zhejiang province, china, in january of 2014. figure s3. all authors have declared: no support from any organization for the submitted work; no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. authors' contributions hd, yc, zy, pwh, ec, and hy designed the study. fw, jh, and xy conducted the field investigation and analyses. hm, sq and cc collected and tested the samples and performed the sequence analyses. sl and pwh wrote the first draft and all authors contributed to the review and revision of the report. ec and hy are guarantors. all authors read and approved the final manuscript.
human infections with avian influenza a(h7n9) virus h7n9: preparing for the unexpected in influenza writing committee of the second world health organization consultation on clinical aspects of human infection with avian influenza av pandemic characteristics and controlling experiences of influenza h1n1 virus 1 year after the inception in hangzhou amino acid substitutions in polymerase basic protein 2 gene contribute to the pathogenicity of the novel a/h7n9 influenza virus in mammalian hosts dynamic reassortments and genetic heterogeneity of the human-infecting influenza a (h7n9) virus human infection with avian influenza a(h7n9) virus re-emerges in china in winter live-animal markets and influenza a (h7n9) virus infection pathogenesis and transmission of avian influenza a (h7n9) virus in ferrets and mice limited airborne transmission of h7n9 influenza a virus between ferrets novel avian-origin human influenza a(h7n9) can be transmitted between ferrets via respiratory droplets the k526r substitution in viral protein pb2 enhances the effects of e627k on influenza virus replication environmental connections of novel avian-origin h7n9 influenza virus infection and virus adaptation to the human epidemiology of human infections with avian influenza a(h7n9) virus in china family outbreak of severe pneumonia induced by h7n9 infection surveillance of the first case of human avian influenza a (h7n9) virus in beijing probable person to person transmission of novel avian influenza a (h7n9) virus in eastern china, 2013: epidemiological investigation one family cluster of avian influenza a(h7n9) virus infection in characterization of h7n9 influenza a viruses isolated from humans infectivity, transmission, and pathology of human-isolated h7n9 influenza virus in ferrets and pigs human infection with a novel avian-origin influenza a (h7n9) virus comparison of patients hospitalized with influenza a subtypes h7n9, h5n1, and 2009 pandemic h1n1 determinants of antiviral effectiveness in influenza virus a subtype h5n1 risk assessment on the epidemics of human infection with a novel avian influenza a (h7n9) virus in jiangsu province global concerns regarding novel influenza a (h7n9) virus infections analysis of the clinical characteristics and treatment of two patients with avian influenza virus (h7n9) comparative epidemiology of human infections with avian influenza a h7n9 and h5n1 viruses in china: a population-based study of laboratory-confirmed cases probable longer incubation period for human infection with avian influenza a(h7n9) virus in jiangsu province receptor binding by an h7n9 influenza virus from humans probable person-to-person transmission of avian influenza a (h5n1) three indonesian clusters of h5n1 virus infection in 2005 genomic signature and protein sequence analysis of a novel influenza a (h7n9) virus that causes an outbreak in humans in china human infection with avian influenza a h7n9 virus: an assessment of clinical severity we thank all of staff at zhejiang provincial and hangzhou municipal cdc, xiaoshan district cdc, for their help in field investigation and collection of environmental samples. the views expressed are those of the authors and do not necessarily represent the policy of the china cdc. note: data are median (iqr) or n (%). *including direct contact (touching), preparation, cooking, and consumption of well-appearing poultry. key: cord-002972-ge7qt256 authors: torner, núria; martínez, ana; basile, luca; mosquera, mmar; antón, andrés; rius, cristina; sala, m. 
rosa; minguell, sofia; plasencia, elsa; carol, mónica; godoy, pere; follia, núria; barrabeig, irene; marcos, m. angeles; pumarola, tomàs; jané, mireia title: descriptive study of severe hospitalized cases of laboratory-confirmed influenza during five epidemic seasons (2010–2015) date: 2018-04-14 journal: bmc res notes doi: 10.1186/s13104-018-3349-y sha: doc_id: 2972 cord_uid: ge7qt256 objective: the plan of information on acute respiratory infections in catalonia (pidirac) included the surveillance of severe hospitalized cases of laboratory-confirmed influenza (shclci) in 2009. the objective of this study was to determine the clinical, epidemiological and virological features of shclci recorded in 12 sentinel hospitals during five influenza seasons. results: from a sample of shclci recorded during the 5 influenza epidemics seasons from 2010–2011 to 2014–2015, cases were confirmed by pcr and/or viral isolation in cell cultures from respiratory samples. a total of 1400 shclci were recorded, 33% required icu admission and 12% died. the median age of cases was 61 years (range 0–101 years); 70.5% were unvaccinated; 80.4% received antiviral treatment (in 79.6 and 24% of cases within 48 h after hospital admission and the onset of symptoms, respectively); influenza virus a [37.9% a (h1n1)pdm09, 29.3% a (h3n2)] was identified in 87.7% of cases. surveillance of shclci provides an estimate of the severity of seasonal influenza epidemics and the identification and characterization of at-risk groups in order to facilitate preventive measures such as vaccination and early antiviral treatment. electronic supplementary material: the online version of this article (10.1186/s13104-018-3349-y) contains supplementary material, which is available to authorized users. influenza is an infectious disease affecting mainly upper respiratory tract worldwide. influenza virus causes between three and five million severe cases and an estimated 250,000-350,000 deaths annually. in the european union, there are between 40,000 and 220,000 annual deaths attributable to influenza. however, mortality is only the tip of the iceberg in terms of the disease burden, since influenza also causes a decrease in functional status and increased dependency in the elderly [1] . estimating the burden of disease caused by influenza is difficult because many cases do not require medical care, or no confirmatory laboratory tests are widely performed to all influenza like illness' cases [2, 3] . in catalonia, influenza surveillance is conducted through the plan of information on acute respiratory infections in catalonia (pidirac) based on the network of sentinel physicians, who provide information on patients with influenza symptoms [4] . given the situation generated by the 2009 pandemic caused by the new influenza a (h1n1) pdm09 virus, the pidirac sentinel network included surveillance of severe hospitalized cases of laboratory-confirmed influenza (shclci) to assess severity. the pidirac sentinel surveillance network has a primary care sentinel network made up by 60 gps and pediatricians who inform on a daily basis of all ili attended and perform sampling of respiratory swabs for confirmation. this information allows to plot weekly ili incidence and 12 sentinel hospital facilities who notify on a weekly basis all influenza confirmed cases that meet the ecdc definition for severe influenza [5, 6] . 
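as a rough illustration of how daily sentinel reports of ili can be rolled up into the weekly ili incidence mentioned above, the sketch below aggregates invented daily counts over iso weeks against an assumed population covered by the sentinel practices; all numbers are placeholders for illustration only.

```python
# Rolling hypothetical daily sentinel ILI counts up to a weekly incidence per 100,000.
# The daily counts and the covered population are invented, for illustration only.
from collections import defaultdict
from datetime import date

covered_population = 60_000  # assumed population covered by the sentinel practices

daily_reports = [               # (reporting date, ILI consultations seen that day)
    (date(2014, 12, 1), 4), (date(2014, 12, 2), 6), (date(2014, 12, 3), 5),
    (date(2014, 12, 4), 7), (date(2014, 12, 5), 9), (date(2014, 12, 8), 12),
    (date(2014, 12, 9), 10), (date(2014, 12, 11), 15),
]

weekly_cases = defaultdict(int)
for day, cases in daily_reports:
    iso_year, iso_week, _ = day.isocalendar()   # group by ISO reporting week
    weekly_cases[(iso_year, iso_week)] += cases

for (iso_year, iso_week), cases in sorted(weekly_cases.items()):
    rate = cases / covered_population * 100_000
    print(f"{iso_year}-W{iso_week:02d}: {cases} ILI cases, {rate:.1f} per 100,000")
```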
this surveillance allows the clinical and epidemiological characteristics and risk factors associated with greater severity to be determined, and the emergence of influenza virus strains with clinical characteristics and behaviours outside the normal range to be detected, in order to correctly prioritize and direct preventive and control measures during the influenza season [7]. the aims of shclci surveillance are to provide an estimate of the severity of seasonal influenza epidemics, to identify and characterize the risk groups that may present serious complications as a result of infection by circulating influenza viruses or their association with underlying diseases, and to identify the virological characteristics of viruses associated with these severe cases, such as genetic and/or antigenic changes that lead to increased virulence. the aim of this study was to describe the clinical, epidemiological and virological characteristics of shclci based on data collected in five influenza seasons in catalonia. epidemiological surveillance of severe cases of influenza was conducted in catalonia during five epidemic influenza seasons (2010-2015), from week 40 of each season until week 20 of the following year, with shclci recorded by twelve hospitals (covering 95.3% of the population) from the pidirac sentinel network and reported to the epidemiological surveillance units corresponding to each hospital [5, 7]. shclci cases were cases with previous influenza-like illness symptoms (sudden onset of symptoms and/or fever; malaise; headache; muscle pain; and/or cough; sore throat; shortness of breath) who presented to a hospital facility and complied with the shclci case definition. shclci was defined as a severe case of laboratory-confirmed influenza due to the influenza virus (a, a (h1n1)pdm09, b, c) that required hospitalization because of pneumonia, septic shock, multiorgan failure or any other severe condition, including icu admission, or that developed clinical signs during hospitalization for other reasons. influenza diagnosis was confirmed by polymerase chain reaction (pcr) and/or culture of nasopharyngeal swabs. respiratory tract samples were processed within 24 h of receipt at the laboratory. a 300 μl aliquot was taken for total nucleic acid extraction and eluted in 25 μl of rnase-free elution buffer using the automatic qiasymphony system (qiagen, hilden, germany) according to the manufacturer's instructions. subsequently, two specific one-step multiplex real-time pcr assays were carried out on the stratagene mx3000p qpcr systems (agilent technologies, santa clara, ca, usa) for typing influenza a/b viruses and subtyping influenza a viruses [8, 9]. for each reported case, an epidemiological survey was made to collect anonymized demographic variables (age and sex); risk factors; icu admission; day of onset of symptoms, hospital admission and discharge; vaccination history; influenza virus type and subtype; and outcome at hospital discharge. the epidemiological survey was conducted by the preventive medicine physician, using data from the medical history registry, and the public health epidemiologist in charge. we studied all data on shclci from five influenza seasons in pidirac sentinel network hospitals and made a comparative analysis of viral types and subtypes. the strain identified in > 50% of cases in each season was considered the predominant strain. duration of hospital stay was divided into two categories: < 10 days and 10 days or more.
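the per-case survey variables and the length-of-stay dichotomisation described above can be pictured as a simple record structure; the sketch below is a hypothetical illustration (the field values are invented), not the actual pidirac data model.

```python
# A minimal per-case record mirroring the survey variables listed above, with the
# < 10 days / >= 10 days length-of-stay split. The example values are invented.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class SevereInfluenzaCase:
    age: int
    sex: str
    risk_factors: List[str]
    icu_admission: bool
    onset: date
    admission: date
    discharge: Optional[date]
    vaccinated: Optional[bool]     # None when vaccination status is missing
    virus: str                     # e.g. "A(H1N1)pdm09", "A(H3N2)", "B"
    died: bool

    def length_of_stay_category(self) -> Optional[str]:
        if self.discharge is None:
            return None
        los_days = (self.discharge - self.admission).days
        return "< 10 days" if los_days < 10 else ">= 10 days"

example = SevereInfluenzaCase(
    age=72, sex="M", risk_factors=["diabetes"], icu_admission=True,
    onset=date(2014, 1, 10), admission=date(2014, 1, 13),
    discharge=date(2014, 1, 27), vaccinated=False, virus="A(H3N2)", died=False,
)
print(example.length_of_stay_category())   # ">= 10 days" (a 14-day stay)
```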
the statistical analysis was made using the chi square test and student's t test with 95% confidence intervals (ci) for continuous variables and the anova test for categorical variables. during the 2010-2015 seasons, 1400 cases of shclci were recorded, of which 462 (33%) required icu admission and 167 (12%) died; 778 (55.6%) were male. the median age was 61 years (range 0-101 years; mean 55.2 years, sd 26.7 years). the most-affected age group was the ≥ 65 years age group, with 633 cases (45.2%) (table 1). the median age of the ≥ 65 years age group was 79 years (range 65-101) and the mean age was 78.7 years (sd 7.8 years); 296 (47%) were aged ≥ 80 years. of deaths, 111 (66.5%) occurred in patients aged ≥ 65 years and 55 (33.3%) in patients aged > 80 years (table 1). the distribution by type of influenza virus was: 87.7% (1228) influenza virus a, of which 531 (37.9%) corresponded to the a (h1n1)pdm09 subtype, 410 (29.3%) to a (h3n2), and 20.5% to influenza a that remained unsubtyped; 172 (12.3%) of cases were influenza b (additional file 1). there were significant differences in the mean age of cases according to virus type, with a higher prevalence of virus a (h3n2) in older patients and virus a (h1n1)pdm09 in younger patients, with mean ages of 66.9 and 46.8 years, respectively (p < 0.001), and, among those with death as the outcome, 78.8 and 60.2 years, respectively (p < 0.001) (table 1). in 1384 (98.9%) of shclci there was a known risk factor. the most prevalent risk factors were cardiovascular disease, chronic obstructive pulmonary disease and diabetes (25.5, 23.4 and 20.5%, respectively). the most prevalent complication was pneumonia, in 992 (71.7%) cases, of which 304 (30.6%) presented bacterial superinfection. for cases with known influenza immunization status, 682/967 (70.5%) were not vaccinated for the current season included in the study (missing data on vaccination status: 433, 31%). the age group with the highest vaccine coverage was the older than 65 age group (57%), and cases with at least one risk factor had low vaccination coverage (20.5%). vaccination proved effective in reducing intensive care unit (icu) admission [or = 0.64 (95% ci 0.47-0.88), p = 0.003] (table 2). of the 21 pregnant women hospitalized as shclci, all were unvaccinated, 14 (66.7%) required icu admission, 19 (90.5%) received antiviral treatment and none of them had any underlying disease or risk factor other than pregnancy. the mean hospital stay was 13.8 days (sd 17.9) with a median of 9 days (range 1-374 days). the mean stay by age group was: 0-4 years 7. a total of 1125 cases (80.4%) had information on antiviral treatment: 1113 (99%) received oseltamivir and 12 (1%) zanamivir. 863 of these cases (79.6%) received treatment in the first 48 h after admission. antiviral treatment administered within 48 h of admission was associated with a shorter length of stay (los) (or 0.25; ci 0.18-0.34, p < 0.001); nosocomial cases (41) were excluded from this analysis (table 3). after the 2009 influenza virus a(h1n1)pdm09 pandemic, among the lessons learned was the need to expand surveillance of seasonal influenza to include severe cases in order to determine the characteristics of shclci caused by seasonal influenza viruses circulating during each season.
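as a back-of-the-envelope illustration of the kind of calculation behind the odds ratios reported above (for example, or = 0.64, 95% ci 0.47-0.88 for vaccination and icu admission), the sketch below computes an odds ratio with a woolf (log) confidence interval from a 2 x 2 table; the counts are hypothetical, since the underlying contingency table is not reproduced here.

```python
# Odds ratio with a Woolf (log) 95% confidence interval from a 2 x 2 table.
# The counts below are hypothetical placeholders, not the study's actual table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical example: vaccinated (exposed) versus unvaccinated against ICU admission.
or_value, ci_low, ci_high = odds_ratio_ci(a=70, b=215, c=250, d=430)
print(f"OR = {or_value:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

the same numbers can be obtained with standard statistical packages; the point of the sketch is only to make the arithmetic behind the reported interval explicit.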
the results obtained by the pidirac sentinel surveillance system during five post-pandemic seasons underscore the importance of prevention by vaccination in order to avoid serious complications such as ards and icu admission of the most vulnerable persons, while showing the need for increased vaccination coverages in groups such as pregnant women, in whom the proportion of icu admission is 66.7% while vaccination is zero [7, 10] . in our study no significant differences between influenza a and b virus infections among hospitalized cases was observed, except for younger age for a (h1n1) pdm09 cases; similar results also found by other studies in the united states and australia [11] [12] [13] . although the number of hospitalizations associated with influenza a virus infections was greater than the number with influenza b virus infections this fact can be explained by greater prevalence of influenza a viruses circulating in the community during the seasons included in the study. the delay in the administration of antiviral drugs at symptom onset in people with an identified risk of complications, such as the elderly or people with medical conditions that worsen the prognosis of influenza or make a longer hospital stay likely, also demonstrates the need to confirm influenza in primary healthcare and administer treatment within 48-72 h for it to be effective. influenza remains an important global public health problem in spite of scientific evidence which support immunization to protect those at high risk for complications, such as the elderly [1] . predominant influenza type/subtype circulating each season, influenza vaccination policies and coverage, influenza vaccine strain match/mismatch and vaccine effectiveness significantly influence the % of hospitalised influenza cases and cfrs in all age groups, including older age groups. however, the high percentage of hospitalizations (45.2%) and mortality (17.5%) in the ≥ 65 years age group, especially in people aged > 80 years, where mortality is higher (33.3%), reflect the consequences of increased life expectancy. early administration of antiviral treatment has proven to diminish length of stay. healthcare providers should start antiviral treatment as soon as possible, before 48 h from onset of symptoms is the recommendation, [11] unfortunately this is not feasible. yet if treated as soon as patient is admitted to the hospital facility and influenza is confirmed, shorter length of stay and prompt recovery can be attained [12, 13] . this makes it necessary to deepen our knowledge of the effect of aging and its interaction with the most prevalent chronic diseases in the elderly and the immune response in order to implement preventive measures to provide better protection of this population group [11] . it is necessary to improve some surveillance aspects, especially with regard to data collection, in order to avoid a loss of information that makes some variables impossible to assess, such as risk factors such as smoking, which was not recorded in 91% of cases as well as lack of information on the vaccination status, which was more than 30% [14, 15] . a limitation to this study is that only shclci cases were recorded during the study period. this unables global hospitalization burden estimates caused by seasonal influenza nor the estimation of seasonal differences in vaccine effectiveness to prevent severity and death. 
the system identifies the epidemiological and virological characteristics of severe forms of influenza that show changes in their virulence, but comparison between severe and non-severe cases is not feasible. the proportion of shclci cases admitted to icu and cfrs are potentially higher than other surveillance systems that monitor all hospitalised cases of confirmed influenza. this is particularly evident with regard to pregnant women because of the small number of cases. yet, in all shclci surveillance provides an estimate of the severity of seasonal influenza epidemics, and provides ad hoc information to identify and characterize the groups at risk of complications and take appropriate preventive measures. abbreviations ards: acute respiratory distress; icu: intensive care unit; cfr: case fatality rate; ci: confidence interval; ecdc: european center for disease control; ili: influenza like illness; los: length of stay; or: odds ratio; pcr: polymerase chain reaction; pidirac: plan of information on acute respiratory infections in catalonia; sd: standard deviation; shclci: severe hospitalized cases of laboratoryconfirmed influenza. • fast, convenient online submission ready to submit your research ? choose bmc and benefit from: vaccine effectiveness in older individuals: what has been learned from the influenzavaccine experience influenza illness and hospitalizations averted by influenza vaccination in the united states predicting clinical severity based on substitutions near epitope a of influenza a/h3n2 public health agency of catalonia. department of health. pla d'informació de les infeccions respiratòries agudes a catalunya (pidirac) estratègia de vigilància dels casos greus de grip hospitalitzats vigilancia de casos graves hospitalizados confirmados de virus de la gripe the global influenza hospital surveillance network (gihsn): a new platform to describe the epidemiology of severe influenza global burden of respiratory infections due to seasonal influenza in young children: a systematic review and meta-analysis virological surveillance of influenza and other respiratory viruses during six consecutive seasons from severe influenza in 33 us hospitals, 2013-2014: complications and risk factors for death in 507 patients early administration of oral oseltamivir increases the benefits of influenza treatment effectiveness of antiviral treatment in preventing death in hospitalized cases of severe influenza over six influenza seasons increased antiviral treatment among hospitalized children and adults with laboratory-confirmed influenza estimating the burden of seasonal influenza in spain from surveillance of mild and severe influenza disease estimated influenza illnesses, medical visits, hospitalizations, and deaths averted by vaccination in the united states seasonal influenza (flu) cdc. centers for disease control and prevention, national center for immunization and respiratory diseases (ncird) the members of the pidirac working group for the surveillance of severe nt conceived and wrote the manuscript, am and mj reviewed the manuscript and cr, ib, nf, pg, ep, sm, ms, mc, mm1, mm2, aa and tp were involved in case management. all authors read and approved the manuscript. the authors declare that they have no competing interests. the raw data supporting this study are publicly available as additional file. not applicable. ethical approval was not necessary as the study uses routinely collected, anonymous surveillance data. 
the study was partially funded by agaur (agència de gestió d' ajuts universitaris i de recerca) grant 1403 and ciber epidemiologia y salud pública ciberesp and by fondo de investigación sanitaria pi 11/01864 and recercaixa 2010acuo_00437i of the catalan association of public universities (acup). springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. key: cord-034961-4lpjo9a5 authors: dos santos, bruno pereira; de gouveia, giovanna cristiano; eller, sarah; pego, ana miguel fonseca; sebben, viviane cristina; de oliveira, tiago franco title: is covid-19 the current world-wide pandemic having effects on the profile of psychoactive substance poisonings? date: 2020-11-08 journal: forensic toxicol doi: 10.1007/s11419-020-00558-3 sha: doc_id: 34961 cord_uid: 4lpjo9a5 nan recent times have shown an increasing number of intoxication cases, resulting in damaging public health issues all around the globe, and brazil is not an exception [1] . most of the cases observed denoted the use of psychoactive substances, such as recreational drugs and medication for psychiatric use [2] . currently, with the covid-19 pandemic, both strict lockdown and general social distance rules have been recommended. the application of such public health control measures resulted in personal behavior alterations, considering that, during quarantine, individuals find themselves deprived of any social interactions, which can ultimately modulate the profile of poisonings [3] . the present study seeks to verify possible temporal changes in the poisoning profiles of rio grande do sul state, southern brazil, due to the covid-19 outbreak. for this purpose, data was gathered from cases of intoxication by the five main classes of psychoactive substances: anticonvulsants, antidepressants, antipsychotics, benzodiazepines, and recreational drugs, arriving at the toxicological information center of rio grande do sul, from march to july 2019 and from march to july 2020 ( table 1 ). the data analyzed are comprised of a wide range of incidents, verified by the many variables obtained, such as age, gender, circumstance of exposure, and location of exposure. out of the 8588 cases attended by our team during the quarantine, a reduction of − 6% in comparison with the same period in the last year was observed; 3179 cases were included in the five classes of substances evaluated. as for the four classes of psychiatric drugs tested, their combined number decreased by − 8.4%, while illicit drugs and alcohol intoxications alone rose by + 41.4%. the reported numbers clearly indicated a change in poisoning profile as compared to 2019. within recreational drugs (alcohol, amphetamine, cocaine, ecstasy/mdma, inhalants, lsd, and marijuana), the highest incidence of cases were alcohol, from 94 to 137 cases (+ 45.7%) and cocaine, from 62 to 91 cases (+ 46.8%). a substantial increase was also found for other substances such as marijuana which rose from 17 to 27 cases (+ 58.8%) and ecstasy/mdma, from 5 to 9 cases (+ 80.0%). situations generated by the pandemic, such as stress, isolation, and financial insecurity can result in increased alcohol and drug consumption and misuse [4, 5] , thus explaining an expected rise in intoxication cases. unlike recreational drugs, a medication used for psychiatric treatment showed a decrease. 
the class of antidepressants presented an expressive reduction, of − 17.6%, followed by anticonvulsants (− 10.1%), benzodiazepines (− 2.1%), and antipsychotics (+ 1.0%), given that the last two maintained a certain stability in comparison with the remaining substances. within the class of antidepressants, the compounds with the highest incidence were fluoxetine, which decreased from 302 to 266 cases (− 11.9%); amitriptyline also showed a reduction from 255 to 204 cases (− 20.0%), and finally sertraline from 157 to 135 cases (− 14.0%) reported. as for anticonvulsants, these have shown a much lower absolute reduction, with carbamazepine, from 158 to 143 cases (− 9.5%), valproic acid, from 143 to 137 cases (− 4.2%), phenobarbital, from 47 to 43 cases (− 8.5%) and topiramate, from 46 to 34 cases (− 26.1%). as it can be seen, benzodiazepines showed much less decrease than antidepressants in absolute numbers during the interval studied of quarantine. clonazepam, the group's main representative, showed a decrease, from 749 to 686 cases (− 8.4%), while diazepam increased from 255 to 302 cases (+ 18.4%). the main variation observed with antipsychotics were chlorpromazine, with an increased from 167 to 171 cases (+ 2.4%) and risperidone, from 155 to 158 cases (+ 1.9%). within this scenario, the reduction in medication for psychiatric treatment was mainly guided by a decrease in suicide attempts reaching a decline of − 9.8%. as shown in table 1 , antidepressants presented a decrease of − 18.9% followed by anticonvulsants with − 14.2% and benzodiazepines with − 7.7% for suicide attempts. the analyzed data indicates a reduction on suicide attempt rates that can be correlated to a possible modulation within the intoxication profile of individuals due to social isolation, where people have been asked to stay confined within their homes with family and friends, which can be a protective factor associated with the occurrence of suicide attempts [6] . however, we observed an increase in the number of cases with recreational drugs as suicidal agents (2019, 101 cases; 2020, 138 cases). in addition, the cases of self-medication increased for all classes of medicines evaluated in this study. while quarantine can be effective in reducing the number of suicide attempts, the number of individual accidents has increased expressively (table 1) , mainly associated with recreational drugs, anticonvulsants, and benzodiazepines, especially accidents with children up to 5 years old. the age variable is one of the most relevant parameters, as it shows a clear tendency of who, in fact, is more vulnerable in these cases of intoxication. for instance, children aged 0-5 years showed a serious increase in cases of intoxication by recreational drugs, of + 275%, followed by more than + 30% in anticonvulsants and benzodiazepines. much likely, due to the implementation of the isolation measures, which involved the closing of schools, resulting in children staying home for a much longer period. according to the results obtained, the use of recreational drugs during quarantine has increased, as well as the number of accidents associated with children and with the same compounds. furthermore, individuals over 19 years old, showed a substantial increase in the consumption of recreational drugs, with + 48.0% and only + 3.0% in antipsychotics, as opposed to other classes which decreased, justifiable by a reduction in suicide attempts. considering the exposure site, cases at home had an expressive increase simply for recreational drugs. 
despite the possibility of an increase in home poisonings, this number has not changed significantly, as most cases already occurred at home. however, in both the site and circumstance variables, large numbers that stood out were the "unknown" results, increasing in most classes, which causes serious difficulties for the medical staff to deal with in cases of intoxication since these variables are of extreme importance for a rapid and effective medical intervention. another variable that has considerably changed was gender. within the recreational drugs group, for instance, cases involving women increased by + 80.3%, while cases with men rose with + 17.0%. nonetheless, other classes showed a reduction in the number of female attendances and, curiously, male attendances in benzodiazepines and anticonvulsants increased, even considering the general decline of the groups. in conclusion, we observed a change in the poisoning profile during quarantine, possibly due to the modification in habits and behavior of the population caused by social isolation. a significant increase in the incidence of cases by recreational drugs has been noted, in almost all aspects, while the cases involving medication for psychiatric use decreased, mainly because of the decline in suicide attempts, in a way as a possible protective effect. the statistics observed in 2020 contradicted our pre-pandemic period epidemiological data in 2019, indicating a strong influence of the quarantine. therefore, the results show how the environment and social activities can modulate human behavior and that some factors such as gender and age can influence the intoxication profile. these numbers are of paramount importance for toxicovigilance and can be used as a projection for other states and countries because measures to prevent poisoning cases during quarantine can be adapted to reduce the damage caused indirectly by the outbreak of covid-19. a fast and simple approach for the quantification of 40 illicit drugs, medicines, and pesticides in blood and urine samples by uhplc-ms/ms multivariate analysis applied in dataset of poison control center of são paulo control centre members, descatha a (2020) covid-19: home poisoning throughout the containment period [the corrected version first appears in lancet public health alcohol use and misuse during the covid-19 pandemic: a potential public health crisis? lancet public health 5:e259 the covid-19 pandemic and its impact on substance use: implications for prevention and treatment suicide mortality and coronavirus disease 2019-a perfect storm the authors would like to acknowledge the financial support from coordination of improvement of personal higher education -brazil [capes -finance code 001]. conflict of interest the authors declare that they have no conflict of interest.ethical approval this article does not contain any studies with human participants or animals performed by any of the authors. key: cord-201798-doi5w7tb authors: seto, christopher; khademi, aria; graif, corina; honavar, vasant g. title: commuting network spillovers and covid-19 deaths across us counties date: 2020-10-02 journal: nan doi: nan sha: doc_id: 201798 cord_uid: doi5w7tb this study explored how population mobility flows form commuting networks across us counties and influence the spread of covid-19. we utilized 3-level mixed effects negative binomial regression models to estimate the impact of network covid-19 exposure on county confirmed cases and deaths over time. 
we also conducted weighting-based analyses to estimate the causal effect of network exposure. results showed that commuting networks matter for covid-19 deaths and cases, net of spatial proximity, socioeconomic, and demographic factors. different local racial and ethnic concentrations are also associated with unequal outcomes. these findings suggest that commuting is an important causal mechanism in the spread of covid-19 and highlight the significance of interconnected of communities. the results suggest that local level mitigation and prevention efforts are more effective when complemented by similar efforts in the network of connected places. implications for research on inequality in health and flexible work arrangements are discussed. the coronavirus disease 2019 pandemic has dramatically impacted societies globally, with over 32 million confirmed cases and over 980,000 covid-19 deaths worldwide at the time of this writing 1 (hopkins, 2020) . consequently, a growing body of research seeks to understand the social and demographic predictors of this disease at the community level, identifying local etiological factors such as age structure , population density (sy et al., 2020) , and racial composition of the residents (millett et al., 2020) . in addition to local factors, equally important it is to understand the role of social contacts within and across communities, such as the extent to which the movement of people between communities facilitate the transmission of this infectious disease. one important type of such movement is commuting for work, a routine mobility activity that millions of people in the us engage in, typically on a daily basis (mckenzie, 2015) . many of the local and state level mitigation and prevention policies have involved some form of social distancing recommendations to "flatten the curve", in recognition that close physical proximity among people (in the regular course of their daily activities such as in the workplace, at church, or in school) can contribute significantly contributor to the spread of this disease. research on the transmission of this disease across space, between places such as work areas and residential areas is still in its infancy, yet important evidence is starting to emerge. for instance, (bai et al., 2020) analyzed inter-county commuting flows in the state of new york and found that "community spreader" counties were characterized by high commuting flows to and from other counties. these findings are consistent with prior research focused on the spread of other infectious diseases which finds that commuting is an important mechanism through which 1 as of 09.25.2020 diseases may be transmitted to new populations. for example, (xu et al., 2019) linked road traffic among chinese cities to the incidence of influenza a (h1n1) during the 2009 pandemic. understanding how exposures to coronavirus in an area's commuting network affects local cases and deaths is important in guiding thinking and policy in support of remote working schedule and other flexible work arrangements. because many types of jobs do not permit remote work, certain populations, often underpaid and socioeconomically vulnerable minority groups, are disproportionately affected both at work and at home by increased risk of exposure to this disease. moreover, these same groups are further disadvantaged disproportionately by school closures and the need to find alternative arrangements for the care of school age children and other dependents. 
we contribute to the extant literature on the social and spatial dynamics of covid-19 by analyzing population across united states (us) counties which we consider to be linked via a network of commuting ties. we assess the extent to which county rates of covid-19 deaths and cases are predicted by covid-19 cases in linked counties, controlling for relevant structural and sociodemographic characteristics and spatial contiguity. we leverage methodological strategies from computational statistics to assess model fit and estimate significance while accounting for spatial and network dependencies within the data. our findings demonstrate that commuting networks are an important determinant of the spread of covid-19, as measured by deaths and confirmed cases. we analyze a population of all us counties. data on total number of covid-19 confirmed cases and total number of covid-19 deaths are drawn from a database maintained by usa facts 2 , which is updated daily and contains counts by county and state. we utilize 3-level mixed effects negative binomial models, analyzing covid-19 cases and deaths of county-time periods (n=31,380), nested within counties (n=3,139), nested within states (n=51, includes dc). these models are implemented using the menbreg command in stata 16 (statacorp, 2019). negative binomial models are well suited to predicting overdispersed count outcomes (osgood, 2000) , making them well suited to this research application. we incorporate state-level random intercepts to account for cross-state variation in covid-19 outcomes which may have been driven by statelevel policy differences (e.g., different masking requirements and enforcement of business lockdowns) and county-level random intercepts to account for unmeasured variation across counties in covid-19 susceptibility and response. county-time periods, our first-level units of analysis, are based on the number of new covid-19 cases and deaths for a given county within a given two weeks. within each county are nested ten of these county-time periods, ranging from april 1st to august 18th, 2020. we use total county population (based on the 2018 american community survey 5-year population estimates) as an exposure term for all models, making the model coefficients interpretable as population rates. all models are estimated using huber-white robust standard errors. as a result of the network and spatial interdependencies which we hypothesize to exist among counties, conventional, analytic tests of statistical significance may fail to produce accurate confidence estimates (lesage, 2015) . instead, we utilize a permutation testing, a flexible, simulation-based approach (breiman, 2001; graif et al., 2019) . for each predictor, we conduct 100 permutations in which the values of the predictor are randomly permuted across all observations, breaking any association with covid-19 mortality rates. each permuted dataset is used to calculate model error, generating a distribution of what model error would look like if the predictor had no effect. the observed error is then compared to this distribution in order to assess the contribution which the predictor makes to model fit. a relatively low proportion of permuted cases which produced a lower error than that which was observed shows a significant contribution to model fit. in these permutation tests, we use mean arctangent absolute percentage error (maape) to capture average model error. 
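the permutation procedure just described can be sketched as follows. this is an illustration rather than the authors' code: the fit_and_score function, the dataframe layout and the predictor names are placeholders, and the error metric actually used (maape) is defined in the next paragraph.

```python
import numpy as np

def permutation_test(fit_and_score, X, y, predictor, n_perm=100, seed=0):
    """compare observed model error with errors obtained after randomly permuting
    one predictor, breaking its association with the outcome.
    fit_and_score(X, y) is assumed to refit the model on a pandas dataframe X
    and return an average error such as maape."""
    rng = np.random.default_rng(seed)
    observed = fit_and_score(X, y)
    permuted = []
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[predictor] = rng.permutation(Xp[predictor].to_numpy())
        permuted.append(fit_and_score(Xp, y))
    # share of permutations that beat the observed error; a small share
    # indicates the predictor contributes to model fit
    return observed, float(np.mean(np.array(permuted) < observed))
```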
maape is computed by averaging the arctangent of the ratio of error to observed value for each observation, as shown in equation 1. maape has the advantage of capturing error as a percent, making it less sensitive to outliers than mae, while also being robust to observations for which the true value of y is 0 (an advantage over mape) (kim & kim, 2016).

$$\mathrm{MAAPE}=\frac{1}{n}\sum_{i=1}^{n}\arctan\left(\left|\frac{y_i-\hat{y}_i}{y_i}\right|\right) \qquad (1)$$

data on intercounty commuting, used to construct a weighted average of network-lagged covid-19 exposure, were drawn from the lehd origin-destination employment statistics (lodes) dataset, which is publicly available from the u.s. census bureau (us census - lehd) (graif et al., 2017; kelling et al., 2020). this measure was created according to equation 2, where a given home county (h) is connected to w work counties, $c_{h-w}$ represents the number of commuters from county h who commute to county w, $c_{h-}$ represents the total number of outgoing commuters from county h, and $r_w$ denotes the confirmed case rate in work county w:

$$E^{net}_{h}=\sum_{w=1}^{W}\frac{c_{h-w}}{c_{h-}}\,r_{w} \qquad (2)$$

an additional measure of covid-19 exposure was also created based on spatial proximity, using the average rate of confirmed cases of all (queen) contiguous counties (e.g., equation 3 for a county which borders b counties):

$$E^{sp}_{h}=\frac{1}{B}\sum_{b=1}^{B}r_{b} \qquad (3)$$

we incorporate a temporal lag into the construction of these measures by using cases from the prior two-week time period. we also incorporate measures capturing network and spatial change in covid-19 cases from the prior to the current time period, which use identical weighting (equations 4 and 5). finally, we control for each county's own covid-19 case rate during the prior 2 weeks. we used the rubin-neyman causal inference potential outcomes framework (rubin, 2005) to estimate the causal effect of each of the county-level characteristics, including economic disadvantage, percentage of population over the age of 65, etc. (see table 1 for details), on the number of deaths by covid-19 in that county. to estimate the causal effects, we applied the well-established weighting procedure in causal inference by following a two-step mechanism: first, we weight each data sample so as to adjust for the effect of confounding and generate a weighted population that we can consider "as if randomized." second, we perform a weighted regression where we regress total number of deaths by covid-19 against county-level characteristics. we repeated this two-step procedure for each county-level characteristic separately, each time designating a characteristic as "treatment," and estimated the causal effect of that characteristic on total number of deaths by covid-19. we used the following state-of-the-art weighting methods for causal inference from observational data. each of the three methods that we applied uses a different methodology for computing the weights (in the weight model). (i) covariate balancing propensity score weighting (cbpsw): the propensity score is defined as the probability of receiving the treatment given the covariates and is used in estimating the causal effect of binary treatments on outcomes. propensity density is its counterpart for coping with continuous treatments. cbpsw was recently proposed; it estimates the weights based on the propensity score (for binary treatments) and propensity density (for continuous treatments) while maximizing covariate balance between the treated and control groups via an additional balancing constraint in the optimization (fong et al., 2018; ratkovic, 2014). (ii) inverse probability of treatment weighting: this method weights each data sample in proportion to the inverse of the propensity score (robins et al., 2000).
(iii) super learner: this method offers a doubly robust estimate of causal effects computed through an ensemble of propensity score estimators (pirracchio et al., 2015; van der laan et al., 2007) . the methods that we have used have been shown to be reliable, effective, and efficient in estimating causal effects from observational data in various applications , 2020 . the weighted outcome regression model determines the causal effect of each county level characteristic on deaths by covid-19 through statistical hypothesis testing. we tested for the null hypothesis that each such causal effect is zero. a statistically non-significant p-value would determine a non-significant causal effect. a statistically significant p-value shows a significant causal effect and the degree (and sign) of the causal effect is determined by the magnitude (and sign) of the estimated coefficient for the treatment in the outcome regression model. we used 0.05 as the statistical significance level. research has indicated that communities that have lower socioeconomic status can have more preexisting health conditions, lower access to healthcare, lower access to high-speed internet that could enable remote work, and are less able to engage in social distancing during the covid-19 pandemic (chiou & tucker, 2020; weill et al., 2020) . for these reasons, several sociodemographic controls are included in the analyses. these measures were drawn from the 2014-2018 american community survey (acs) 5-year estimates. note that these measures are county-level attributes, i.e., considered to be invariant over time for the two-week time periods defining the level-one units. economic disadvantage was measured as the first principal component produced following an analysis of unemployment rate, median income, percent in poverty, percent female-headed households, percent college graduates, percent owner-occupied housing units, and percent vacant housing units (eigenvalue = 3.2). we also include the percent of residents of 65 years or older, as well as binary indicators of whether the county is above average regarding (1) percent non-hispanic white, (2) percent non-hispanic black, and (3) hispanic. finally, we include a measure of the percent of the county with urban residence, as measured in 2010. table 1 shows descriptive statistics for all measures described above. table 2 displays coefficient and standard error estimates from multilevel negative binomial models predicting total deaths and total confirmed cases using the full analytic sample. results from these models are consistent with prior literature and theoretical expectations. as shown, the commuting network-based measures are robust predictors of both total deaths and total cases. this is true for both the network measure based on confirmed cases at the prior time period and the network measure capturing change in cases from the prior time period. note that, when these network measures are taken into account, spatial contiguity is not a strong predictor of covid-19 spread across counties. estimated coefficients for other measures are also consistent with extant research and our expectations. economic disadvantage, concentration of racial/ethnic minority groups, and urban population are associated with higher rates of covid-19 cases and deaths, while a higher concentration of non-hispanic white population is associated with lower covid-19 cases and deaths (tai et al., 2020) . 
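to make the pipeline above concrete, the following python sketch illustrates three of the ingredients just described: the maape error metric, the commuting-weighted network exposure of equation 2, and the two-step weighting procedure in its simplest inverse-probability form. it is an illustration under stated assumptions, not the authors' implementation: the column names (home, work, commuters), the logistic propensity model, the restriction to a binary treatment, and the linear weighted outcome regression are all placeholders for the actual specification used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def maape(y_true, y_pred):
    """mean arctangent absolute percentage error (kim & kim, 2016)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.arctan(np.abs((y_true - y_pred) / y_true))))

def network_exposure(flows, case_rates):
    """commuting-weighted average of prior-period case rates in linked work counties
    (equation 2). flows: dataframe with columns [home, work, commuters];
    case_rates: series mapping county -> confirmed case rate in the prior 2 weeks."""
    f = flows.copy()
    f["w"] = f["commuters"] / f.groupby("home")["commuters"].transform("sum")
    return (f["w"] * f["work"].map(case_rates)).groupby(f["home"]).sum()

def iptw_effect(df, treatment, outcome, confounders):
    """two-step weighting sketch for a binary treatment:
    (1) logistic propensity model, (2) inverse-probability weights in a
    weighted outcome regression; the outcome model here is linear only for illustration."""
    X = sm.add_constant(df[confounders])
    ps = sm.Logit(df[treatment], X).fit(disp=0).predict(X)
    w = np.where(df[treatment] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    fit = sm.WLS(df[outcome], sm.add_constant(df[[treatment]]), weights=w).fit()
    return fit.params[treatment], fit.pvalues[treatment]
```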
population percent aged 65 years and older is negatively associated with cases, but positively associated with deaths (le couteur et al., 2020) . as expected, a county's case rate at the prior time point is a strong predictor of cases at the current time point. the permutation test results shown in table 3 provide further support for these findings. the p-values shown in table 3 we conducted several tests to assess how findings changed with alternative model specifications. in order to better separate network effects from possible unmeasured spatial confounders, we re-estimated the models using network measures based on (1) only contiguous counties and (2) only non-contiguous counties. tables 4 and 5 show estimates from these models (respectively). as shown, the network effects persist in both cases, further supporting our finding that commuting networks matter for the spread of covid-19 beyond spatial proximity. notes: exposure = county race-specific population 2014-2018, acs 5-year estimates; nhb = non-hispanic black; ***p < .001; ** p < .01; * p < .05; † p < 0.10 to aid our causal inference, we also conducted several analyses using different weighting strategies on a cross-sectional version of our data in which outcomes are cumulative counts of a county's cases or deaths, and network and spatially lagged measures are based on these cumulative counts. results from these models are shown in table 6 . as shown, these alternative model specifications produced substantively similar results with regard to the commuting network effects, offering further support for our conclusions. results of all of the causal effect estimators consistently show that the percentage of population over the age of 65 and economic table 5 . negative binomial models (with state and county random intercepts) predicting covid-19 outcomes across 10 time periods based on network, spatial, and time lagged cases. network based on only non-contiguous counties. total confirmed cases beta se beta se network lagged confirmed case rate (tn-1) .0020 *** (.001) .0020 *** (.001) δ network lagged confirmed case rate (tn-1 -tn) .0004 (.000) .0029 *** (.000) spatially lagged confirmed case rate (tn-1) .0013 *** (.000) .0019 *** (.000) δ spatially lagged confirmed case rate (tn-1 -tn) .0006 *** (.000) .0022 *** (.000) confirmed case rate (tn-1) .0019 *** (.000 given the growing research suggesting that vulnerable populations are less able to work remotely and engage in physical distancing during this pandemic, our results also indicate the acute need for work level protections, such as providing paid sick days, increasing minimal wages, providing health safety equipment to essential workers, to assisting with childcare for working parents who have to work while the schools are closed or in remote mode. these necessary provisions will not only help save the lives and health of workers who cannot afford to socially distance themselves from their work environments, but they have the great potential to spillover and improve the fates of whole communities that their workers go back home to. as expected, an area's socioeconomic disadvantage contributed to both higher death rates and cases relative to the local population. the area's concentration of whites was associated with a protective effect against both infection cases and covid-19 deaths. 
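as a reading aid for the coefficient tables above, negative binomial coefficients are on the log-rate scale, so exponentiating a coefficient times a chosen increment gives an incidence rate ratio. a minimal worked example, where the 0.0020 value is the network-lagged case rate coefficient reported in table 5 and the 100-unit increment is an arbitrary illustration:

```python
import math

beta = 0.0020   # coefficient for the network-lagged confirmed case rate (table 5)
delta = 100     # hypothetical increase in the network-lagged case rate
irr = math.exp(beta * delta)
print(f"rate ratio for a {delta}-unit increase: {irr:.2f}")  # about 1.22
```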
the concentration of minorities, both above average share of hispanics and non-hispanic blacks was associated with higher rates of confirmed cases, consistent with a large body of work that has documented the many challenges associated with covid-19 risk that burden minority communities, including the higher likelihood to be in frontline occupations and in other low paid occupations that have little flexibility and cannot be easily be transitioned to remote work format. 3 these data come with several limitations. the analytical focus was on counties in part due to restrictions regarding the covid-19 data availability across the country. to the extent that the data access and granularity expands in future months, analyses at more local levels will be very valuable. still, analyses on other important transmittable diseases like influenza have examined place-to-place transmission patterns for geographic units as large as states and counties (bozick & real, 2015) with important lessons that have inspired further research. the network measures used in this study were limited by the data access constraints to information updated on an annual basis, and thus they do not capture the fast-occurring changes during this ongoing pandemic. while these measures captured the commuting network prior the pandemic, those links have likely been weakened by layoffs or remote work transitions. still, the information on the covid-19 rates within the commuting network was captured as it changed over time. given that the pandemic likely contributed to weakening rather than strengthening preexisting commuting links across places, the fact that nevertheless we still see strong effects suggests to us that adjustments in the future to these data to reflect the rapid changes in employment status will likely reveal even stronger effects of commuting exposures to covid-19. many businesses across the country have restricted their employment during the covid-19 pandemic, some have even closed temporarily or permanently, while others allowed employees to work remotely for the purpose of "social distancing" and in the hope of "flattening the curve" (bartik et al., 2020) . understanding how these mobility changes and restrictions contribute to containing the covid-19 transmission is an important next step for future research. moreover, it is known that some population groups are more likely to be in occupations (e.g., health care providers, grocery workers, bus drivers, meatpacking workers) that have been on the frontlines in the fight against covid-19, unable to comply with social distancing recommendations and policies. understanding how workplace networks and risk transmission differentially affect disadvantaged and minority populations is of great importance in future research. importantly, also understanding the types of workplace connections and other social network-based distancing strategies that can work best to contain the pandemic risk without further isolating the most vulnerable populations and communities is essential. 
mapping the intercounty transmission risk of covid-19 in the impact of covid-19 on small business outcomes and expectations social network-based distancing strategies to flatten the covid-19 curve in a post-lockdown world the role of human transportation networks in mediating the genetic structure of seasonal influenza in the united states random forests social distancing, internet access and inequality demographic science aids in understanding the spread and fatality rates of covid-19 covariate balancing propensity score for a continuous treatment: application to the efficacy of political advertisements network spillovers and neighborhood crime: a computational statistics analysis of employment-based networks of neighborhoods neighborhood isolation in chicago: violent crime effects on structural isolation and homophily in inter-neighborhood commuting networks coronavirus resource center covariate balancing propensity score modeling the social and spatial proximity of crime: domestic and sexual violence across neighborhoods fairness in algorithmic decision making: an excursion through the lens of causality algorithmic bias in recidivism prediction: a causal perspective a causal lens for peeking into black box predictive models: predictive model interpretation via causal attribution a new metric of absolute percentage error for intermittent demand forecasts covid-19 through the lens of gerontology spatial econometrics. in handbook of research methods and applications in economic geography who drives to work?: commuting by automobile in the united states assessing differential impacts of covid-19 on black communities poisson-based regression analysis of aggregate crime rates improving propensity score estimators' robustness to model misspecification using super learner marginal structural models and causal inference in epidemiology causal inference using potential outcomes: design, modeling, decisions stata statistical software: release 16 population density and basic reproductive number of covid-19 across united states counties the disproportionate impact of covid-19 on racial and ethnic minorities in the united states super learner social distancing responses to covid-19 emergency declarations strongly differentiated by income impacts of road traffic network and socioeconomic factors on the diffusion of 2009 pandemic influenza a (h1n1) in mainland china key: cord-206391-1dj285h8 authors: yan, donghui; xu, ying; wang, pei title: estimating the number of infected cases in covid-19 pandemic date: 2020-05-24 journal: nan doi: nan sha: doc_id: 206391 cord_uid: 1dj285h8 the covid-19 pandemic has caused major disturbance to human life. an important reason behind the widespread social anxiety is the huge uncertainty about the pandemic. one major uncertainty is how many or what percentage of people have been infected? there are published and frequently updated data on various statistics of the pandemic, at local, country or global level. however, due to various reasons, many cases were not included in those reported numbers. we propose a structured approach for the estimation of the number of unreported cases, where we distinguish cases that arrive late in the reported numbers and those who had mild or no symptoms and thus were not captured by any medical system at all. we use post-report data for the estimation of the former and population matching to the latter. 
we estimate that the reported number of infected cases in the us should be corrected by multiplying a factor of 220.54% as of apr 20, 2020. the infection ratio out of the us population is estimated to be 0.53%, implying a case mortality rate at 2.85% which is close to the 3.4% suggested by the who. with the quick spread at the global scale, the covid-19 pandemic has become one of the most tragic disasters in human history, with a worldwide confirmed cases of 2.74 million and death toll at 192k as of april 24, 2020. rising trend of these numbers still remains in multiple countries right now. the most risky aspects about the coronavirus are the long incubation period and the existence of a large number of asymptomatic cases. these cause a substantial proportion of infected cases not tracked by medical systems. for better policy making and disease control, as well as to reduce the widespread speculations among the public about the extent of the disease spread, it is of significant interest to give an estimate on the missing counts. specifically, when the pandemic gradually becomes under control, the world is considering the resume of normal business. this requires a prudent assessment of the potential risk. inevitably, such an assessment would involve the estimation of the number of asymptomatic cases when such cases are still active. however, the task of estimating the number of those undocumented cases is very challenging, exactly because of the long incubation period and those asymptomatic cases. in this work, we will present a structured approach for such an estimation task and give an approximate estimate at the us national and state level. the remainder of this paper is organized as follows. in section 2, we will describe our approach. this is followed by a presentation of results in section 3. finally we conclude in section 4. statistically, the estimation of the number of unreported cases is related to the problem of inference with missing data [5] or censored data [2] . however, certain characteristics of the coronavirus epidemiology allow us to take a different approach. we adopt a structured approach, inspired by the diagnostic analysis of remote sensing studies [8] where the errors in the land use classification were decomposed according to their sources. our approach is illustrated by figure 1 . the missing counts in the reported numbers come from two sources. one is those cases for which, at the report date, the symptoms were not severe enough and the affected individuals would not test for infection; however, they would eventually visit some medical facility and test for potential coronavirus infection. we call such cases the type i cases, and the waiting period before the onset is termed as the incubation (or dormant) period. this is illustrated as the filled blue bars in figure 1 . at the time of report, all such cases are still in dormant status thus are missing in the reported number. the second source of unreported cases are those who were infected but are either not aware of it or with symptoms too light to visit the medical facility, and later on recovered without any particular medical treatments. we call such cases the type ii cases. the type ii cases never show up in any reported numbers, thus leaving too little clue for estimation. but we cannot overlook such cases, be-cause the number of such cases could be potentially large and such individuals form an important infecting source. 
for the rest of this section, we will describe our method for the estimation of each of the two cases. the estimation of the number of type i cases is facilitated by a crucial observation. though not included in the reported number while in the dormant period, such cases would eventually be reported when the symptoms become so severe that the individuals have to seek medical treatments. by that time, those previously missed cases at the original report date (which was a few days ago) would be counted towards infected cases at some later report dates (though one would not know at which particular report date). such numbers should be included at the original report date but surface only several days later; for this reason we call them delayed counts. if there is a way to estimate such delayed counts or their total, then one can estimate the number of type i cases for the original report date. it will be instructive to consider a simple ideal case where all infected cases have a dormant period of 7 days. in this ideal case, the numbers d 1 , d 2 , ..., d 6 in figure 1 are exactly the number of cases who were at their 6 th , 5 th , ..., 1 st day of infection, respectively, when counted at the original report date. if we assume that the incubation period is 7 days, then these are all the number of type i cases missed at the original report date but reflected perfectly later in the number of newly reported cases during the 6 days time window following the original report date. so the total number of type i cases at the original report date can be calculated by their sum, the reality is, however, complicated. first, the length of the dormant period varies for individual cases. also, during the post-report time window, newly infected cases may arise and be reported. thus the number of newly reported cases at any particular day within this time window might be mixed, in the sense that it would include both cases that are infected both before (but were in dormant period) and after the report date. the former case will not pose a problem as anyway such cases would be counted towardsd type1 though cases infected at the same day may now contribute to different d i 's. the latter case is undesirable but could be corrected, to a certain extent, by the truncating effect when we only sum up the counts in the post-report time window up to a length of t days. that is, those cases with a dormant period extending more than t days post-report will be truncated and not included ind type1 , with the total count of such truncated cases being 'cancelled out' by the newly infected cases within the post-report time window of a properly chosen length t . this leads to an estimate for the number of type i cases aŝ where d i are now the number of cases reported at the i th day after the original report date. we can let t take a value around or slightly larger than the mean of the incubation periods. in the appendix, we give a justification on why our estimate,d type1 , would be a reasonable one. if we can keep track of the delayed estimated type1 through time, then we can get a time series which, upon smoothing, could be used to estimate the current missing type i counts. for such an estimation to be feasible, we have two requirements. one is that the daily reported counts through time would not change too abruptly. thus, our approach would not work well when the infection trend still rises very rapidly. during such a period, the safest strategy might be to strictly enforce social distancing. 
but as the overall situation is gradually under control, our estimation would apply. the other is knowledge of the duration of the incubation period. according to many studies [3, 1, 4] , the incubation periods has a median of around 4-7 days. while further studies or data analysis is required to confirm this, we take t = 7 in our estimation. additionally, it should be cautious that our estimation is valid assuming that the test of coronavirus is sufficiently carried out for the population of interest; insufficient test would render an underestimate of the number of type i cases. in figure 2 , we plot the ratio of estimated type i cases w.r.t. the reported number of cases for connecticut (ct) and massachusetts (ma) since mar 8, 2020. these two states were chosen as they are similar in many aspects, so we expect their ratio of type i cases out of reported cases would be similar. in figure 2 , there is an initial difference in ratios of type i cases in these two states, which we attribute to the late response and the small number of cases tested in ct. later, these two states exhibit strikingly similar trend, which is quite xpected. we also explore the effect of using different values of t where 6 and 7 are used. again, initially the resulting estimation are fairly different, which indicates the rapid spread of coronavirus and the rapid rise of infected cases. gradually, the difference in the resulting estimations diminish, which implies that the choice of t = 7 leads to a fairly stable estimation at late stages of disease spread. similar observation can be made for the estimation of type i cases in us. this is shown in the right panel of figure 2 . the estimation of the number of type ii cases is extremely challenging, as there is barely anything observable. our main strategy in the estimation is based on the matching of population statistics-using what we see well to infer what is missing. when we group reported infection cases, we notice a significant discrepancy in the count statistics by age groups between reported cases and the us population. we expect that, while people in most age groups in the population have a similar chance of being infected, those type ii cases occur much more often in age groups 20-64 but rarely for people of age 65+. this is because people at age over 65 typically have a relatively weaker immune system along with some pre-existing medical conditions. once they are infected with coronavirus, a slight symptom would prompt them to seek medical treatments. as a result, such cases are very likely to be discovered. thus reported counts about such age groups would be more accurate and can serve as a reference to correct counts for other age groups. on the contrary, cases for the 20-64 age group are easy to be overlooked or not noticed, unless their status is deteriorated. the reported counts for these age groups thus require a correction (termed as age correction). the age group of 85+ is more vulnerable to infections, as they typically live in the senior centers or extended-care nursing facilities which, as a matter of fact, have a very high risk of infection. the case statistics for this age group would be very thorough, but many in this age group get infected simply because they share a very confined living space with many other equally vulnerable seniors, and the infection of any one (including staff) in a senior center will quickly spread to the rest (to certain extent, one may think of this as a party of many people during covid-19). 
so statistics in this age group would not be a reliable reference for population match, since people in other age groups have a very different mobility pattern (the infants interact with the world through their parents thus have a chance of infection not so different from the general population). the main assumption we use for population match is that all the people, with an age in the range 0-84, have similar chance of being infected. as a result, the counts at different age groups would be proportional to their respective percentage in the population; we call such an approach population matching. let r pop and r case be the proportions of the reference group in the population and in the reported cases, and x pop and x corrected be the the respective proportions for the target group, respectively. then r case : r pop = x corrected : x pop , and the corrected percentage in the infected cases for the target group can be calculated accordingly. as we argue before, the case statistics for age groups 65-74 and 75-84 are reliable, but those for ages 0-64 are incomplete and consist of substantial missing data, and we will use the reliable portion of the data to infer or correct statistics about the incomplete part of the data. a simple calculation reveals that age groups, 65-74 and 75-84, according to table 1, have a similar ratio of cases percentage: population percentage, i.e., 9.00 : 4.70 ≈ 17.00 : 9.32. thus, we can pool counts from these two groups and obtain r case : r pop = (9.00 + 17.00)/(4.70 + 9.32) = 1.8544. this yields the corrected ratio as the bottom row of table 1 . adding up numbers in the bottom row gives a total of 187.94%, implying that we should expand the reported counts by 87.94% in order for the reported case counts to match the population statistics across age groups. this gives the ratio of type ii cases over the reported cases. an interesting question is, will estimated counts of type ii overlap with that for type i cases? we claim that this will not, at least not significantly, so the addition of estimated counts for type i and ii cases is valid. the reason is that, type i cases still contribute to the reported numbers, at a delayed time though. these delayed cases can be thought of as a sample from the reported cases (assuming that the reported cases have a stable proportion when breakdown by age groups). the inclusion of type i cases will not change the age-breakdown proportions. thus, after the inclusion of type i cases, we still have the same age-breakdown proportions and thus require an age correction. we apply our approach to each of the 50 states and the us. the data are available from wikipedia [7] . due to the large variation of the population at different states, we calculate the ratio of missing cases out of the number of reported cases for individual states. the ratio for type i cases is shown in figure 3 . due to the lack of reported case data for individual states by age groups, we use the overall estimate, which is 87.94% according to discussions in section 2.2, for the ratio of type ii cases for all the states. the overall ratio for type i cases for the us is estimated to be 32.60%. combining with the 87.94% ratio for type ii cases, this gives an estimated ratio of missing cases versus the reported number at 120.54%. in other words, the reported number should multiply by a factor of 220.54% to reflect the true number of infected cases. 
with the unreported numbers estimated, we can estimate the infection ratio, defined as the ratio of the number of infected cases out of the population. the overall infection ratio of the us is estimated to be 0.53%, or 1.75 million, as of apr 20, 2020. if we use the associated death toll at about 50k, then the case mortality rate is calculated as 2.85%, which is close to the who suggested estimation of 3.4% [6] . the infection ratio for individual states are visualized as heatmap in figure 4 . heavily hit states are ny, nj, ct, ri, ma, and la with infection ratio estimated at 2.61%, 2.11%, 1.22%, 1.15%, 1.31% and 1.04%, respectively, as of apr 20, 2020. the trend of infection ratio and cases by time for these states is shown in figure 5 . it can be seen that, except la, the infection ratio for all other five states are still rapidly increasing. nj shows a similar growing pattern as ny, while the three new england states, ct, ri and ma, are similar. we have proposed a structured approach for the estimation the number of infected cases not included in the reported number at a given date. we distinguish two types of 'missing' cases, those cases which were infected but are still during the dormant period and those asymptomatic cases which later self-recover without medical treatments. the number of these two types of cases are estimated by accumulating reported counts within a properly chosen post-report time window and by population matching. the reported number, as of apr 20, 2020, of infected cases in us should be corrected by multiplying a factor of 220.54%. the overall infection ratio of the us is estimated to be 0.53%, with a case mortality rate of 2.85% which is close to the recommended 3.4% by who. our estimation can potentially be used for risk assessment. the infant age group may worth further consideration as people in this group are much less risky than other age groups as they interact with the rest of the world through their parents, so the number of cases for this group may need to adjust accordingly to reflect the true risk. denote the number of cases that were infected one day, two days and so on before the report date (for which we use n 0 ). here we limit to type i cases as we can conveniently assume that type ii cases have an infinite incubation period. then the expected number of cases that are discovered during the time window of t days following the original report date is calculated as for simplicity, we would assume that all the en −i 's take a constant value n . we feel that this should not be too unrealistic as we would expect that the number of newly infected cases per day do not vary too much when the pandemic reaches a stable stage (those at the very far distant past would be small, but they carry a very small fraction of the total number so could be ignored). also, we abuse the notation a bit by using d . 's to also indicate the expected value of the associated random variable; the exact meaning will be determined by the context. then equation (1) can be rewritten as under the same assumption, the number of new cases generated during the post-report time window of length t days is (t − i) · p (i − 1 ≤ x < i) + n · p (x < t ) thus the total number of reported cases during the t days post-report time window is calculated as d a + d new = (t − 1) · n + n · p (x ≤ t ) = t n − n · p (x > t ). 
assuming that the random variable x has a finite mean, we have $p(x>T) \le \mathbb{E}x/T = \mu/T$, implying that the estimated number of type i cases satisfies
$$\hat{d}_{type1} = TN - N\,p(x>T) \ge TN - \mu N/T.$$
the actual number of cases that have accumulated but not been discovered before the report date consists of missing cases during the previous t days and those from even earlier, which has an expected value
$$d_{type1} = \mu N. \qquad (2)$$
equation (2) indicates that the mean number of type i cases equals the product of the mean daily infected cases of type i and the mean length of the incubation period, which is consistent with the ideal case discussed in section 2.1. let $T=(1+\epsilon)\mu$; then we have the following error bound for the estimated number of cases of type i:
$$|\hat{d}_{type1}-d_{type1}| \le \mu N \cdot \max\left(\epsilon,\ |\epsilon - 1/T|\right).$$
it follows that the relative error of the estimate satisfies
$$\frac{|\hat{d}_{type1}-d_{type1}|}{d_{type1}} \le \max\left(\epsilon,\ |\epsilon - 1/T|\right) = \max\left(\epsilon,\ \left|\epsilon - \frac{1}{(1+\epsilon)\mu}\right|\right).$$
for a given $\mu$, one can pick $\epsilon$ to optimize the above bound. for example, when $\mu = 7$, one can take $\epsilon = 0.07$ to achieve a relative error bound of about 7%.
incubation period of 2019 novel coronavirus (2019-ncov) infections among travellers from wuhan, china
nonparametric estimation from incomplete observations
the incubation period of coronavirus disease 2019 (covid-19) from publicly reported confirmed cases: estimation and application
incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
statistical analysis with missing data
world health organization. coronavirus (covid-19) mortality rate
covid-19 pandemic in the united states
a structured approach to the analysis of remote sensing images
in this appendix, we will give a justification of our estimation algorithm. we show that the error between our estimate, $\hat{d}_{type1}$, of the number of type i cases and its actual value $d_{type1}$ is small in expectation under reasonable assumptions about the distribution of the incubation periods. denote by random variable x the length of the incubation period, and for simplicity we further assume that $x \ge 0$ takes integer values. let $n_{-1}, n_{-2}, \ldots$
key: cord-006882-t9w1cdr4 title: royal academy of medicine in ireland date: 2012-07-22 journal: ir j med sci doi: 10.1007/s11845-012-0833-6 sha: doc_id: 6882 cord_uid: t9w1cdr4 nan
it is apparent that the tuberculin-skin-test is often administered and/or interpreted incorrectly. we recommend formal teaching for junior doctors in this area, coupled with improved availability of the information leaflet. mucinous tubular and spindle cell carcinoma (mtscc) is an extremely rare type of kidney tumour that has only recently been described, with fewer than eighty cases in the literature. it was only recognized as a specific entity in the world health organization 2004 classification of renal cell carcinoma (rcc). mtsccs are polymorphic renal neoplasms characterized by small, elongated tubules lined by cuboidal cells with cords of spindled cells separated by pale mucinous stroma. we report the case of a 57-year-old lady who had an incidental finding of a mass in her right kidney. the radiological features were consistent with rcc and, following a multidisciplinary team discussion, she underwent a laparoscopic radical nephrectomy. macroscopic examination revealed a well circumscribed 6.5 × 6 × 6.5 right lower pole mass. histologically it was composed of elongated tubules, small tubules and papillary structures with a necrotic centre. the cells demonstrated cuboidal and spindle cell morphology. histological grade was fuhrman grade 2. subsequent ct of the thorax, abdomen and pelvis staged the tumour as pt1b. the majority of mtsccs are indolent, and there is only one report of a distant metastasis, which responded favourably to adjuvant sunitinib. to date there is no international consensus on long term surveillance of these patients. because of the favourable prognosis with this type of tumour, mtscc must be differentiated from papillary renal cell carcinoma to avoid administration of excessive adjuvant treatment to patients. this is the first recorded case of this recently classified, rare tumour in ireland. this incidental finding of a solid pseudopapillary neoplasm (spn) was made when a 59-year-old female underwent a chest x-ray to investigate a wheeze. a subsequent ct abdomen revealed a 10 cm well circumscribed mass adjacent to the tail of the pancreas. this neoplasm had reached a significant size of 10 cm, appreciable on radiological imaging, and yet was asymptomatic and not palpable on physical examination. laparotomy revealed a highly haemorrhagic and calcified mass emanating from the pancreas. this was adherent to the omentum, distal pancreas and splenic vessels. distal pancreatectomy was performed with en bloc resection of the mass. repeated ct scans at 3, 6 and 12 months failed to demonstrate recurrence. solid pseudopapillary neoplasms are rare entities accounting for between 0.13 and 2.7 percent of pancreatic tumours. this neoplasm has a predilection for females under the age of 35. these tumours are indolent and usually reach a large size before detection. diagnosis is confirmed on histology and complete surgical excision of localised tumours is curative. we aimed to assess the prevalence of smoking among patients with vascular disease and the role of the health care profession in encouraging smoking cessation. 100 patients who attended the vascular outpatient department were surveyed over a 2 month period in 2011. patients gave verbal consent to partake in the audit and the surveyor entered the responses into a standardised questionnaire response sheet. 29 % of patients were current smokers, 39 % ex-smokers and 32 % had no history of smoking. 38 % smoked over 30 cigarettes per day and 66 % had a smoking history spanning over 30 years.
just 58 % of smokers who are under the care of the vascular service have been advised to give up smoking in the past by a healthcare professional. smoking has long been established as a major modifiable risk factor for the development of atherosclerosis; however, 29 % of patients attending the vascular service continue to smoke. just half of the patients who were offered smoking cessation advice found it effective. therefore a system needs to be put in place whereby all vascular patients are advised of the benefits of smoking cessation, and the manner in which this information is dispensed needs to be revised. to investigate the optimum location for the teaching of procedural skills to medical students english n, o'flynn s introduction: procedural skill training is a vital component of medical education. traditionally it has been teaching hospital based; however, general practice rotations may provide greater opportunities than previously thought. aims: this study aimed to ascertain whether a general practice setting or a teaching hospital setting provided a better environment for acquiring procedural skills in terms of opportunity to practise and the variety of skills performed. the correlation between end of year osce results and the amount of procedural skill exposure was also examined. methods: a cross-sectional quantitative study which included all 107 3rd year medical students at ucc was conducted. a log book listing 28 procedural skills was made available to all students before beginning both general practice and teaching hospital rotations. students were instructed to indicate on the log when they performed any of these skills and in which location. logs were returned to the medical school. data were obtained and analysis performed using spss 17. results: a response rate of 80 % was achieved. 92.9 % of students performed more skills in the gp setting. 40.5 % (n = 34) did not perform any skills while in a teaching hospital. 17 of the skills were performed significantly more frequently in a gp setting, while 5 were performed more frequently in a teaching hospital. students who performed a high number of skills in one location were no more likely to perform a high number in the other. conclusions: students were able to take greater advantage of procedural skills opportunities in a gp setting. as this was the students' first clinical year, it is likely that the one-to-one teaching scenario provided them with a more suitable location to practise skills for the first time. this study also highlighted the diverse nature of procedural skills which a general practice setting can provide. accuracy of sentinel node biopsy in determining the requirement for second axillary surgeries in t1-t2 breast cancer with retrospective application of z0011 criteria background: lymph node status is the most important prognostic marker in breast cancer management. in tandem with breast-conserving surgery, surgical approaches to the axilla have also become less invasive, thus decreasing the morbidity associated with axillary clearance. the acosog z0011 trial reported no difference in survival in patients undergoing sentinel lymph node biopsy (slnb) alone versus axillary lymph node dissection (alnd) in t1-t2 tumours. our aims were to establish whether sentinel lymph node biopsy was truly representative of axillary burden. we also analysed whether retrospective application of criteria from the z0011 trial would have prevented patients undergoing second axillary surgery.
methods: all patients with t1-t2 tumours undergoing sentinel node biopsy were included in our study (n = 1019). analysis of our prospectively updated breast cancer database was performed. minitab version 16.0 was used to carry out statistical analysis of the data. results: 1019 slnb procedures for t1 & t2 tumours were performed over a 7 year period. 730 patients were reported as histologically negative and 289 were positive. of the lymph node positive group, 223 patients progressed to axillary clearance. staging of 149 patients remained unchanged, with only 74 patients having >2 axillary lymph nodes reported as positive. 72 patients from the slnb negative group also had an axillary clearance. 5 of these patients had further axillary disease, with 1 patient being upstaged having >2 axillary lymph nodes positive. with retrospective application of z0011 criteria, 66 % of patients would have avoided second axillary surgery. conclusions: sentinel node biopsy is a strong indicator of axillary tumour burden. this study highlights the accuracy of sentinel lymph node biopsy in staging disease and representing overall tumour burden. flaherty ra, kelly bd, coyle d, quinlan mr, d'arcy ft, rogers e, jaffry sq we report the first case of a spontaneous right nephrocutaneous fistula (ncf) with an accompanying fistula limb communicating with the right ureter. a 65-year-old man presented with a groin mass, which was initially diagnosed as a hernia. he was scheduled for an inguinal hernia repair. upon incision there was extravasation of urine from the wound and the procedure was abandoned. a ct urogram identified an ncf running from the right lower pole calyx, anterior to the psoas muscle and emerging on the right groin skin, with an accompanying fistula limb communicating with the right ureter. during the course of investigation it was discovered that the patient was suffering from chronic indolent calculus pyelonephritis, which led to the formation of both aberrant pathways from the kidney and the ureter, and that both had calculi located at their origins. the patient was first treated with a nephrostomy and ureteric stenting to relieve urinary obstruction and, after this failed to resolve the fistula, was successfully treated with percutaneous nephrolithotomy for removal of the calculi and fibrin glue injection into the fistula. this case is one of only a few reported cases of spontaneous nephrocutaneous fistula, and the anatomy of the fistulous tract in this case is very unusual and posed a particular challenge for surgical management. this case report further advocates the use of fibrin glue in the management of complicated ncf. this is a retrospective case study. there were six cases of ocular tuberculosis over a 6 year period, one annually, four of whom were women, with ages ranging from 17 to 46 years. two were foreign-born. all patients presented with reduced visual acuity. four developed posterior uveitis, one anterior uveitis and one panuveitis. this was also complicated by vitritis, retinal detachment and retinal vasculitis in four. the median duration of symptoms until commencement of treatment was 3 months. all cases had a positive mantoux and one case had evidence of pulmonary tuberculosis on chest x-ray. tuberculosis was isolated in two cases. the intended duration of anti-tuberculous therapy for all patients was 9 months. vision improved in all cases. ocular tuberculosis is rare in developed countries, with prevalence ranging from <1 to 7 %.
however, it is important to be considered in all cases of uveitis. despite the use of pcr, most cases are presumptive. this leads to delayed commencement of therapy causing further complications. a high index of suspicion is required. we describe the case of a 45-year-old gentleman who presented to our emergency department (ed) with a very unusual complication of central venous catheterisation. this resulted in spontaneous extrusion of a retained intravenous guide wire from the base of the occiput. this has been described only once previously in the literature, but not at such a delayed time interval from insertion [1] . this 45-year-old gentleman presented to the ed reporting that he felt the point of a sharp object irritate his finger in his midline occipital area. he had successfully retrieved approximately 3 cm of a thin metal wire. he had a history of rheumatic fever and had undergone an elective aortic valve replacement 5 years previously, necessitating central venous cannulation. he had remained asymptomatic up to this time. plain radiography of his neck revealed a short segment of wire in the posterior spinal musculature. this segment of wire (approximately 25 cm) was removed manually with minimal force and minor manipulation. the procedure was uncomplicated and the patient was discharged shortly afterwards. retained foreign bodies may migrate slowly over many years eventually extruding from the body, without any serious complications. events such as retained or lost guide-wires are rare. this phenomenon may become more frequent with increasing complexity of medical care and with increasing use of cv catheters in the treatment of sepsis and other emergent critical conditions. physicians should be aware of the possibility of retained foreign bodies and should be somewhat re-assured by reports of simple uncomplicated removal. we present the case of a 79-year-old gentleman who was recently admitted with symptomatic right heart failure and new onset atrial fibrillation. our patient had been treated in the community for symptoms suggestive of ccf but had not previously been investigated. of note, he has no history of a chronic inflammatory condition and no symptoms suggestive of an underlying neoplastic process. on presentation he was also noted to have evidence of an arthropathy affecting his knees and ankles and bipedal oedema. renal function was abnormal with a urea of 17.9 and a creatinine of 116. urinalysis was positive for protein and 24 h urine collection for protein is ongoing. liver enzymes were also elevated and revealed a cholestatic picture. echocardiogram showed a reduced ejection fraction of 30 % and findings consistent with amyloidosis. biopsy of abdominal fat pad at time of writing is pending. amyloidosis refers to an uncommon group of disorders characterised by extracellular tissue deposition of a variety of proteins in an abnormal fibrillar pattern which are resistant to degradation. it can occur alone (primary) or can complicate many chronic inflammatory conditions (secondary). the major sites for clinically reported amyloid deposition are the kidneys, heart and liver. clinically patients present more often with right heart failure; pulmonary oedema is rare. amyloid infiltration results in increased echogenicity on echocardiogram and gives a ''sparkling'' appearance to the myocardium. biopsy is diagnostic. this was achieved using a retrospective review of all children receiving gh therapy (n = 53) over a 5-year period (october 2006-october 2011). 
33 of 53 patients on gh therapy had ghd. of these, 18 had ighd (15 male) and 15 had cphd (8 male). all had appropriate work-up and follow-up. age at presentation to endocrinology was older in the ighd group (mean 8.2 years) than in those with cphd (mean 4.4 years). 17/18 children with ighd presented with short stature, compared to only 4 with cphd, the remainder presenting with clinical features of other pituitary hormone deficiencies. the mean height centile at diagnosis was lower in the ighd group (0.4th) versus the cphd group (9th). mri brain/pituitary was abnormal in the majority of patients (14/15) with cphd, compared with 1/18 with ighd. both groups responded well to treatment and height increased by one centile on average at 12 months. all patients diagnosed with ghd at temple st had appropriate work-up and follow-up. children with ighd presented later than those with cphd, and had lower height centiles at diagnosis. there was a strong male predominance in children presenting with ighd, which may reflect psychosocial factors. structural pituitary abnormalities were more common in those with cphd, and their clinical presentation was more varied. response to therapy was similar in both groups. background: out of hospital cardiac arrests have poor survival rates of approximately 1-9 %. improving outcomes have been seen in ireland over the past decade. better outcomes are seen if the arrest is witnessed and when bystander basic life support is initiated. a worse prognosis is seen in a rural setting owing to delays in paramedic response times and in the administration of advanced cardiac life support. case report: a 60-year-old donegal male experienced chest pain in his rural home and subsequently suffered a cardiac arrest. his spouse, who 3 months previously had trained in basic life support as part of a fas course, contacted the 'out of hours' gp and ambulance service and commenced cpr. the gp failed to reach the house and the first ambulance broke down. on arrival of the second ambulance, one person cpr had been administered for >40 min. paramedics delivered 10 dc shocks and intubated the patient. in the regional hospital the patient was admitted to the intensive care unit for 11 days, being managed for acute respiratory distress syndrome (ards). transoesophageal echocardiogram on the day of admission showed an ef of 45 %. ct brain showed no acute pathology. once stable, angiography was carried out, showing multivessel disease. discussion at the st james's hospital (sjh) cardio-thoracic conference resulted in a plan for transfer and pci. in sjh a pressure wire study of the left anterior descending (lad) coronary artery was positive and stenting (drug-eluting) commenced: lad × 6 stents, left circumflex × 2 stents and right coronary artery (rca) × 2 stents. the patient is currently well with no overt signs of hypoxic brain injury and is enrolled in a cardiac rehabilitation programme. discussion: this is an incredible case of an out of hospital cardiac arrest. the elapsed time in the chain of survival events would predict a negative outcome. however, adequate cpr was administered, preventing long term brain injury and certain death. this highlights the need for a greater community-based cpr skill base. recently, citalopram and escitalopram have been reported to cause dose dependent qtc prolongation. prescribing guidelines have since changed, including contraindication of co-prescription with other qtc prolonging agents. domperidone is a dopamine antagonist widely used as an antiemetic.
qtc prolongation and ventricular arrhythmias have also recently been highlighted with domperidone and, since november 2011, caution has been advised when prescribing domperidone, particularly in patients >60 years of age or at doses >30 mg/day. in this audit, we aimed to study whether information on qtc prolongation affects prescribing practice by looking at the prescription of a commonly used medical drug with recently highlighted qtc effects, and its co-prescription with psychotropics. a list of drugs with substantial evidence for qtc prolonging effects was obtained. a kardex review was completed on acute medical and surgical, long stay and rehabilitation wards. kardexes with domperidone were reviewed for dose, age, gender and co-prescription of other qtc prolonging agents. of 820 surveyed kardexes, 10 % (n = 81) were prescribed domperidone. 63 % were >65 years. 38 % were on >60 mg/day. co-prescription with another qtc prolonging agent was seen in 37 % of cases; of these, 77 % were psychotropics, most commonly citalopram (n = 8). four patients were co-prescribed >1 qtc prolonging agent. qtc prolonging agents were commonly co-prescribed with domperidone, which continues to be used even in at-risk groups. psychotropics were the class most likely to be concurrently prescribed. further work in this area is necessary to inform clinical psychiatric practice and encourage responsiveness to new evidence regarding cardiac risk. the development of a mathematical model to predict the time to osteoporosis (tto) using dexa scanning background: dual-energy x-ray absorptiometry (dexa) is the gold standard used for measuring bone mineral density, and such readings are currently used to predict osteoporosis and osteoporotic fractures. however, no similar prediction model has been developed to identify the time it will take to become osteoporotic based on dexa scanning. objective: the aim of this study was to develop a mathematical model to determine the tto based on two or more dexa scans, with tto defined as the age at which the patient will enter the osteoporotic t-score range. methods: fifty patients who had previously undergone five dexa scans were identified from the dexa database. t-scores were graphed against patient age using graphpad prism software. straight line fits for the most recent scans and for the cumulative scans were generated, with the age at which the line intersects t = -2.5 being classed as the tto. results: the mathematical model developed successfully predicted the time to osteoporosis for each patient, as well as creating a cumulative osteoporotic trend based on the total dexa scans performed. additionally, if the patient was classified as osteoporotic following dexa scanning, the model also successfully predicted the time out of osteoporosis. implication: the tto provides a simple and informative parameter of dexa scanning that a patient can immediately comprehend, while also providing a simpler measure with which to monitor response to therapy. based on the results presented, the tto can be incorporated into future dexa scan result summaries. further research will involve validation of this tool.
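the tto calculation described above is essentially a linear extrapolation of t-score against age. the sketch below uses entirely hypothetical serial readings to fit a straight line to a patient's dexa t-scores and solves for the age at which the fitted line crosses t = -2.5; it is an illustration of the stated method, not the authors' graphpad implementation.

```python
import numpy as np

# hypothetical serial dexa readings for one patient: (age in years, t-score)
ages = np.array([55.0, 57.0, 59.0, 61.0, 63.0])
t_scores = np.array([-1.1, -1.3, -1.6, -1.8, -2.0])

# least-squares straight line: t = slope * age + intercept
slope, intercept = np.polyfit(ages, t_scores, 1)

THRESHOLD = -2.5  # who osteoporotic t-score cut-off

if slope < 0:
    # age at which the fitted line crosses the osteoporotic threshold
    age_at_threshold = (THRESHOLD - intercept) / slope
    tto_years = age_at_threshold - ages[-1]
    print(f"projected age entering osteoporotic range: {age_at_threshold:.1f}")
    print(f"time to osteoporosis from last scan: {tto_years:.1f} years")
else:
    print("t-score is not declining; no finite time to osteoporosis projected")
```

the same fit, run on a patient whose t-score is already below -2.5 and improving on therapy, would give the "time out of osteoporosis" mentioned in the abstract.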
an audit of clinical outcomes in transcervical resection of the endometrium compared to outpatient balloon thermablation anglim bc, von bunau g department of gynaecology, adelaide and meath children's hospital, tallaght, dublin thermablation was introduced to the coombe in november 2009 and thus far it has provided a quick and effective means of treating women with menorrhagia refractive to medical treatment. a retrospective audit was carried out over a 2 year period in tallaght hospital from november 2009 to october 2011. the aim of the study was to compare the efficacy of balloon thermablation compared to transcervical resection of the endometrium (tcre) with or without mirena coil insertion, in the treatment of menorrhagia. 48 patients in total were studied, 24 of which underwent a tcre, and 24 of which underwent balloon thermablation. out of those who underwent a tcre 16 had successful treatment of the menorrhagia and 6 and 12 weekly follow up, 5 had continued menorrhagia which may require a future hysterectomy, however one of which was due to a large fibroid, and one patient described a reduction in menorrhagia however an increase in dysmenorrheoa. out of those who underwent thermablation 15 were treated successfully, 6 had continued menorrhagia to be considered for hysterectomy, 2 had reduced bleeding but increased dysmenorrhoea and one patients symptoms had resolved however she then developed idiopathic thrombocytopenia purpura which led to a recommencement of symptoms. one can therefore conclude that there are both pros and cons to both procedures, tcre being less expensive, however it requires general anaesthesia and may require mirena insertion. thermablation is more expensive however it is a quick outpatient procedure (2 min, 8 s) and is done under local anaesthetic. akinmoluwa s, tormey s department of breast surgery, mid-western regional hospital, limerick breast pain is a common problem especially among women of reproductive age. it accounts for a great percentage of gp visits by young women. it represents a huge proportion of gp referrals to the breast clinic. the palpable effects of this include, among others, an increase in waiting time, increase in healthcare cost, stress on the limited resources and ultimately a decrease in quality of care. in this era of unfavorable economic climate, it is prudent to sanitise our healthcare systems by way of identifying and eliminating practices that have not been proven to alter the course of care. in this study, i reviewed the number of breast pain cases referred to ms tormey's breast clinic in the month of march. the objective of this study is to determine whether or not all breast pain complaints should be referred for specialist review. to achieve this objective, i reviewed all the cases of breast pain referred to the breast clinic in march. the table represents my findings. it is evident from the study that hormonal mastalgia accounts for majority of breast pain complaints in women of reproductive age while a few other cases are attributed to musculoskeletal and other benign disorders. these women, with no risk factors, only need reassurance and pain relief. they do not require specialist intervention. alrashed d introduction: anaemia is a common finding in the elderly population. it may be a sign of chronic disease, underlying malignancy, nutritional status, or blood loss. 
depending on the classification of anaemia, further investigations such as haematinics and endoscopy may be warranted, as replacing the haemoglobin deficit is never a definitive treatment. objective: to determine the prevalence of anaemia in a population of elderly in-patients and whether further screening was performed. methods: this was a cross-sectional review of all patients 65 years and older under a gastroenterology, a rheumatology, and three geriatrics services at a large teaching hospital. patients' full blood counts were reviewed during their current admission. anaemic patients were then categorised based on anaemia subtype and whether haematinics were investigated. results: out of 116 patients under the five teams, 83 were 65 years and older. 37 out of 83 of these elderly patients were anaemic. none of these subjects had microcytic anaemia during their current admission. 27 out of 37 of these patients had normocytic anaemia. 10 out of 37 of anaemic patients had macrocytic anaemia. haematinics were investigated in 17 out of 37, including 13 out of 17 patients with normocytic anaemia and 4 out of 10 patients with macrocytic anaemia. one patient had abnormal haematinics after being investigated for macrocytic anaemia. conclusion: anaemia was very prevalent in the patients selected for this audit, with the normocytic subtype being the commonest. haematinics were investigated in half those patients. anglim b, murphy c aims: to determine the nature of surgical management of ovarian cysts in the adolescent and paediatric population over a 5 year period. methods: a retrospective audit was carried out over a 5 year period in tallaght hospital from january 2007 to december 2011. this audit reviewed cases of ovarian cystectomy, oopherectomy and salpingooopherectomy using both a hospital online database and records of theatre procedures to identify these patients. results: a total of 103 cases were identified. the commonest presentation was due to pelvic pain. there was a total of 43 ovarian cystectomies, 7 fimbrial cystectomies, 8 oopherectomies, 1 bilateral oopherectomy, and 4 salpingo-oopherectomies. a total of 13 appendicectomies were performed in conjunction with these. histology varied from functional and non functional cysts to dermoids and cystadenomas. there were a total of 30 functional cysts, 13 of which were hemorrhagic. there were 7 follicular cysts, 5 fimbrial cysts, 4 paratubal cysts, 8 dermoid cysts, 2 endometrial cysts, 9 cystadenomas, 6 ovarian torsions and 2 fimbrial torsions. of the total amount of procedures performed 24 were done by a paediatric surgeon, and 52 by a gynaecologist. notably there were fewer cases of benign histology in those procedure performed by gynaecologists. conclusions: adnexal surgery is commonly performed in adolescents and children. pathology is frequently benign. there may be a role for more conservative management. we suggest that imaging of the pelvis and tumour markers should be used more frequently in the pre-operative period. protocols may be developed for future implementation. anglim bc, crowley p day surgery is an efficient way of using hospital beds, provided patients are discharged as planned on the day of surgery. unplanned overnight stay following day surgery places an extra burden on a hospital with the busiest accident and emergency department in ireland. a retrospective audit was carried out of one years day case admissions to determine the incidence and causes of unintended or unplanned overnight stay. 
692 women were admitted as day cases over the period of 1st july 2009 to june 30th 2010. a total of 129 diagnostic laparoscopies, 67 operative laparoscopies, 23 diagnostic hysteroscopies, ninety-three operative hysteroscopies, 4 tension free vaginal tapes (tvt) and 26 miscellaneous minor procedure were carried out during this time period. 20 women (2.89 %) were retained overnight. the main reason for overnight stay was excessive post-operative pain. additional reasons included voiding difficulties, reactions to spinal anaesthetic, asymptomatic tachycardia and the need for intravenous antibiotics. there was no evidence of inappropriate selection amongst the laparoscopies and hysteroscopies, however 50 % of the patients undergoing tvt required admission. one can conclude from this study that most patients were appropriately selected for day case admission. patients undergoing tvt surgery should be scheduled for a 24 h hospital stay. a vulval clinic is an ideal and efficient way of detecting patients with vulval cancer. once potential patients have been flagged by general practice clinicians or other specialities within the hospital, immediate steps can be taken to rule out malignancy. a retrospective audit was carried over a 10 month period on a new vulval clinic which commenced in tallaght hospital on 26/01/2011. the aim of the study was to determine the need for a specialised vulval clinic for detection of vulval cancer. a total of 29 patients were referred to the four clinics which took place over this time frame. the majority of referrals were from general practice, other referrals were from dermatology, gynaecology and colposcopy clinics. the main reason for referral was vulval pruritis and pain. nine patients were referred with suspicious lesions on clinical examination. a total of 18 biopsies were taken, two of which showed vulval intraepithelial neoplasia (vin). amongst the other biopsies were 4 cases of lichen sclerosis and the remaining 12 biopsies showed non specific dermatitis. one can conclude from this study that a combined dermatological-gynaecological clinic would be of benefit. in addition a 6.9 % detection rate of vin was achieved and therefore highlights the necessity of this clinic. the prevalence of renal disease in patients aged above 65 with normal serum creatinine balasubramanian i, peters c, lyons d and o'connor m department of ageing and therapeutics, mid-western regional hospital, limerick background: the prevalence of chronic kidney disease (ckd) increases with age. older patients have lower lean muscle mass and therefore using serum creatinine alone as marker of renal function can lead to underdiagnosis of ckd. objective: the aim of this study was to review the prevalence of ckd amongst a cohort of elderly patients with normal serum creatinine. methods: doctot application on the smartphone was used to calculate egfr in a cohort of patients over 65 years with a normal serum creatinine on admission. 40 patients were included. this application is based on the mdrd formula (includes age, sex, ethnicity and serum creatinine). patients were then classed into the various stages of ckd. results: of the 40 patients reviewed, 35 had renal disease. interestingly, only 5 had a diagnosis of renal impairment recorded in the medical notes. 20 of the 35 patients had stage 1 ckd and the other 15 had stage 2 ckd. 18 of the 35 patients with renal impairment especially stage 2, were found to be frail females over 75 years. 
this group also had a number of co-morbidities, including diabetes and hypertension. conclusion: egfr is better than serum creatinine alone for the assessment of renal function in the elderly. it is important not only for diagnosis but also for appropriate medical investigation and drug prescribing. as the mdrd formula excludes bmi, further research is warranted to compare measurement of egfr using the mdrd formula with the cockcroft and gault equation in this older population. chronic obstructive pulmonary disease (copd) is increasingly prevalent worldwide and the main responsibility for its prevention and management lies with general practitioners. the aim of this audit was to analyse current standards of care of copd patients in a suburban-rural general practice by examining icgp criteria and comparing results with best practice guidelines. the existing coded population of active patients with copd were telephoned and consent was obtained to ask a set of questions designed to examine certain criteria chosen from the icgp copd quick reference guide [1]. of the patients included in the audit (n = 39), 64 % were male, the mean age was 71 years (sd = 11.6) and 82 % were general medical service (gms) patients. there was poor recording of smoking status, high uptake of influenza vaccines compared to international figures, a lower uptake of pneumococcal vaccinations and an increased need for osteoporosis prophylaxis. vaccination reminders, smoking cessation advice and information leaflets have been posted to these patients. protocols for coding and management have been developed and implemented. in conclusion, general practitioners must focus on ensuring optimum management of copd in the community. clinical audit is a useful tool to initiate change. we assessed the accuracy of continuous non-invasive haemoglobin measurement using the sphb pulse co-oximeter when compared to traditional laboratory haemoglobin assessment in an outpatient antenatal population. a total of 125 women were recruited. traditional laboratory haemoglobin samples were taken and quantified in the hospital laboratory. the sphb pulse co-oximeter was calibrated and the mean of three non-invasive measurements of haemoglobin was recorded prior to venipuncture. bland-altman plots were used to determine the acceptability of the new non-invasive test as a replacement for invasive testing in a clinical setting. the mean gestation at haemoglobin estimation was 20.8 (8.6) weeks. laboratory haemoglobin values ranged from 8.8 to 15.1 g/dl with a mean of 12.1 (1.0) g/dl. the range for the sphb pulse co-oximeter assessment was 9.1 to 15.8 g/dl with a mean of 12.6 (1.3) g/dl. non-invasive haemoglobin measurement provides clinically acceptable accuracy compared to traditional haemoglobin testing. negative pressure wound therapy (npwt). in this review we examine the role of npwt in wound healing, compare the products available to clinicians in irish hospitals and explore the cost implications today. we achieved this through a review of online data and peer reviewed articles regarding efficacy, collection and assessment of data from suppliers of npwt, and examination of the use and cost of npwt in the mater misericordiae university hospital. we summarise the mechanism of action of npwt, patient selection and indications for its use. the products available on the irish market are compared.
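the renal function audit above estimates egfr with the four-variable mdrd study equation and suggests a comparison with the cockcroft-gault formula. as a rough illustration of what those two formulas compute, the sketch below implements their standard published forms; the numeric example is hypothetical and is not taken from the audit's data.

```python
def egfr_mdrd(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """four-variable mdrd study equation (idms-aligned constant 175),
    result in ml/min/1.73 m^2; note that weight/bmi does not enter."""
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr


def crcl_cockcroft_gault(scr_mg_dl: float, age: float, weight_kg: float,
                         female: bool) -> float:
    """cockcroft-gault creatinine clearance in ml/min; depends on body weight."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    if female:
        crcl *= 0.85
    return crcl


# hypothetical frail 80-year-old woman with a 'normal' creatinine of 1.0 mg/dl
print(round(egfr_mdrd(1.0, 80, female=True, black=False), 1))      # about 53
print(round(crcl_cockcroft_gault(1.0, 80, 50.0, female=True), 1))  # about 35
```

the example makes the audit's point concrete: a creatinine that looks normal can still correspond to a substantially reduced egfr in an older patient, and the two equations can disagree because only cockcroft-gault uses body weight.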
through examination of these elements we clarify a role of npwt in management of complex wounds and identify flaws in the management of this service that are both wasteful of money and hospital services and create barriers to discharge. potential strategies to correct the issues identified are detailed, for example, funding of the product by the treating hospital rather than by local authorities in the community or selection of less costly devices in negotiation with suppliers by local health authorities. the solutions we outline will potentially have a financial benefit to the hospital, will lead to the more efficacious running of the hospital system and as such will benefit the patient. we conclude that this is a fundamental service and that there are alternative approaches to implementing use of the product in a more efficacious manner. poster 10 q fever: questions to be answered? brandon l, bannon c, fleming c department of infectious diseases, university hospital galway q fever, an aptly named condition, describes infection with gramnegative bacteria coxiella burnetti. q denotes a question, and there are many to be answered in this rare, but not unknown, condition. take mr. m.c, a 45-year-old farm worker, who had an aortic valve replacement in 1994, for congenital aortic valvular disease. he next presented to medical services in 1999, with fevers, sweats, fatigue and weight loss. investigations at the time diagnosed autoimmune hepatitis, following liver biopsy. he commenced prednisolone and azathioprine. in 2002, again symptomatic, he had another aortic valve replacement. post-operatively, he required 6 weeks of antibiotics for a culture negative valvular infection. in 2003, still on immunosuppression, he developed culture negative meningitis, requiring 2 weeks of antibiotics. azathioprine was discontinued. a renal biopsy revealed proliferative glomerular nephritis in 2004, carried out for macroscopic haematuria. he commenced high dose prednisolone and cyclophosphamide. throughout this time, he regularly presented to medical personnel with high fevers, up to 40 c, present since 1999. they responded to steroids but relapsed on doses below 40 mg. in 2004, the fevers were investigated with a toe, and vegetations seen on the aortic graft. he was diagnosed with culture negative bacterial endocarditis, and subsequently tested positive for q fever. this case highlights the q behind q fever, and raises important issues for medical personnel. when should we remember it? when should we test for it? and what can we do to ensure high risk populations dont slip through the cracks, as this gentleman did? previous point prevalence studies of antimicrobial use in sch have consistently produced the same conclusions and recommendations pertaining to prescribing habits, highlighting doctors' failure to meet ideal standards of antimicrobial prescription. the aim of this study was to assess antimicrobial prescribing habits from the doctors' point of view, to compare this to available prescription data and to raise awareness of the principles of prudent antimicrobial prescribing. a multiple choice questionnaire was used to examine antimicrobial prescribing habits with regard to documentation of indication, documentation of a stop/review date, awareness of local empiric guidelines and other principles of prudent antimicrobial prescribing. 40 trainee and consultant doctors were surveyed. 
of those questioned, 38 % claimed they always ensure that an indication for commencing antimicrobial treatment is documented in the patient's healthcare record. only 10 % always document a stop/ review date when prescribing antimicrobials, while 69 % indicated that they had failed to do this at least once in the preceding month. 20 % of those surveyed sometimes or never consult local guidelines. when switching patients from intravenous to oral therapy, 90 % believed oral bioavailability to be an important factor, with only 45 % citing cost as being relevant. identifying doctors self-reporting of their deficits allows us to target appropriate interventions to these deficits. our survey identifies areas where awareness of diverted resources and safety issues could be used as a fulcrum for changing prescribing practices. we recommend formal teaching for doctors in this area, with particular emphasis on prudent prescribing and the correct use of empiric guidelines. venous thromboembolism (vte) is a cause of inpatient morbidity and mortality which may be reduced by appropriate thromboprophylaxis. it is well established that vte risk assessment and thromboprophylaxis prescribing may often be inadequate. recently it has been estimated that as many as 14,000 deaths per year due to hospital-acquired vte in england may have been prevented with appropriate prophylaxis [1] . in the current study, a cross section of inpatients was examined to establish concordance with current evidence-based guidelines for vte prophylaxis. data was collected from inpatient charts and drug kardexes relating to patients on three medical wards. laboratory data was also obtained from the hospital it system. data relating to patient mobility was obtained from medical charts, nursing staff, observation, and the patient themselves. sixty-three medical patients and 8 surgical patients (including one patient under obstetrics and gynaecology) were included in the study. 13 (61.9 %) out of 21 at-risk medical patients who were suitable candidates for thromboprophylaxis had sub-cutaneous heparin prescribed, whereas 2 out of 2 of the suitable at-risk surgical patients were prescribed thromboprophylaxis. 4 medical patients (6.35 %) and 4 surgical patients (50 %) were prescribed anti-embolism compression stockings. prescribing of thromboprophylaxis is relatively thorough in this patient population although it remains less than optimal. there exists some evidence of disagreement amongst clinicians regarding the optimum vte prophylaxis strategy [1] . implementation of hospitalspecific guidelines regarding thromboprophylaxis is recommended in keeping with recognised guidelines [2] . ali sheikh a, chandra r, gardezi a, o'hare j mid-western regional hospital, limerick background: good documentation represents good medical practice. objectives: to assess our current standard of documentation of allergy in admission notes, synchronicity with risk alert bands, information given by the patient, documentation on the front allergy alert section of medical notes and drug kardex. methodology: we assessed five parameters i.e. drug kardex, alert band, medical and admission notes and gathered information from each patient staying in medical and surgical services. there were 371 patients in hospital. results: 19 % of the patients admitted under medical and surgical teams had allergy or allergies to different drugs. 21 % had single allergy, whereas 81 % had multiple allergies. penicillin allergy was the commonest 7.2 % followed by opioids 3.7 %. 
furthermore, it is found that recording of allergies was under par as 24 % was on front page and 50 % appeared in medical notes. more than half of allergy information was found on drug kardex 77 %, patient knowledge 77 % and allergy bands 67 %. it is well documented that use of elastic compression stockings (ecs) prevents post thrombotic syndrome in patients with prior deep venous thrombosis (dvt). a 50 % reduction in these complications has been noted, with 2 year duration of therapy suggested [1] . the advice given to patients, their understanding of the benefits of this therapy and adherence issues has not been documented at sligo general hospital (sgh), this research aimed to address this. a short patient questionnaire was undertaken. this consisted of demographic information, and questions regarding the advice and use of ecs. the population consisted of patients with prior dvt attending the warfarin clinic at sgh. this data collection took place from october 2011-february 2012. the questionnaires were collated and results identified using microsoft excel with simple statistical analysis. eleven patients were included in the study, 36 % were not advised to wear ecs, and only 27 % wore the ecs daily. reasons for nonadherence include; difficulty fitting, discomfort and no benefit noted. improvement in adherence could be achieved if advice was given promoting use, the benefits explained, optimal frequency/duration of use advised and correct measurement. as research strongly supports use of ecs, it is essential adherence is encouraged to reduce the risk of post-thrombotic syndrome and future dvt. opiate injecting drug use is a well-established phenomenon in inner city dublin. the complications arising from this practice affect a predominantly young cohort of patients, who under different circumstances would be expected to enjoy good health. acute infections, acute vascular issues such as pseudoaneurysm, and chronic medical conditions such as hepatitis c and hiv are well recognised and frequently encountered by medical physicians who care for these patients. we present the case of a lady in her thirties with a long history of opiate injecting drug use. approximately 6 months prior to presentation, she underwent left sided pseudoaneurysm repair. she presented to the emergency department in a drowsy opiate induced state. on waking, she stated that she had lost some needles while injecting into her groins. plain radiology of pelvis revealed the two ''lost'' needles ( fig. 1) . on closer questioning she admitted to significant manipulation of the needle injecting path and angle in the weeks prior to presentation. she had attributed this to her previous surgery and the duration of her injecting drug use. follow-up duplex sonography revealed bilateral intact femoral arteries. further surgical management was non-operative with the focus on addiction counselling and further attempts at facilitating cessation of heroin use. bilateral ''lost'' needles is an unusual complication of injecting drug use and certainly would not rank as one of the protean manifestations of such practices. the aim of the study was to investigate patients' recall of their surgery and influencing factors. a questionnaire was given to patients at outpatient follow up and surgical details were recorded. 165 patients completed the questionnaire. the median age was 59 years (range 28-85) and the median follow up was 26 months (range 1-168). 
the extent of surgery did influence patients' recall with those having an alnd (n = 51, 31 %) having significantly more accurate recall of their surgery as opposed to those who had slnb (n = 114, 69 %), p = 0.007. the presence of ongoing postoperative symptoms also significantly improved recall, p = 0.004. almost half the patients who had slnb (46.5 %) could not accurately remember the extent of the surgery they had but 55.3 % were more careful of their arm or would not allow cannulation. the patient's consent process influenced patient accuracy. patients who filled the consent at both the outpatient consultation and in the hospital were significantly more accurate than those who had signed the consent at the clinic or hospital alone, p = 0.01. patients who have minimally invasive surgery, such as slnb are not accurate at recalling their surgery. this misinformation results in confusion over the subsequent vigilance of their upper limb. the consent process may have a role in improving patient recall. introduction: waiting times can exceed 100 days for general surgical clinics and can reach up to 18 months in different surgical specialities. many outpatient slots are lost by patients who do not attend (dna) to their scheduled appointment. we sought to ascertain whether a reminder text message (rtm) could decrease the number of patients who dna to surgical outpatients. methods: a single text message was sent to patients 4 days before their scheduled appointment, reminding them of the date and time of their upcoming surgical outpatient visit. this incentive was initiated in january 2011. outpatient appointment scheduling and attendances for a single surgical team were analysed over a 1 year period, encompassing two 6 month periods before and after implementation of the rtm service. data was exported to spss v17 for statistical analysis with p \ 0.05 considered statistically significant. results: over the 12 month period there were 1,287 scheduled outpatient appointments for the surgical service, with 653 attending prior to the implementation of the reminder text message service and the remaining 634 attending in the 6 months following its implementation. the percentage of dna patients did not differ significantly (21. classically, focussed assessment with sonography in trauma (fast) addresses a yes/no binary question as to whether fluid is present in the context of trauma. fast generally concentrates on four areas: perihepatic, peri-splenic, pelvic and a sub-xiphoid view of the pericardium. we report on two patients who were the victims of trauma. both patients had normal haemodynamic parameters. in both patients, the initial fast ultrasound scan was technically negative but it exhibited other signs of intraperitoneal injury. in the first case, a young gentleman sustained a penetrating injury to his right upper quadrant area. morison's pouch (the interface between the liver and the right kidney) did not exhibit any fluid. there was, however, a thin anechoic strip around the gallbladder. ct confirmed the suspicion of peri-cholecystic fluid and this patient required urgent laparotomy and repair of his hepatobiliary injury. in the second case, a gentleman in his thirties sustained a blunt injury to his left upper quadrant. ultrasonography exhibited heterogeneous echogenicity of the spleen. this patient proceeded to have urgent laparotomy and splenectomy for this shatter-type injury. 
as experience with fast techniques grows, the binary question of whether intra-peritoneal fluid is present becomes more nuanced. the objective of this audit was to review the hospitals compliance with hospital guidelines, to get an overview of how fluids are being prescribed in the hospital and to produce quality improvement plans. thirty drug kardexs were chosen randomly from wards around the hospital, both medical and surgical. if a kardex was found to have no fluid prescription, an alternative kardex was chosen in its place. note was taken on whether the prescription had the patient name and hospital number, the date, name, dosage and strength of the prescription, the route of administration and the frequency and rate of administration. the main areas of non-compliance were found to be: name: only 34 (30.4 %)orders out of 112 had the name on the order medical record number: only 34 (30.4 %)orders out of 112 had the mrn on the order, and the route of administration was not present on any of the 112 orders checked. in conclusion, this audit would suggest that there is a lack of compliance with detailing the patients name and mrn on fluid orders, that the route of administration was not written on any kardex, however the back page of each is exclusively dedicated to iv fluid prescription and also that non-approved abbreviations are being used when prescribing fluid orders. spontaneous hip fractures, or fractures without a fall have been described in up to 6 % (1, 2) of cases of hip fracture. an upsurge in such cases was recently observed in our emergency department. we present these in the form of a retrospective case series. patient 1 is a 43-year-old ex intravenous drug user who presented with non-traumatic right-sided hip pain over a period of weeks. initial plain films did not reveal fracture. over 1 week her symptoms deteriorated to the extent that she became unable to weight-bear. patient 2 is a 66-year-old gentleman with increasing left sided hip pain following a seemingly innocuous fall 3 months prior to index presentation. again initial radiographs did not reveal an abnormality. patient 3 is an 83-year-old bed-bound nursing home resident with end-stage alzheimer's disease. she was noted by nursing staff to have bilateral hip symptoms post seizure. the patient was unable to mobilise independently and had not fallen out of bed at any stage. patient 4 is a 29-year-old lady who presented with unilateral sacroiliac pain following a recent intensive exercise program including kickboxing 1 week previously. in each of these cases, subsequent review and plain films demonstrated fracture and in one case bilateral fractures secondary to seizure were demonstrated. our cases highlight the need for diagnostic vigilance and a structured approach in dealing with possible radiologically occult hip fractures, even in patients with no proximate antecedent history of trauma. delerium, or acute confusional state, is a common presentation to our emergency departments, and occurs in up to 30 % of hospitalised patients. we describe the case of acute deterioration in mental status, on a background of alzheimer's disease, with an interesting aetiology. mr k's family sought emergency medical review of 5 days deterioration; withdrawal, somnolence and general disorientation. he is a 73-year old with moderate alzheimer's disease. history and initial investigations were unremarkable. he was mildly dehydrated and physical exam showed only mild truncal ataxia. 
further investigations to elucidate cause included lumbar puncture, mri brain and immunological and vasculitic parameters. serology revealed human immunodeficiency virus (hiv) infection with acute seroconversion pattern. a history obtained with help of his family identified several casual heterosexual partners within past year. this included a contact who may be an intravenous drug user, with involvement in commercial sex work. symptoms abated within a week of admission, following pattern of hiv viral load. he has subsequently commenced antiretroviral therapy. this case highlights several areas of interest. sexual history is often overlooked in the older patient, which can be deleterious to outcomes. trends of hiv infection in ireland include primary infection in the older person, in addition to greater longevity of people infected in earlier adult life. we would advocate opt-out testing within the emergency department, and this is currently under study in our tertiary emergency department. comparison of comorbidities in patients with pre-diabetes to those with diabetes mellitus type 2 the management of type 2 diabetes and its complications are well researched. the prevalence of these complications in pre-diabetes has not been researched to the same extent. there has been no research comparing the prevalence of complications in pre-diabetes and type 2 diabetes in ireland. a cross sectional study performed on 309 pre-diabetes and 309 type 2 diabetes patients, selected from the diabetes interest group database (a database of the diabetic patients in 30 general practices in cork region) using stratified sampling for age and gender. a questionnaire was designed and completed in each practice assessing the presence of diabetes related complications in pre-diabetes and type 2 diabetes patients. data was analyzed on spss. the prevalence of complications was determined and the chi square test performed to see is there a statistically significant difference in the prevalence of these complications between pre-diabetic and type 2 diabetic patients. the prevalence of ischaemic heart disease and autonomic neuropathy is actually higher in pre-diabetes but the prevalence of renal disease and cerebrovascular disease is higher in type 2 diabetes. none of these differences in prevalence are statistically significant. the prevalence of peripheral vascular disease, eye disease and peripheral neuropathy is higher in type 2 diabetes, this difference being statistically significant. the prevalence of many of the complications in pre-diabetes is as high as in type 2 diabetes which may have implications for the screening and management of these conditions and the related comorbidities. the both groups were evenly matched. the median age was 73 and median homocysteine level was 11 (range 5-34.9). results: in group b, immediate clinical improvement was equivalent between the normal homocysteine group and treated hyhc group. median time to binary restenosis in hyhc was 29 months and in normal homocysteine was 50 months. p = 0.4335. secondary endpoints and all cause survival showed no significant difference. pre-treatment multivariate logistic regression for group a; depicts that hyhc is the main culprit of graft occlusion and limb loss p \ 0.0001. multivariate logistic regression for treatment group reports that corrected hyhc is no longer a significant factor of operative outcome. conclusion: patients with treated hyhc have similar outcomes compared to those with normal homocysteine. 
it is therefore crucial to measure homocysteine in all patients with cli and correct aggressively prior to intervention to improve outcomes. the efficacy of clinical guidelines in promoting co-prescription of bone protection with glucocorticoids among hospital doctors treating inpatients background: therapeutic glucocorticoids (gc) rapidly decrease bone mineral density, inducing a catabolic shift by promoting osteoclast differentiation and activation and by inhibiting osteocytes. current guidelines (1) direct that bisphosphonates (bp's) and calcium carbonate 1,200 mg (ca ++ co 3 -) with vitamin d 3 (vit. d 3 ) should be given at initiation of gc therapy as it is known that bone catabolism occurs early with steroid usage. we circulated these guidelines within our hospital after auditing the existing practice of the hospitals doctors and 1 year later we sought to measure the efficacy of our intervention by completing an audit loop. methods: a cross sectional audit was performed of all adult medical and surgical inpatients in a tertiary referral centre teaching hospital. it was noted if inpatients had been prescribed gc and if concurrent anti osteoporotic medication had been prescribed. subsequent to the initial audit, guidelines promoting the use of bp's, ca ++ co 3 and vit. d 3 when prescribing gc's were advertised on hospital notice boards, in hospital bulletins, hospital prescribing guidelines and on the hospital website. one year after publishing the new guidelines the audit loop was completed by performing a similar cross sectional audit. results: all inpatient medical records (n = 417) were reviewed in jan 2010 of whom 52 % were female and 58 % were older than 65. 66/417 (16 %) inpatients were prescribed gc's. ca ++ co 3 with vit. d 3 was prescribed for 20 % of patients on gc's with 2 % also receiving bp therapy. 3 % of patients were also receiving-post menopausal hormone replacement therapy. in nov 2011 1 year after guideline publication all 452 inpatient medical records (n = 452) were reviewed of whom 63 % were female and 60 % were older than 65. 55/452 (12 %) inpatients were prescribed gc's. ca ++ co 3 with vit. d 3 was prescribed for 55 % of patients on systemic steroids with 20 % also receiving bp therapy. creation and circulation of hospital guidelines resulted in an improvement in the co-prescription of ca ++ co 3 and vit. d 3 and bp's with gc's by the order of 2.35 and 10 respectively. however 45 % of patients on systemic steroids received no bone protection and 80 % received suboptimal bone protection from steroid induced osteoporosis. conclusion: publication and advertisement of current bone protection guidelines when prescribing systemic steroids resulted in a substantial but suboptimal improvement by hospital doctors in our hospital in the co-prescription of bone protecting drugs to prevent steroid induced osteoporosis. in this audit it appears that the majority of prescribers do recognise the necessity to protect bone health when a patient requires steroids. however a substantial number of patients did not receive any bone protection. it is our perception that most physicians are not aware that short courses of steroids reduce bone mineral density and therefore greater efforts must be made to enhance doctor awareness of the necessity for bone protection to be prescribed at initiation of systemic steroids. there is a trend towards longer total survival for jetflow tcvcs. 
these results suggest a potential advantage from using this line type; however, further study and formal cost analysis need to be undertaken prior to changing our practice. with increasing resource restrictions, appropriate ordering of blood tests is vital for medical economic viability. this study evaluated the pattern and cost of thyroid function test (tft) requests and aimed to determine if tsh alone identifies thyroid abnormalities. a retrospective review of tfts performed on in- and out-patients at a 350-bedded regional hospital was undertaken in january 2011, evaluating the number, results and costs of tsh, t4 and t3 levels. 4055 tsh, 3959 t4 and 28 t3 were ordered. 3456/4055 patients (85.2 %) were euthyroid. tsh abnormalities occurred in 526/4055 (13.0 %) (table 1). only 82/4055 (2.0 %) patients had a normal tsh despite an abnormal t3 or t4 level. 57/82 (69.5 %) of these patients had known thyroid disease, undergoing treatment with thyroxine or thyroid-blocking medications. 9/82 (10.9 %) had t4 levels <1 nmol/l outside the normal range and were asymptomatic, so were considered to be euthyroid. 16/82 (19.5 %) had a variety of diagnoses, for example, pituitary disease. tft reagents alone cost €10,600. this study has identified that non-selective requests for t4 and t3 add little diagnostic value, except in certain circumstances like treatment of thyroid disease, in pregnancy or if pituitary disease is suspected. optimising tft requests could save in the region of €42,000 per annum. tsh alone would appear to be adequate for the majority of patients. case study: neurodegenerative disorders we present a case with an unusual combination of neurodegenerative disorders. a fit and healthy 60-year-old man, with no history of medical or psychiatric illness, deteriorated progressively over a 10 year span, presenting initially with speech and language difficulties, followed by development of extra-pyramidal signs non-responsive to levodopa. neurological permacol mesh is an acellular porcine-derived dermal collagen surgical implant used in a wide variety of surgical reconstructions and repairs. we describe two cases where permacol mesh was used to anchor the contents of the femoral triangle in patients undergoing radical block nodal dissection as part of the surgical management for metastatic penile squamous cell carcinoma, one of whom had an atrophied sartorius muscle due to previous infection with poliomyelitis. both patients underwent successful inguinal node dissections and femoral triangle repairs, with permacol proving to be an effective means of protecting the femoral vessels in both patients despite complications related to wound healing secondary to a fixed flexion deformity in one patient. a 65-year-old gentleman, with a past history of vestibular schwannoma requiring a ventriculoperitoneal shunt (vps), was admitted with acute diverticulitis. his condition worsened and required a laparotomy for bowel perforation and faecal peritonitis. this case reports the successful perioperative management of the patient with a vps in situ in the setting of an emergency abdominal surgery. vps placement is an effective treatment of hydrocephalus, diverting cerebrospinal fluid (csf) into the peritoneal cavity. unfortunately, the shunt devices have a high incidence of malfunction mainly due to catheter obstruction or infection and are associated with various complications, 25 % of which are abdominal [1] .
incidental pathology unrelated to the vp shunt can also occur such as appendicitis [2] , endometriosis [3] and diverticulitis as in this case. no standard current set of guidelines for perioperative management of vps exists for patients undergoing general gastrointestinal or urologic procedures with varying degrees of contamination [4] . this case reports successful and conservative management of a patient with a vp shunt that underwent contaminated abdominal surgery. there is no consensus on the management of vps in patients undergoing elective or emergent abdominal surgery and further studies are required in this area. the use of antithrombotic therapy on management of atrial fibrillation in an irish general practice malomo k 1 , breen n 2 , dunne l 3 , farrell g 3 , bryne p 3 1 ucd (university college dublin), ireland, now intern, mid-western regional hospital, limerick; 2 general practice, dublin, ireland; 3 pottersway medical centre, bunclody, ireland background and objective: atrial fibrillation (af) is a common cardiac arrhythmia associated with increased risk of stroke events [1] . to assess the use of antithrombotic therapy in patients with known af attending an irish general practice (igp) and use of stratification schemes to assess their suitability for oral anticoagulant therapy. methods and subject: permission to carry out the study was sort from university-college-dublin ethics committee. there were 161 patients with af attending the igp identified using the computerized disease coding system who international classification of disease (icd-10). thirty patients were diagnosed between 01/01/2009 and 21/01/2011 and their data from the computerized medical notes was used to calculate chads 2 , cha2ds 2 -vasc, has-bled scores and identify antithrombotic therapy they were using. results: there were 30 af patients. sixty-three percent (n = 19) were males and 37 % (n = 11) were females (ratio 1.7:1). twentythree percent (n = 7) of patients were aged \65 years, 27 % (n = 8) 65-74 years inclusive and 50 % (n = 15) =/[75 years. two patients with chads 2 score zero were on warfarin although one of them had cha 2 ds 2 vasc score of one. sixty-percent (n = 18) were on warfarin alone, 20 % (n = 6) aspirin alone, 14 % (n = 4) warfarin plus aspirin, 3 % (n = 1) aspirin plus clopidogrel and 3 % (n = 1) on warfarin plus clopidogrel. seven patients were not on warfarin for various reasons. the has-bled score revealed 7 patients at low risk, 12 moderate risk and 11 at high risk of bleeding. implications: ninety-three percent of patients were correctly managed and two patients were on warfarin with chads 2 scores of zero. the use of evidence based management guidelines is necessary to manage patients. keywords: atrial fibrillation, chads 2 score, cha 2 ds 2 vasc score, has-bled score meckel's diverticulum is the most common congenital abnormality of the gastrointestinal tract. only 16 % of meckel's diverticulum are symptomatic [1] . it can cause complications such as ulceration, obstruction, intussusception, haemorrhage and perforation and these complications are more common in the paediatric age group. a 16-year-old has a lifetime risk of 3.7 of developing a complication, this falls to zero over time [2] . adults most commonly present with bleeding [1] . we have a case of a 37-year-old male who presented with a 3 day history of abdominal pain, constipation and anorexia. on examination he had rif tenderness, but no signs of peritonism. a provisional diagnosis of appendicitis was made. 
the patient was taken to theatre the next morning for laparoscopy and appendicectomy. the appendix was normal and surgery proceeded to laparotomy. an inflamed and perforated meckel's diverticulum was found. a terminal ileum resection with side to side anastomosis was performed. the patient made an uneventful recovery and was discharged to opd follow up. this case illustrates the importance of further evaluation following normal laparoscopy in the case of the ill patient. references: neonatal graves disease is a rare condition, caused by transplacental transfer of thyroid stimulating antibodies from mother to fetus. 0.2 % of pregnant women have graves disease and 1.5 % of their offspring will have overt hyperthyroidism. a further 3 % will have biochemical thyrotoxicosis without symptoms. this is the case of a baby girl with neonatal graves disease. her antenatal course was uncomplicated until 39 weeks gestation. at this point, her mother became clinically thyrotoxic. maternal blood tests showed an elevated free thyroxine level (50 pmol/l) and positive thyroid receptor antibodies. a diagnosis of graves disease was made. she was commenced on treatment but remained thyrotoxic at the time of delivery. the baby was healthy at birth. however, thyroid function tests on day 2 of life showed an elevated free thyroxine (40 pmol/l) and thyroid receptor antibodies were positive. clinically, she remained asymptomatic and examination was normal. treatment with carbimazole was commenced and the dose titrated to maintain her euthyroid. most neonates affected by neonatal graves disease will have biochemical thyrotoxicosis but are clinically asymptomatic. the minority will be severely affected with goitre, eye signs, weight loss, tachycardia, arrhythmias and heart failure. it is a transient disorder, limited by clearance of maternal thyroid receptor antibodies and is usually self-limiting over 3-12 weeks. mortality rates of up to 20 % are reported in untreated cases, usually from arrhythmias and heart failure. this case emphasises the importance of close monitoring of pregnant women with a history of thyroid disorders, before and during their pregnancy, as well as monitoring their babies in the neonatal period. fibreoptic bronchoscopy is considered a safe diagnostic tool [1] . it is suggested however that post-bronchoscopy complication rate increases with age [2] . we decided to study the complication rate and the outcomes of bronchoscopy in patients over the age of 80 years in our institution. a retrospective review of the case notes of patients aged greater than 80 years who underwent bronchoscopy between september 2009 and november 2011 was carried out. data on complications experienced during and after bronchoscopy and the influence of the results on subsequent management of patients were collated and analysed. ninety-six patients were included. the mean age was 82.8 years (sd 2.98). thirty subjects (31.25 %) had a documented lung disease. fifty-nine patients (61.45 %) were current or ex-smokers. indications for bronchoscopy were; to evaluate for malignancy (93.8 %) and to evaluate for tb (6.2 %). post bronchoscopy complications were noted in eight (8.2 %) cases including hypoxia (3.1 %), infection (2.1 %), tachycardia (1 %) haemoptysis (1 %) and pneumothorax (1 %). six patients required treatment including nebulised bronchodilators (2.1 %), antibiotics (2.1 %), and oxygen therapy (2.1 %). malignancy was diagnosed in twenty cases (20.8 %). 
clinically significant pathogens were detected in six cases (6.2 %). as a result of bronchoscopy fourteen patients (14.6 %) had alterations to their drug therapy, three (3.1 %) received lung cancer treatment with curative intent, eighteen (18.8 %) had palliative care input, seventeen (17.7 %) were referred for further investigation and thirty-seven (38.6 %) had no change to their management. in conclusion, bronchoscopy is relatively safe and has good diagnostic utility in patients aged more than 80 years. patient records were identified from a database of patients who underwent a spinal mri to investigate spinal metastatic disease between november 2006 and april 2009. an analysis of the management of those diagnosed with mscc, specifically radiotherapy and/or surgical intervention was performed. three hundred and sixtyone patient records were identified with one hundred and seventy-one patients having metastatic spinal column disease. of these, thirty-four had mri evidence of metastatic spinal cord compression. radiotherapy alone was the most common therapy employed for patients with mscc. a multidisciplinary team approach was not taken in the majority of cases. a surgical opinion was sought in the minority of cases. this is not congruous with nice guidelines as a management protocol. the complexity of management decisions for metastatic spinal cord compression demands a multi-disciplinary approach. current practise in this major supra-regional cancer centre does not routinely employ this approach. a surgical opinion is sought in the minority of cases. this reflects the national trend with some centres having no spinal surgeons as staff. we recommend the establishment of a care pathway in order to comply with best evidence based practise as outlined by the 2008 nice guidelines. pet ct as a staging modality in primary cervical cancer; to establish the correlation between histological subtype and fdg-18 avidity of the primary lesion purpose: pet ct has become one of the mainstays of diagnostic imaging both in staging and prognosis of cervical cancer. we wanted to establish the link between fdg-18 uptake in the primary lesion and correlation with specific histological subtypes of cervical cancer including squamous cell carcinoma, adenocarcinoma and other rarer subtypes such as clear cell and adeno-squamous carcinoma. methods and materials: the main audit involved working out the fdg uptake in the primary lesions from the cervical cancer database of patients. the patient list was derived from a database of patients collated by the gynaecological services at sjh of all patients who received workup and treatment for cervical cancer from 2006-2011. the computer system at sjh was employed for analysing pet-ct reports and histology reports. microsoft excel was used to store this information parameters and complete statistics on the data. results: the results of this study are to follow. conclusion: there is a correlation between fdg avidity and histological subtype of cervical cancer and this provides valuable information on the reliability of pet-ct findings in a specific cohort of patients with cervical cancer. we present the case of a 44-year-old male with a primary piriform fossa squamous cell carcinoma (scc) who attended for staging positron emission tomography/computerised tomography (pet/ ct) scan. distant to the primary lesion, focused f 18 fluorodeoxyglucose (fdg) uptake was noted in the left iliac bone, without underlying abnormality on the accompanying ct scan. 
low grade uptake was also noted in subcentimetre upper mediastinal nodes, without any underlying lung parenchymal abnormality. these nodes were felt to be inflammatory or reactive in origin. though an unusual pattern for metastatic head and neck scc, the left iliac bone lesion was concerning for malignancy. thus, a percutaneous biopsy of this region was performed under image guidance. histology revealed non caseating epithelioid granulomata consistent with sarcoidosis. the patient was subsequently able to have potentially curative treatment of his head and neck primary. discussion: sarcoidosis is a chronic inflammatory multisystem condition characterised by the presence of non-caseating granulomas in affected organ tissues. it commonly affects young and middle aged adults with a slightly higher prevalence in women. the disease shows a predilection for adults under 40, peaking between 20 and 29, with a second peak in women over 50 [1] . despite its unknown aetiology, it is felt that t lymphocytes play a central role in the development of sarcoidosis, as they likely propagate an excessive cellular immune reaction. it has been shown that abnormalities with the cd4/cd8 ratio and production of t helper 1 and 17 (th 1/th 17) cytokines such as interferon and tumour necrosis factor (tnf) are found in sites of disease activity [2] . the importance of tnf in sarcoidosis is demonstrated by the efficacy of anti-tnf medications such as pentoxifylline and infliximab [3] . it is estimated that bone lesions occur in 1-13 % of sarcoidosis patients [4] . these figures are however based on radiographic data and are likely an underestimate as the majority of bone lesions would be asymptomatic [5] . varying osseous manifestations of sarcoid have been described; punched out lytic lesions, lace-like destruction and subperiosteal resorption mimicking hyperparathyroidism. commonly, the small bones of the hands and feet (predominantly the middle and distal phalanges) are involved often bilaterally and symmetrically. while pulmonary involvement occurs in 90 % of patients with sarcoidosis [6] , bony involvement is rare without other clinical manifestations of the disorder [4] . indeed our patient had low grade subcentimetre mediastinal nodes. the fdg avidity of sarcoid is a well documented phenomenon. indistinguishable from metastatic disease on f 18 fdg pet scan alone it can lead to false-positive appearance of metastatic disease on pet/ ct. furthermore one-third of pet/ct positive sarcoidosis have osseous abnormalities on pet/ct the majority of which will not be evident on low dose ct [7] . this case serves to remind us of the diagnostic limitations of f 18 fdg pet in the differentiation of inflammatory and metastatic processes. in a patient with an unusual pattern of 'metastatic' disease tissue diagnosis is a necessity. distinct islet auto antibodies against antigens insulin, gad65, ia2 and znt8 have been identified. the presence of autoantibodies has been shown to be predictive of reduced beta cell mass. international data suggests that 85-90 % of patients with newly diagnosed t1dm are positive for at least one of the above antibodies. our aim is to study the prevalence of autoantibody positivity in our population of children with newly diagnosed t1dm over a 5 year period (2007) (2008) (2009) (2010) (2011) . details of all children newly diagnosed with t1dm were collected using the endocrinology department database and chart review was undertaken. 
children diagnosed elsewhere whose care was transferred to our centre and children who had non type 1 diabetes were excluded. one hundred and thirty-six children were diagnosed with t1dm in our centre, of which 37 (27 %) presented in diabetic ketoacidosis. age at diagnosis ranged between 10 months and 15.7 years. the male to female ratio was 1:2. other autoimmune conditions (coeliac disease, hypothyroidism, addison's) were present in 11 %. ninety-six percent (n = 130) were tested for one of the three antibodies. 76 % were positive for at least one antibody, 30 % positive for two, 5 % positive for all three antibodies. the most common antibody found was anti gad (60 %). positive autoantibodies are helpful in confirming the presence of t1dm and their absence in raising the possibility of monogenic diabetes. the absence of pancreatic islet autoantibodies at diagnosis can be predictive for maintained beta cell function during the 2 years after diagnosis. maternal obesity, based on a body mass index (bmi)[29.9 kg/m 2 , is associated with increased pregnancy complications. moderate exercise during pregnancy is associated with decreased complications such as pre-eclampsia [1] and gestational diabetes mellitus [2] and has a beneficial effect on mood with those who exercise experiencing fewer symptoms of depression and anxiety both during and after pregnancy [3] . the purpose of this study was to determine if obese women exercise less during pregnancy. we recruited 110 women at their convenience after a routine scan confirmed an early ongoing pregnancy. maternal height and weight were measured accurately and bmi calculated. women completed the international physical activity questionnaire. of the 110 studied in early pregnancy, 10.9 % took no exercise, 58.2 % walked only, 21.8 % undertook moderate exercise and 9.1 % undertook vigorous exercise. of the obese women (n = 20), only 10 % reported moderate-vigorous exercise in early pregnancy compared with 34.5 % in women from the normal bmi category (n = 55). also 15 % of the obese group reported doing no exercise compared with 10 % of those with a normal bmi. women with a bmi of 30 or more were found to sit for an average of 453 min per day whereas those with a normal bmi sit for 320 min per day on average. although bmi increases with age and parity, these variables were not found to influence exercise levels in early pregnancy. exercise may be physically challenging in obese women, particularly if morbidly obese, but due to its beneficial effects it should be encouraged antenatally in all pregnant women irrespective of their bmi category. references: angiogram showed an absence of coronary artery disease and echocardiogram ruled out structural abnormality. exercise stress test showed short runs of vt in recovery. further tests included ajmaline and adrenaline challenges. cardiac mri showed right ventricular outflow tract scarring consistent with either a primary diagnosis of arvc or secondary with that of myocarditis. sarcoidosis was outruled by further laboratory and radiological means. non-sustained runs of vt on telemetry were noted and a dual chamber implantable cardiac defibrillator was placed. on discharge, medication included atenolol 100 mg daily and patient will undergo genetic screening. follow up for the siblings included phenotyping and mri. discussion: history, presentation and pathology uncovered are consistent with a diagnosis of arvc. 
suspected paternal inheritance of an autosomal dominant genetic defect predisposed to the ventricular arrhythmias which at first, manifested as self-limiting palpitations however, later caused a near fatal event. long term management may include cardiac transplantation. prevalence of diagnosed atrial fibrillation in adults: national implications of rhythm management and stroke prevention: the anticoagulation and risk factors in atrial fibrillation (atria) study complications of fiberoptic bronchoscopy at a university hospital the relationship between age and process of care and patient tolerance of bronchoscopy central skeletal sarcoidosis mimicking metastatic disease sarcoidosis is a th1/th17 multisystem disorder osseous sarcoidosis treated with tumour necrosis factor-inhibitors: case report and review of literature. spine (phila pa 1976) musculoskeletal manifestations of sarcoidosis multiple atypical bone involvement in sarcoidosis imaging in sarcoidosis. semin respir crit carre med f-18 fdg pet/ct for detecting bane and bone marrow involvement in sarcoidosis patients poster 42 diaphragmatic rupture: delayed diagnosis and its consequences-a case report diaphragmatic rupture: a frequently missed injury in blunt thoracoabdominal trauma patients diaphragmatic rupture due to blunt trauma: sensitivity of plain chest radiographs the introduction: lymphoscintigraphy has been shown to be accurate in identifying sites of potential nodal metastases in melanoma patients. recent guidelines published by the eortc-eanm have defined specific criteria with relation to performing lymphoscintigraphy in melanoma patients. methods: the aim of this study was to audit all patients with malignant melanoma who underwent sentinel lymph node biopsy (slnbx) and lymphoscintigraphy in university college hospital galway between 2005-2010. results were compared with eortc-eanm recommendations. results: 189 melanoma patients underwent slnbx during the study period. 121 patients had preoperative lymphoscintigraphy using intradermal injections of technetium 99 m. sentinel nodes were identified in 102 of 121 patients (84.3 %) on lymphoscintigraphy. 66.94 % of lymphoscintigrams were reported on the same day as the procedure, 23.97 % after 1 day and 9.09 % greater than 1 day postop. obligatory imaging, as defined in the eortc-eanm guidelines, was obtained in 91 % of all patients undergoing lymphoscintigraphy. no nodal uptake was reported in 18 patients, 14 of whom received imaging in accordance with the guidelines. the location of those melanomas with no nodal uptake was 44.4 % on the head and neck, and 38.9 % on the trunk. the overall rate of false-negative lymphatic mapping and sentinel node biopsy was 5.2 %. in patients receiving lymphoscintigraphy the false negative rate was 3.7 versus 7.9 % in patients who did not have lymphoscintigraphy. conclusion: preoperative lymphoscintigraphy is an essential adjunct in identifying the sentinel lymph node in clinically node negative melanoma patients and should adhere to eortc-eanm guidelines. conflict of interest: none. on examination, she was alert, hr 160, bp 105/60. she was tachypnoeic, but reported this to be her baseline. there was a palpable, non-reducible mass in the left upper quadrant. a chest x-ray showed loops of bowel above the diaphragm. ultrasound showed an abscess in the rectus sheath, which drained mucopurulent fluid.mb opted not to have the diaphragm repaired, despite medical advice. she was readmitted 2 weeks later with a recurrence of the abscess. 
her clinical condition deteriorated, with severe abdominal pain, and oxygen saturations of 70 %. an emergency laparotomy was performed, which showed an obstructing lesion in the descending colon, with large and small bowel above the diaphragm. she had an extended right hemicolectomy, with restoration of bowel to the abdominal cavity and mesh repair of the diaphragm. histology showed a descending colon adenocarcinoma, t3n0m0. traumatic diaphragmatic rupture is a rare problem, occurring in 1-8 % of blunt and penetrating traumas. (1) plain films and ct scans are not always diagnostic in the acute phase, due to concomitant injuries. (2) repair is essential once diagnosis has been reached to avoid herniation of abdominal viscera. patients with ongoing dyspnoea after blunt trauma may benefit from a repeat chest x-ray. a 75-year-old retired veterinary surgeon was referred to a tertiary referral centre with a 2-month history of a painless enlarging neck mass. clinical examination showed a right-sided neck mass approximately 7 cm × 5 cm in size which extended through both anterior and posterior triangles. cervical lymphadenopathy was not appreciated and the patient was clinically euthyroid. the patient was admitted under the care of the maxillofacial service, where he underwent a needle core biopsy of the neck mass. this was returned showing a poorly differentiated spindle cell tumour with large pleomorphic nuclei and abundant abnormal mitoses. the immunoprofile was consistent with metastatic poorly differentiated sarcomatoid carcinoma and the differential diagnosis included origin from the kidney, lung or thyroid. the case was discussed at the head and neck mdm and a consensus was reached that the patient had developed a sarcoma of the neck, with a level 5 neck dissection the most appropriate intervention. intraoperatively, following the removal of the neck mass, it was noted that the right lobe of the thyroid was enlarged. an intra-op fna was performed on the mass in the right lobe of the thyroid. the fna was returned showing bizarre giant cells, suggestive of malignancy. ultimately the patient underwent a total thyroidectomy but, despite surgery, the patient died 19 weeks post-operatively. using feedback from the pilot study and analysis of the pre-examination consultant- and registrar-led teaching schedule for students, a further 'intern-led' tutorial timetable was structured. it allowed for a weekly maximum of 9 h of teaching dependent on demand and intern availability. programme duration was 10 weeks, january to march 2012. group sizes were a maximum of 8 students. tutorials were all at the patient bedside. feedback forms were distributed at the end of the programme. sixty-four tutorials were given in total. seventy feedback forms were returned. the mean number of tutorials attended per student was 5.8. students rated statements 1-5 (1-strongly disagree, 2-disagree, 3-neutral, 4-agree, 5-strongly agree). median scores were used. scoring showed improvements were made from last year in terms of the level of intern preparation for tutorials and, importantly, the students' own subjective view of their level of preparation for forthcoming examinations. most importantly, students agreed that tutorials improved their history taking skills and strongly agreed that their examination skills improved. matching feedback from the pilot study, students strongly agreed that intern-led teaching is an appropriate adjunct to the final year programme. of the 47 interns working in st james's hospital, 22 participated.
seventeen of these had received tutorials on the pilot programme. of the 25 that did not participate, many had never received formal intern teaching. the feedback obtained from the pilot study was invaluable in organising and delivering this teaching programme. ongoing improvements will be made for next year based on this audit. this also highlights that the intern-teaching tool is extremely beneficial, yet largely underused. key: cord-015947-kgyl052w authors: oommen, seema title: emerging respiratory pandemics date: 2016-02-22 journal: clinical pathways in emergency medicine doi: 10.1007/978-81-322-2710-6_45 sha: doc_id: 15947 cord_uid: kgyl052w since the identity of the respiratory pathogen is not known at the time of admission, emergency department personnel and intensive care staff are at the highest risk of exposure while handling such patients. swine flu is the popular name of the relatively new strain of influenza a/2009/h1n1 that caused a pandemic which began in mexico and spread rapidly from there across the world in 2009-2010. more than 600,000 laboratory-confirmed cases were reported from more than 200 countries worldwide as of march 2010 with a total of 18,449 deaths as reported by the world health organization (who) in august 2010 [ 2 ] . this is considered an under-representation of the actual numbers as many deaths were never tested or recognized as influenza related [ 2 ] . meanwhile new cases of h1n1 are being diagnosed worldwide including india in 2014. the who estimates a total of 676 laboratory-confirmed human cases of avian flu (h5n1) infection and 398 related deaths in 16 countries from 2003 to 2014 [ 3 ] . avian influenza viruses are divided into the high pathogenicity h5n1 virus with 100 % mortality in poultry and low pathogenicity h7n9 viruses not associated with severe disease in poultry. cases of h7n9 are reported mainly from the people's republic of china. the most recent respiratory illness was first reported in saudi arabia in 2012 and is of a new strain of coronavirus called the middle east respiratory syndrome coronavirus (mers-cov) that shares a genetic relatedness to a similar virus found in camels. by june 2014 there were around 699 laboratory-confirmed infections with 209 deaths [ 4 ] . the majority of these cases were from the middle east countries with few cases in the usa, europe, malaysia and the philippines in asia. thus the crude case fatality rate of h5n1 is highest at 60 %, followed by mers-cov (30 %), sars-cov (10 %) and lowest for h1n1 (0.5 %), the latter being most likely under-represented [ 1 -4 ] . adaptation of the viruses by mutation or reassortment leading to an ability to cross the species barrier into humans, the capability to become established in humans and a sustained ability to pass from one human to the other are the three features needed to cause an infectious disease of epidemic proportions. combine this with the increased mobility of individuals across the world, and transfer of these infections from one part of the globe to the other can take place in a relatively short period of time. direct lysis of the host cells is one of the mechanisms of host tissue damage. more important are the indirect consequences of the host immune response which gets disrupted, tipping the balance from being favourable to an exaggerated and destructive host immune response leading to an outpouring of pro-inflammatory chemokines and cytokines termed 'the cytokine storm' [ 5 ] .
cytokines like tumour necrosis factor alpha (tnf α), interleukin 6 (il-6), il-1β and il-8 play a major role in tissue damage [ 5 ] . of these, il-1β has been found to be the main cytokine in the broncho-alveolar lavage (bal) fluids of patients with lung injury [ 5 ] . the net result is local diffuse damage to the alveoli (acute lung injury, ali) due to increased arrival of leucocytes, dilatation of blood vessels and tissue oedema, and can swiftly progress to the more severe acute respiratory distress syndrome (ards). spillover of these cytokines into the systemic circulation leads to multisystem organ failure and finally death. there is a considerable overlap in the clinical presentation by the common respiratory viruses and other atypical causes of community-acquired pneumonia, making arousal of suspicion in the treating physician of an emerging epidemic virus unlikely. hence suitable samples may not be collected at the appropriate time, leading to misdiagnosis and a delay in the initiation of therapy. • the common clinical presentation [ 1 , 6 , 7 ] of most respiratory pandemic viruses is that of an 'influenza-like illness (ili)': an acute respiratory infection with sudden onset of fever (temperature of >38 °c or >100.4 °f), chills, myalgia and a non-productive cough. sore throat and rhinorrhoea may also be present. many cases are associated with gastrointestinal symptoms like abdominal pain and diarrhoea. • a history of contact, in the preceding 10 days of symptom onset, with poultry or with a known case in the countries detected to have human avian influenza cases has to be elicited. likewise, a history of travel to the middle east countries should arouse suspicion of a mers-cov infection. • cases may range from a mild ili to a fulminant viral pneumonia. rapid clinical deterioration may occur with diffuse viral pneumonitis with hypoxaemia, acute respiratory distress syndrome (ards), septic shock, multisystem organ failure and death occurring within a week of onset of illness [ 7 ] . • secondary bacterial pneumonia, especially due to staphylococcus aureus, s. pyogenes and s. pneumoniae, is a common complication with influenza [ 7 ] . • extremes of age; pregnancy; obesity; presence of pulmonary, cardiac, hepatic, renal or metabolic co-morbidities; and underlying neurological conditions are the common risk factors for severe disease [ 1 , 6 ] . • case fatality of h5n1 is much higher than for seasonal influenza viruses, with rapid clinical deterioration mainly due to early involvement of the lower respiratory tract. considering the rapid spread of the virus in the past within hospitals and the community, methods to rapidly identify infected cases are of utmost importance. • the real-time polymerase chain reaction (rt-pcr)-based assay is one such means, which has proven its worth during both the sars-cov and the h1n1 outbreaks. the only caveat is that appropriate clinical samples need to be collected at the appropriate time during the disease and should be transported to the laboratory in a cold chain in a viral transport medium so as to maintain the viability of the nucleic acid. • multiplex pcr can simultaneously detect other viruses causing a similar clinical picture, like the seasonal influenza a and b viruses, respiratory syncytial virus (rsv) and human metapneumovirus. • the most preferred specimen is a nasopharyngeal aspirate or a swab, preferably within 1-2 days of onset of disease [ 6 ] . cotton swabs are not recommended due to the presence of inhibitors; rayon- or nylon-flocked swabs are used instead.
broncho-alveolar lavage, tracheal aspirates and sputum, which contain the highest viral loads, are the ideal specimens, especially later in the course of illness [ 6 ] . • rt-pcr tests may be carried out on serum specimens. • in case of suspicion of mers-cov or sars-cov, stool specimens may also be sent to the laboratory. • many times these newer molecular assays may not be available even in established diagnostic molecular laboratories, and the specimen may have to be shipped under strict biohazard protocols to the national or a regional reference centre for testing. • rapid diagnostic tests available for the diagnosis of influenza have high specificity but low sensitivity, and hence a negative result should be interpreted with care [ 7 ] . • though viral cultures do not play a significant role in rapidly diagnosing cases, they are important in the confirmation of emerging and re-emerging cases of viral infection, epidemiological typing of isolates and research into vaccines and newer drugs. • clinical signs on examination are minimal when compared to the radiological findings of the chest. chest x-rays typically show diffuse interstitial infiltrates, unilateral or more commonly bilateral ground-glass opacities to focal consolidation that is seen early in the disease [ 1 , 6 , 7 ] . these opacities are usually seen in the lower lungs first and may become widespread, affecting larger areas as the disease progresses. high-resolution computed tomography may be required in ambiguous cases. • tests done to rule out other infectious aetiology include blood cultures, gram's stain and culture of the sputum, and urinary antigen detection for legionella and pneumococci. acute and convalescent serum samples may be collected for antibody detection of various pathogens. healthcare personnel should be on high alert in the present global climate for any clustering of similar cases. picking up a probable epidemic early in its course may limit the spread of the infection within the hospital and the community. treatment is largely supportive for uncomplicated cases, with bed rest and maintenance of hydration, in addition to analgesics and antipyretics. severe cases require supportive measures including ventilation and antibiotics for secondary bacterial infections. • oseltamivir and zanamivir [ 6 , 8 ] are neuraminidase inhibitors that decrease the release of influenza viruses from infected cells, thus limiting their spread. oseltamivir was used extensively in the 2009 h1n1 pandemic. resistance to oseltamivir has been documented [ 8 ] . best results were obtained when treatment was started within 48 h of symptom onset, even before the availability of laboratory results. • the dosage of oseltamivir for persons above 13 years of age and >40 kg weight is 75 mg twice daily for a duration of 5 days. • for children <15 kg, the dose of oseltamivir is 30 mg twice a day; for 15-23 kg it is 45 mg twice a day; and for >23-40 kg it is 60 mg twice daily. • zanamivir is advised for persons above 5 years of age at a dose of 10 mg (two inhalations) twice a day. oseltamivir is the drug of choice to treat human cases of avian influenza. • unlike influenza, there is no specific antiviral or vaccine available for the coronaviruses. a combination of ribavirin and interferon 1α shows synergistic action in vivo and has been used to treat mers-cov and sars-cov infections, but limited data are available on their effectiveness to combat the disease and clinical trials are needed to demonstrate their effectiveness [ 1 ] .
• steroids were used during the sars outbreak to limit the cytokine-mediated lung injury in conjunction with ribavirin, but the actual role of steroids has to be elucidated with further studies. steroids are contraindicated in cases of influenza pneumonia as they may further predispose to secondary bacterial infection. • the incubation period for most influenza viruses, including h1n1, is 1-4 days [ 7 ] , whereas the incubation period for h5n1 is slightly longer, ranging from 2 to 8 days. • patients with influenza are most infectious in the first 2 days of the onset of illness, averaging from a day before the onset of symptoms to 5-7 days after the onset of illness. • the incubation period of coronaviruses like sars-cov and mers-cov is around 2-14 days. in contrast to influenza cases, they transmit the virus usually after the fifth day of the onset of disease, when the viral load maximizes in the nasopharyngeal secretions [ 1 ] . 1. immunoprophylaxis [ 9 ] : exists currently only for influenza. it is advised by the advisory committee on immunization practices (acip) that all persons over 6 months of age be vaccinated annually against the predicted influenza strains which are most likely to cause infections in the next influenza season based on surveillance data. it is available as an annual influenza vaccine incorporating three or four live attenuated or inactivated influenza strains. it is available for administration as nasal sprays (live attenuated vaccine) and by the intramuscular or intradermal route (killed vaccine). 2. chemoprophylaxis [ 7 ] : • oseltamivir and zanamivir (neuraminidase inhibitors) are active against both influenza a and b viruses. chemoprophylaxis is indicated in exposed unvaccinated immunocompromised persons or people with co-morbid conditions who are at a high risk of developing influenza. • oseltamivir should be given within 1 day after exposure, at a dose of 75 mg once daily for persons aged 13 years and above, for a minimum of 10 days after exposure to a recent contact with a known case of influenza. • zanamivir is prescribed at two inhalations once daily for people above 5 years of age. if the exposed person develops respiratory symptoms, he should be given treatment doses of the drug. • in case of h5n1, close contacts of strongly suspected cases of human avian influenza and personnel handling infected poultry are advised oseltamivir as chemoprophylaxis [ 10 ] . 3. standard contact, droplet and airborne precautions [ 11 ] : oftentimes, emergence of an infection of pandemic potential is not routinely expected by physicians and staff in their regular day's work. but going by past experience, especially in the case of sars-cov, healthcare personnel were the ones at high risk of infection given the close proximity to the patient. hence it is important that all staff follow the standard contact and droplet precautions for any case suspected to have a respiratory infection. • contact and droplet precautions include wearing of personal protective equipment (ppe) such as gloves, gowns, and eye and face shields. • there is special emphasis on hand hygiene, which must be diligently performed before and after contact with the patient and the potentially infectious material generated by him, and before wearing and after removing ppe. • airborne precautions include placement of patients in an airborne infection isolation room (aiir) and wearing of n95 or greater respirators and masks.
airborne transmission is especially possible while suctioning a ventilated patient, bronchoscopy, sputum induction, intubation and extubation, and cardiopulmonary resuscitation. • pending placement of the patient in the aiir, a face mask must be placed on the patient and the patient isolated in a single room to prevent spread of infection. • environmental infection control must be followed per hospital infection control policy, using a suitable disinfectant for disinfection and for the collection, transport and treatment of all infectious waste generated. clinical pathway: a patient presenting with an 'influenza-like illness' and a chest radiograph showing signs of pneumonia necessitating hospitalization: inform district health authorities. always maintain personal protection: contact, droplet and airborne precautions to be practised. elicit travel history to middle eastern countries for mers-cov and/or history of contact with poultry for h5n1. physicians and staff to be alert to clustering of similar cases in the recent past. collect appropriate specimens to confirm diagnosis and to rule out alternative diagnoses. severe acute respiratory syndrome coronavirus as an agent of emerging and reemerging infection influenza at the human-animal interface. summary and assessment as of 4 middle east respiratory syndrome coronavirus (mers-cov) summary and literature update as of 11 into the eye of the cytokine storm two years after pandemic influenza a/2009/h1n1: what have we learned? writing committee of the who consultation on clinical aspects of pandemic (h1n1) clinical aspects of pandemic 2009 influenza a (h1n1) virus infection antiviral agents for the treatment and chemoprophylaxis of influenza. recommendations from the advisory committee on immunization practices (acip) prevention and control of seasonal influenza with vaccines who rapid advice guidelines on pharmacological management of humans infected with avian influenza a(h5n1) virus. available at guideline for isolation precautions: preventing transmission of infectious agents in healthcare settings key: cord-236070-yao5v598 authors: carneiro, carlos b.; ferreira, i'uri h.; medeiros, marcelo c.; pires, henrique f.; zilberman, eduardo title: lockdown effects in us states: an artificial counterfactual approach date: 2020-09-28 journal: nan doi: nan sha: doc_id: 236070 cord_uid: yao5v598 we adopt an artificial counterfactual approach to assess the impact of lockdowns on the short-run evolution of the number of cases and deaths in some us states. to do so, we explore the different timing in which us states adopted lockdown policies, and divide them among treated and control groups. for each treated state, we construct an artificial counterfactual. on average, and in the very short-run, the counterfactual accumulated number of cases would be two times larger if lockdown policies were not implemented. the evolution of the covid-19 pandemic has been posing several challenges to policymakers. decisions have to be made in a timely fashion, without much undisputed evidence to support them. being a new disease, and despite the enormous research effort to understand it, estimates of the transmission, recovery and death rates remain uncertain. nevertheless, these are key pieces of information to assess potential pressures on the health system capacity, as well as the need for a lockdown policy and its intensity if implemented. not surprisingly, similar regions have implemented different strategies regarding lockdowns.
the leading example in the media is the looser social distancing policy in sweden versus strict policies in its scandinavian peers. by informally comparing the evolution of the pandemics in sweden and denmark (or norway), many commentators argue that several covid-19 cases and deaths in sweden would be avoided in the short-run were a strict lockdown in place. 1 aiming to provide a quantitative assessment on the short-run effects of lockdowns, this paper takes this exercise seriously in the context of us states. given that the timing us states adopted lockdown policies differs among them, we adopt techniques based on synthetic control (sc) approach of abadie and gardeazabal [2003] and abadie et al. [2010] to assess the impact of lockdowns on the short-run evolution of the number of cases and deaths in the treated us states. 2 more specifically, we consider an extension of the original sc method called artificial counterfactual (arco) which was put forward by carvalho et al. [2018] . due to the nonstationary nature of the data, the correction of masini and medeiros [2019] is necessary. our results point to a substantial short-run taming of the cumulative number cases due to the adoption of lockdown policies. on average, for treated states, the counterfactual accumulated number of cases, according to the method adopted here, would be two times larger were lockdown policies not implemented. a key feature of our approach is that it is purely data-driven. in the beginning of the crisis, the majority of papers written by economists to evaluate the effectiveness of lockdowns relied on epidemiological models for analysis, including the most recent ones that incorporate behavioral responses. 3 these models are hard to discipline quantitatively. many calibrated parameters remain uncertain, 4 and models that incorporate behavioral responses need time to mature and agree on a reliable set of ingredients and moments to be matched. model-free approaches like ours or medeiros et al. [2020] should complement policy discussions or forecasting exercises based on those models, especially from a quantitative point of view. there are related papers using state or county level us data. 5 at least one of them, , uses a synthetic control approach but it is restricted solely to california. other papers, such as brzezinski et al. [2020] , and sears et al. [2020] , use variations in the timing of statewide adoption of containment policies, and difference-in-differences models to document substantial reductions in mobility and improvements of health outcomes. the key identification assumption in these papers is that variations in the timing are random after controlling for covariates. brzezinski et al. [2020] also consider an instrumental-variable approach. fowler et al. [2020] and grassi and j. sauvagnat [2020] follow similar empirical strategies but at county level, and also find substantial reductions in cases and fatalities in counties that adopted stay-at-home orders and state-mandated business closures, respectively. our analysis, that rests on alternative identification assumption and method, should be seen as complementary. as the pandemic evolves, and more data become available, we expect more related empirical evidence to be consolidated. the paper is organized as follows. section 2 describes the data, while section 3 presents the empirical strategy. the results are discussed in section 4. finally, section 5 concludes the paper. additional results are included in the appendix. 
data on covid-19 (confirmed) cases are obtained from the repository at the johns hopkins university center for systems science and engineering (jhu csse). we consider the cumulative cases for a subset of the 50 us states and the district of columbia. instead of using the chronological time across the states, we consider the epidemiological time, which means that the day one in a given state is the day that the first covid-19 case was confirmed there. the econometric approach adopted here relies on the fact that some states adopted a lockdown strategy (the treatment), whereas others did not adopt social distancing measures (control group) and are used to construct the counterfactual. 6 lockdown strategies include a mix of state-wide non-pharmaceutical measures aiming to limit social interactions, such as restrictions on nonessential activities and requirements that residents stay at home. containment policies. 4 see, for example, atkeson [2020a] on the uncertainty regarding estimates of the fatality rate. 5 there are also related papers for other countries. for example, fang et al. [2020] for china. 6 the timing of those policies at each state were obtained, and double checked, in several press articles, e.g., https://www.businessinsider.com/us-map-stay-at-home-orders-lockdowns-2020-3 and https://www.nytimes.com/interactive/2020/us/coronavirus-stay-at-home-order.html. in this section, we describe how we assign states to control and treatment groups, and then, describe the method used to construct the counterfactuals. aiming to balance control and treatment states, and at the same time obtain enough observations to estimate properly the model before the lockdown policy was implemented, we divide us states into three groups. for a state to be included in the analysis, a state-wide lockdown policy must be established at least twenty days after the first case. we assume that whenever an individual becomes infected, it takes an average of ten days to show up as a confirmed case in the statistics. 7 hence, the in-sample period used to estimate the synthetic control ("before" the lockdown policy) for each treated state (to be defined below) is the number of days between the tenth day after the first confirmed case and the tenth day after the lockdown strategy was implemented. we choose to start the in-sample from the tenth day as a way to smooth the initial volatility of the data. we adopt a criteria that a state must have at least twenty observations in the in-sample period to be included in the analysis. this criteria excludes states that adopted a state-wide lockdown strategy too early, such as connecticut, new jersey, ohio, among others. these are the unmarked states in table 1 , which reports the dates of the first case and lockdown policy, as well the difference in days between them, and also helps visualize the three groups of states. the remaining states must be divided into treated and control groups. the idea is to find a synthetic control for each of the treated states. the group of potential controls should consist of states that adopted a lockdown policy too late (or never adopted), such that counterfactuals are not contaminated by lockdown policies implemented in those states. at the same time, and for a similar reasoning, the lockdown strategies adopted in treated states must be in place during the period of analysis. 8 7 this assumption is motivated by the incubation period of the virus. according to the world health organization, the "[...] 
the incubation period for covid-19, which is the time between exposure to the virus (becoming infected) and symptom onset, is on average 5-6 days, however can be up to 14 days." see https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200402-sitrep-73-covid-19.pdf. 8 in appendix a.1, table a.1 shows the reopen dates for the treated states. fortunately, there are horizons that can balance both goals: enough states to build the synthetic controls and a relatively extensive period to construct the counterfactuals. in particular, we restrict the analysis up to the 58th epidemiological day. this figure accommodates at least ten control states to build the synthetic controls, and at the same time it maximizes the out-of-sample days to run the counterfactuals. in this sense, our analysis concerns the very short-run impact of lockdowns, up to nearly three weeks. the treated states are marked in blue in table 1, and include twenty states: alabama, colorado, florida, georgia, kansas, kentucky, maine, maryland, mississippi, missouri, nevada, new hampshire, new york, north carolina, oregon, pennsylvania, rhode island, south carolina, tennessee, and texas. the potential control states are marked in red, and include ten states: arizona, arkansas, california, illinois, iowa, massachusetts, nebraska, north dakota, south dakota, and washington. nonetheless, due to the lack of variation within the in-sample period, we exclude four states from this control pool, as we explain below. importantly, oklahoma, utah and wyoming only implemented partial lockdowns (not reported in the table). therefore, they are hard to classify as either treated or control states. we opt to exclude them from the analysis. figure 1 illustrates the empirical strategy, which is formalized in the next subsection. it plots the evolution of (log) cumulative cases along the epidemiological time. the first vertical dashed line represents the tenth day after the first confirmed case. the in-sample period is represented in between the first and second vertical dashed lines, which mark the tenth day and the following twenty days, respectively. similarly, the out-of-sample period is in between the second and third vertical dashed lines, which mark the 31st and 58th epidemiological days, respectively. blue lines represent the treated states, whereas the red ones represent the potential control states. the turning points from blue full to dashed lines represent the days lockdowns were implemented (plus ten days) in treated states. note that new york is clearly an outlier among the treated states, exhibiting a huge number of cases (more on that below). we use the red lines to build synthetic controls for each full blue line up to the turning point, and then construct counterfactuals by simulating the synthetic controls forward up to the 58th day. the idea is to compare them with the blue dashed lines that capture actual cases. as figure 1 highlights, some states display a lack of variation within the in-sample period. just to give an example, washington had had only one confirmed case for the first 36 days since its first confirmed covid-19 infection. hence, we exclude it from the control group. for similar reasons, we also exclude arizona, illinois, and massachusetts from the control pool. the analysis ended up relying on six control states.
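before turning to the estimation itself, the sample-split rules just described can be written down compactly. the following python sketch is purely illustrative and not the authors' code: the function names, the optional-date handling and the exact threshold checks are our own rendering of the criteria in the text (a lockdown at least twenty days after the first case, an assumed ten-day reporting lag, and a 58-day horizon); states with no lockdown, or a very late one, form the pool of potential controls.

```python
from typing import Optional
import pandas as pd

# illustrative sketch (not the authors' code) of the sample-split rules
# described in the text above.

REPORTING_LAG = 10   # assumed days between infection and confirmed case
HORIZON = 58         # last epidemiological day used in the analysis


def classify_state(first_case: pd.Timestamp,
                   lockdown: Optional[pd.Timestamp]) -> str:
    """assign a state to the treated group, the potential-control pool,
    or exclude it, following the criteria stated in the text."""
    if lockdown is None:
        return "potential control"
    gap = (lockdown - first_case).days
    if gap < 20:
        return "excluded (lockdown too early)"
    if gap + REPORTING_LAG >= HORIZON:
        return "potential control"   # lockdown too late to contaminate the horizon
    return "treated"


def in_sample_days(gap: int) -> range:
    """epidemiological days used to fit the counterfactual of a treated state:
    from day 10 (to smooth early volatility) to the lockdown day plus ten."""
    t0 = gap + 1 + REPORTING_LAG     # epi day of the lockdown (gap + 1) plus ten
    return range(10, t0 + 1)
```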
we propose a two-step approach using the artificial counterfactual (arco) method introduced by carvalho et al. [2018] with the correction of masini and medeiros [2019] to estimate the number of cases for each us state. let $t = 10, 11, \ldots, 58$ denote the number of days after the first confirmed case of covid-19 in a given state. define $y_t$ as the natural logarithm of the number of confirmed cases $t$ days after the first case of the disease in this specific treated state, and $\boldsymbol{x}_t$ as a vector containing the logarithm of the number of reported cases for the control states also $t$ days after the first case has been reported, as well as a logarithmic trend, $\log(t)$. the inclusion of the trend is important to capture the shape of the curve. the model is estimated as follows. we use the weighted least absolute shrinkage and selection operator (wlasso) as described in masini and medeiros [2019] to select the control states that will be used to estimate our counterfactual. the goal of the lasso is to balance the trade-off between bias and variance, and it is a useful tool to select the relevant peers in an environment with very few data points: $(\widehat{\delta}, \widehat{\boldsymbol{\beta}}) = \arg\min_{\delta, \boldsymbol{\beta}} \sum_{t=10}^{T_0} \big(y_t - \delta - \boldsymbol{\beta}'\boldsymbol{x}_t\big)^2 + \lambda \sum_{j=1}^{p} w_j |\beta_j|$ (1), where $w_j = |x_{j,T_0}|$, $j = 1, \ldots, p-1$, and $w_p = 1$. $T_0$ is, for each state, the number of days from the first reported case until the lockdown plus ten extra days, and $\lambda > 0$ is the penalty parameter, which is selected by the bayesian information criterion (bic), in accordance with medeiros and mendes [2016] . the weight correction in the lasso is necessary in order to control for the nonstationarity of the data; see masini and medeiros [2019] for a detailed discussion. the counterfactual for $t = T_0 + 1, \ldots, 58$ is computed as $\widehat{y}_t = \widehat{\delta} + \widehat{\boldsymbol{\beta}}'\boldsymbol{x}_t$ (2). we also report 95% confidence intervals based on the resampling procedure proposed in masini and medeiros [2019] . we are interested in examining the effects of lockdown policies not only on the number of cases, but also on the number of deaths. however, we cannot implement the strategy described above because there is not enough variation in deaths for the in-sample period. some states, for instance, implemented state-wide lockdown policies before the first confirmed death. thus, we propose an alternative method. we consider a counterfactual for the number of deaths in each treated state based on the counterfactual estimated for the number of cases. this is not as straightforward as in the traditional synthetic control method because the arco methodology described above includes an intercept in the estimation, which is measured in the log of the number of cases, and is not only a convex combination of other states. intuitively, the methodology described above chooses a combination of states that is at a fixed distance from the treated unit in the in-sample period, and not a convex combination of states that matches exactly the actual number of cases. the intercept controls for all time-invariant characteristics that define the counterfactual. then, we proceed as follows. let $d_{i,t}$ be the number of accumulated deaths in state $i$ at day $t$. also, let $\widehat{\boldsymbol{\beta}}_i$ be the vector of estimated coefficients for state $i$ as in (1), used to construct the counterfactual for cases. in addition, let $\boldsymbol{d}_t$ be the vector of the number of deaths for all states in the control pool at time $t$. we define the counterfactual number of deaths in that state as $\widehat{d}_{i,t} = \widehat{\boldsymbol{\beta}}_i'\boldsymbol{d}_t - \widehat{\boldsymbol{\beta}}_i'\boldsymbol{d}_{\bar{t}} + d_{i,\bar{t}}$ (3), where $\bar{t}$ is the day that state $i$ implemented its lockdown policy. that is, we maintain the weights estimated above and adjust the intercept so that the counterfactual series for deaths matches the number of actual observed deaths at the beginning of the quarantine. for the sake of exposition, we relegate the results on cumulative deaths to appendix a.4.
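to make the estimation step concrete, the sketch below shows one way to implement equations (1) and (2) in python. it is an illustration under stated assumptions rather than the authors' implementation: the penalty weights $w_j$ are absorbed by rescaling each regressor (penalising $w_j|\beta_j|$ is equivalent to running an ordinary lasso on $x_j / w_j$), and the bic choice of $\lambda$ is delegated to scikit-learn's lassolarsic; all names are ours.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

# illustrative implementation of the wlasso step and the counterfactual path;
# a sketch under the assumptions stated in the lead-in, not the authors' code.


def arco_counterfactual(y_log, x_log_controls, t0):
    """y_log: (T,) log cumulative cases of the treated state, epi days 10..58.
    x_log_controls: (T, p-1) log cumulative cases of the control-pool states.
    t0: number of in-sample observations (up to the lockdown day plus ten)."""
    T = len(y_log)
    trend = np.log(np.arange(10, 10 + T)).reshape(-1, 1)   # logarithmic trend
    X = np.hstack([x_log_controls, trend])                  # p regressors

    # penalty weights: |x_{j,T0}| for each control series, 1 for the trend
    w = np.abs(X[t0 - 1, :]).copy()
    w[-1] = 1.0
    w[w == 0] = 1.0                # guard against division by zero
    Xw = X / w                     # absorb the weights into the design matrix

    model = LassoLarsIC(criterion="bic")   # unpenalised intercept, bic-chosen lambda
    model.fit(Xw[:t0], y_log[:t0])         # fit on the in-sample window only

    y_hat = model.predict(Xw)              # counterfactual path for all days
    beta = model.coef_ / w                 # coefficients on the original scale
    return y_hat, model.intercept_, beta
```

in this rendering, the out-of-sample portion of the returned path (positions from t0 onwards) plays the role of the "without lockdown" series that is compared with the actual data in the figures.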
10 to illustrate how the method works, figure 2 presents the arco counterfactuals for the states of alabama, colorado, and maine. the timing of the policy intervention ( 0 + 10) corresponds to the lockdown date plus ten days. the counterfactual analysis makes it clear the importance of lockdown policies in mitigating the acceleration of the number of covid-19 confirmed cases in the treated states. as shown in figure 2a , for example, our results point to a substantial increase in the number of cases in alabama if it had not adopted an early lockdown. similarly, figures 2b and 2c reveal the same behavior for the cumulative curves in the other selected states. counterfactuals are constructed with the estimated weights and cumulative cases of the six states that compose the control group. these weights are reported in table a .2 in appendix a.2. in appendix a.3, we present similar counterfactual plots for the remaining treated states. in order to assure that the proposed methodology is producing proper counterfactual analysis, we generate placebo results by producing a "synthetic control" for each control state using the remaining control states as control pool. results are displayed in figure 3 , which shows the ratio of the estimated counterfactual cumulative cases to the actual ones for treated states except new york (black lines), and non-treated states (red lines). we assume that the epidemiological day of the placebo intervention is 0 = 36, marked by the vertical dashed line, which is the median (and the mean) timing of the policy interventions in the treated states. it is reassuring that for half of the placebo counterfactuals the ratios fluctuate around one, whereas for the majority of treated states ratios grew above one at some point (likely around the actual timing of policy intervention). the latter result means that lockdown policies were effective to tame the spread of the virus, whereas the former suggests that results are not driven by chance. regarding south dakota, the only placebo counterfactual that reached a ratio well above one, by using google mobility data (described in appendix a.5), we show that mobility in residential areas increased whereas mobility in outdoor areas decreased substantially once compared to the period before the pandemic (see figures a.41 and a.42 in appendix a.5) . this is suggestive that south dakota's population endogenously decided to stay more at home, and avoided environments prone to the risk of contamination. at the time, a proper lockdown policy was not necessary, and south dakota's non-conformity to the placebo test does not seem to invalidate our approach. in contrast, for nebraska and california, the counterfactuals are pointing to a smaller number of cases than the actual ones, which goes against finding that lockdowns were effective to reduce cases of covid-19. the case of california is quite emblematic, as the number of cases during the estimation window remained very small and with very low variation. however, the number of cases started to grow at a fast rate much after the cut-off date. the state of nebraska displays a similar pattern. to gauge the quantitative impact of lockdown policies, for each state, whether treated or control used as placebo, we compute the ratio of the counterfactual estimated comulative cases ("without" a lockdown strategy in place) to actual ones on the 58th epidemiological day, which is the last day used to compute the counterfactual. 
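the placebo exercise and the day-58 ratios described above can be sketched as follows, reusing the arco_counterfactual helper from the previous sketch; the cut-off at epi day 36 follows the text, while the data layout is an assumption.

```python
# Sketch of the placebo exercise: pretend each control state was "treated" at
# epi day 36, build its synthetic control from the remaining controls, and track
# the ratio of counterfactual to actual cumulative cases (close to one if the
# placebo has no effect; well above one signals a real slowdown for treated states).
import numpy as np

def placebo_ratios(log_cases, epi_days, controls, t0_placebo=36):
    """log_cases: dict state -> array of log cumulative cases over epi_days."""
    ratios = {}
    for state in controls:
        donors = [s for s in controls if s != state]
        X = np.column_stack([log_cases[s] for s in donors])
        cf, _ = arco_counterfactual(log_cases[state], X, epi_days, t0_placebo)
        ratios[state] = cf / np.exp(log_cases[state])
    return ratios

def ratio_at_day(counterfactual, actual, epi_days, day=58):
    return counterfactual[epi_days == day][0] / actual[epi_days == day][0]
```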
table 2 reports the mean and median of the ratios across states, whereas table a .3 in appendix a.2 reports these ratios for each state. the first row corresponds the case in which controls are used as placebos, whereas the second considers the treated states only. as we discuss below, new york is clearly an outlier, whose ratio reached an implausible value of 16.5 as reported in table a .3. hence, our preferred specification is displayed in the third row which excludes new york from the pool of treated states. we also compute other two versions of these ratios using the lower bound (lb) and upper bound (up) of the 95% confidence interval in the numerator. the ratios are clearly above one for the treated units, whether new york is excluded or not. according to our preferred specification, counterfactual estimates suggest that the number of cases would be nearly two times larger were lockdown policies absent. again, it is reassuring that among the controls used as placebo, these average ratios remain around one. regarding the effects of lockdowns on cumulative deaths, we present the results for all treated states in appendix a.4. for some states, the counterfactual cumulative deaths exhibit similar patterns to those regarding cumulative cases. but, for many other states, they are not statistically significant at least for the first days after the policy implementation. one possible explanation is that there is a delay between cases and deaths, as the latter is a consequence of the former. hence, deaths only show up in the official statistics days after cases. perhaps, if we could estimate counterfactuals for longer periods, the synthetic accumulated deaths would further decouple from the actual ones. in addition, since weights on the controls are estimated considering the (log) cumulative number of cases, the counterfactuals for cumulative deaths are arguably noisier. 11 as discussed above and presented in table a .3 in appendix a.2, we obtain an implausible ratio (of counterfactuals to actual cumulative cases) of 16.5 to new york. this section puts a lens on this state. in particular, figure 4 displays the estimated cumulative number of cases for new york "without" lockdown, as well as extrapolations of the cumulative number of cases based on the mean and median growth rate of the last ten days of the in-sample period. as reported in table 1 , among the treated states, new york was the fastest one to react to the pandemic, and established a state-wide lockdown policy only 20 days after the first case. figure 4 extrapolates the last in-sample observations by using both the observed mean and median growth rates for the last ten days, which yields a similar pattern to the result obtained by applying the synthetic control approach. due to the progression of the virus, particularly in new york city, the in-sample observed rates are quite high once compared to other states as illustrated in figure 1 , which can be explained not only by the dynamics of the city but also by its high population density. hence, new york is clearly an outlier and might not be amenable to our synthetic control approach, which justifies reporting results excluding new york above. in this paper, as opposed to most of the early and incipient literature on the lockdown effects during the covid-19 crisis, we conisder a purely data-driven approach to assess the impact of lockdowns on the short-run evolution of the number of cases and deaths in some us states. 
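the simple growth-rate extrapolation used above as a sanity check for new york can be sketched as follows; the 1-based epi-day indexing of the input array is an assumption.

```python
# Sketch: grow the last in-sample observation forward at the mean (or median)
# daily growth factor observed over the final ten in-sample days.
import numpy as np

def extrapolate(cum_cases, t0, t_end=58, how="mean"):
    """cum_cases: cumulative cases indexed by epi day (cum_cases[0] is day 1)."""
    window = cum_cases[t0 - 10:t0]                  # last ten in-sample days
    rates = window[1:] / window[:-1]                # daily growth factors
    g = np.mean(rates) if how == "mean" else np.median(rates)
    horizon = np.arange(1, t_end - t0 + 1)          # days t0+1 ... t_end
    return cum_cases[t0 - 1] * g ** horizon         # projected cumulative cases
```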
also, as opposed to some recent papers that use a difference-in-difference approach, we adopt a variant of the synthetic control approach, arco, due to carvalho et al. [2018] and masini and medeiros [2019] . on average, according to the synthetic controls, the counterfactual accumulated number of cases would be two times larger were lockdown policies not implemented in treated states. in the first two columns of table a .1 we show the date of the first confirmed case in every treated state we analyze and its reopen date (plus ten days), whenever available at the time we started to circulate this paper. 12 in the third column, we show the difference (in days) from the first confirmed case and the reopen date plus ten days. these figures illustrate why we had to limit our sample size to only 58 epidemiological days. for example, if we had used 60 days in our analysis, we would have to exclude alabama and maine from our treated states, given that they would not be in a state-wide lockdown in the last days of the out-of-sample period. we report in the first seven rows of table a .2 the coefficients estimated by the lasso model for each treated state for the 0 +10 in-sample period. the last two rows display the mean and the median (across the out-of-sample period) of the ratio between the actual cumulative cases and the counterfactual cases for every state. with only two exceptions (missouri and nevada), every state has an out-of-sample mean and median of the observed-to-predicted ratio below one. this means that, on average, the realized cumulative cases were smaller than the counterfactual, which highlight that lockdowns had a meaningful impact on slowing down the covid-19 spread in these states. table a .3 reports the ratio of the counterfactual cumulative cases to the actual ones on the 58th day after the first confirmed case in each state. it also reports the lower and upper limits of the 95% confidence interval. among the 20 treated states, the ratio is larger than one in 18 of them. for missouri and nevada, there is no evidence on the effectiveness of lockdown policies. for mississipi and south carolina, the impacts of lockdowns are only modest. note that new york is clearly an outlier, with such ratio around 16.5. we discuss this case in the main text. in contrast, among the non-treated states, we obtain ratios close to one for three out of six cases. we assume that the cut-off of the placebo intervention is 0 = 36, which is the median (and the mean) timing of the policy interventions in the treated states. as discussed in the main text, south dakota, which displays a ratio well above one, experienced a large reduction in outside mobility even without official lockdown measures. california and nebraska, which display ratios below one, had very few covid-19 confirmed cases during the period before the cut-off. in this section, in figures a.21 -a.40, we report the counterfactual estimates for cumulative deaths based on the methodology described in section 3.3. as we discuss in the main text, although for some states, the counterfactuals exhibit similar shapes to those regarding cumulative cases, for many other states, they are not statistically significant at least for the first days after the policy implementation. we know that lockdowns affect the covid-19 dynamics by imposing social distancing and mobility restrictions. to help understand the results described in this paper, we analyze the mobility data available at google mobility reports (https://www.google.com/covid19/mobility/). 
google mobility data show how visits and length of stay at different places change compared to a baseline, before the outbreak of the pandemic. in particular, the baseline is the median value, for the corresponding day of the week, during the five weeks between january 3rd and february 6th 2020. in order to understand how the population in each group (treated and control states) is behaving during the covid-19 crisis, we compute the median of mobility changes across our sample period, i.e. the 48 days following the tenth day after the first confirmed case in each state. also, the data concern mobility changes for six categories, being five of them related to outdoor activities. namely, grocery & pharmacy, transit stations, parks, retail & recreation, and workplaces. the remaining one concerns indoor activities, namely, residential. hence, to capture an idea of outdoor mobility changes, we aggregate the aforementioned five categories into a single one defined as the median of the original five categories. in contrast, mobility changes in residential areas capture indoor mobility changes. the two boxplots in figures a.41 and a.42 present the median of mobility changes in all analyzed states both in residential and in outdoor areas, respectively. we report results for treated and control states separately. regarding mobility changes in residential areas, on average, residents from every state analyzed spent more time in these areas after the pandemic outbreak. however, those from treated states spent even more time indoor. nevertheless, there are outliers. for instance, residents from south dakota spent a lot more time in residential areas than before the pandemic, which helps understand the results found for this state in the placebo test. we found similar results for mobility changes in outdoor areas. clearly, residents from treated states remained in outside areas less often than residents from controls (always compared to the period before the pandemic). in new york, for example, there was a 50% decrease of outdoor mobility. once more, south dakota is an outlier for the control group, reinforcing the thesis that its population voluntarily decided to stay more at home. indeed, residents from south dakota spent almost 20% less time in outside areas, while those from the median state for the control group spent nearly 8% less. the economic costs of conflict: a case study of the basque country synthetic control methods for comparative case studies: estimating the effect of california's tobacco control program pandemic, shutdown and consumer spending: lessons from scandinavian policy responses to covid. working paper, university of copenhagen a simple planning problem for covid-19 lockdown how deadly is covid-19? understanding the difficulties with estimation of its fatality rate what will be the economic impact of covid-19 in the us? rough estimates of disease scenarios an seir infectious disease model with testing and conditional quarantine covid-19 infection externalities: trading off lives vs. livelihoods. working paper 27009 the covid-19 pandemic: government vs. community action across the united states. working paper arco: an artificial counterfactual approach for high-dimensional panel time-series data when do shelter-in-place orders fight covid-19 best? 
policy heterogeneity across states and adoption time the macroeconomics of epidemics human mobility restrictions and the spread of the novel coronavirus (2019-ncov) in china the effect of stay-at-home orders on covid-19 cases and fatalities in the united states. working paper did california's shelter-in-place order work? early coronavirus-related public health effects costs and benefits of closing businesses in a pandemic optimal mitigation policies in a pandemic: social distancing and working from home the effect of social distancing measures on intensive care occupancy: evidence on covid-19 in scandinavia. working paper counterfactual analysis with artificial controls: inference, high dimensions and nonstationarity ℓ 1 -regularization of high-dimensional time-series models with nongaussian and heteroskedastic innovations short-term covid-19 forecast for latecomers. working paper villas-boas. are we #stayinghome to flatten the curve? working paper key: cord-030870-ao5p3ra3 authors: paul, suman; bhattacharya, subhasis; mandal, buddhadev; haldar, subrata; mandal, somnath; kundu, sanjit; biswas, anupam title: dynamics and risk assessment of sars-cov-2 in urban areas: a geographical assessment on kolkata municipal corporation, india date: 2020-08-25 journal: spat doi: 10.1007/s41324-020-00354-6 sha: doc_id: 30870 cord_uid: ao5p3ra3 sars-cov-2 has been transmitted and outbreak took place in india during the last week before nationwide 1st lockdown took place. urban areas found more vulnerable and reported nearly 65% of cases during every phase of lockdown. mumbai, among four metropolitan cities found huge number of containment zones with nearly 30% of sars-cov-2 cases indicating clustering of cases. most of the containment zones of sars-cov-2 cases in kolkata municipal corporation found a significant relation with slum areas. the study primarily tries considering the nature of sars-cov-2 cases in different urban centres with the help of cartographic techniques. ahp method has been used to determine the factors responsible for such concentration of sars-cov-2 cases with vulnerability assessment (exposure, sensitivity and resilience) and risks. before nationwide lockdown starts, the share of urban centres found 25% which has been transformed into nearly 60% at the end of 3(rd) phase of lockdown. growth rate of sars-cov-2 cases found very high for chennai and thane with less number of doubling time to nation. slum concentration and containment density shows a higher degree of correlation in kolkata municipal corporation. risk map also shows the concentration of cases in central and north kolkata with higher degree of diseases exposure and sensitivity. control measures must be taken by the central and state government to minimise the transmission rate of sars-cov-2 mainly urban areas. as urban area contributing a higher share of sars-cov-2 cases, a proper management plan must be enforce. electronic supplementary material: the online version of this article (10.1007/s41324-020-00354-6) contains supplementary material, which is available to authorized users. in the present day context urbanisation becomes a major driver of demographic change of an area. according to united nations report, world's population living in the urban areas has grown from 43 to 55% during 1990 to 2015 [1, 2] . by 2050, it has been estimated that the world's 70% population will be reside in urban areas. 
this kind of urban growth and population concentration leads to sprawling and shanty development outside and within the city respectively. high population density, low per capita spacing, concentration of urban poor make significant impact on the epidemiology of the infectious diseases electronic supplementary material the online version of this article (https://doi.org/10.1007/s41324-020-00354-6) contains supplementary material, which is available to authorized users. [3, 4] . association between urban poor and risk of pathogen transmission is very high in this scenario. high human to human propagation can easily be spike with such vulnerable condition [5] . presently, more than 900 million populations in the world are living in slums whereas the figure of india is nearly 104 million [6] . in 1990, worldwide figure of slum populations were nearly 689 million which has been booming to 792 million in next 10 years i.e. in 2000 [7] . in the current worldwide pandemic situation of sars-cov-2, growth pattern, transmission nature and driving factors are the key aspects need to study. december 31, 2019 an outbreak of covid-19 (as known earlier) has been reported from wuhan city of hubei province in mainland china and rapidly spread into the other provinces of china along with 24 countries within end of january, 2020 [8] . wuhan city has been under full lockdown (travel ban and closure of everything except essential services) from 23 january 2020 [9] . but the decision has been taken by the government was too late as the by novel coronavirus (ncov-19) infections has already transmitted in different parts of mainland china and also in the different countries of the world. this episode is highly correlated with the chinese great migration during the january-february when near about 415 million people are moving towards mainland china (within the country and from outside the country) to celebrate their lunar new year. from mid-february countries of european continent, u.s.a, australia facing a terrible spike of sars-cov-2 cases which has been not affected india at a large till the end of march, 2020. nearly 4.9 million populations has been affected by sars-cov-2 and the fatality has reached into 0.32 million (as on 19-05-2020). several countries like united states, russia, brazil, italy, france, united kingdom, germany, turkey, iran has faced a major setback due the this pandemic [10, 11] . india has also reached the mark of 100,000 confirmed sars-cov-2 cases after 64 days of first case found. though the time taken by india to reach one lack infected cases much higher than the other countries, but the exponential triggering has been noticed during last week of 3 rd phase of nationwide lockdown (from 4 may, 2020 to 17 may, 2020). nearly 85% cases are reported from major cities of india and most interestingly, mumbai, delhi, ahmedabad, chennai, thane, pune, kolkata become the most contributing urban centres to sars-cov-2 cases (as on 19 may, 2020). high population density and higher concentration of slum population make an interruption for maintaining the social distancing and lockdown effectively [12] [13] [14] . considering such backdrop the nature of spreading of sars-cov-2 cases in the indian cities need to be analysed. further an attempt has also been made to quantify and assess the hotspot zones along with risks of the concentrated areas of kolkata (one of the metro city) for proper understanding of transmission of diseases in the congested and unhealthy places as a case study [9, 15, 16] . 
india has only 3 cases of sars-cov-2 up to 3 march 2020, but all cases has transmitted and grow in a slow but steady in different areas of india especially in urban centres. except kasaragode, a rural base area in kerala, urban areas of maharashtra, delhi, gujarat and rajasthan have shown a large number of sars-cov-2 positive cases. in this regard, the mumbai, ahmedabad, chennai, pune, thane, indore, delhi, jaipur, kolkata, surat urban centres have been considered for initial analysis as these urban centres contribute more than 60% of sars-cov-2 cases during nationwide lockdown periods. from the analysis of urban centres contribution of sars-cov-2 cases, four megacities of india (i.e. mumbai, ncr delhi, kolkata and chennai) has been further chosen ( fig. 1 ) to find out the nature of relationship between containment zones and sars-cov-2 cases. as urban centres with high population density and high concentration of slum population faced a risk of rapid transmission of sars-cov-2, a risk analysis have also been assessed on kolkata municipal corporation for the better understanding of driving factors of transmission of sars-cov-2. the sars-cov-2 data set has been obtained from indian council for medical research (icmr) and health website which provide the real time data set on the outbreak of this pandemic. another good source of data has been found from 'how india lives'. this website provides different health infrastructure dataset at district level and uploading the real time covid-19 cases for each day at district and city level. containment zones cities which is a very important source of information to identify the sars-cov-2 hotspot has been taken from health bulletin, govt. of west bengal. 10 major cities of india are taken for primary level study with duration of before lockdown situation to present day (12-05-2020) scenario. the slum data set has been taken from unpublished baseline survey report of 2016 by the department of bustee services, kolkata municipal corporation. as most of the dataset of slum related indicators found from census of india, 2011 dataset, we have to take baseline survey report of kmc. this data base has helped to identify the nature of exposure, sensitivity and resilience of world wise slam households which can assess the nature of risk among the slums. to depict the nature of sars-cov-2 spatial association, local moran's i statistic has been used to identify the cluster and spatial outlier in the neighborhood for 12-05-2020 dataset [16] [17] [18] [19] . moran's i highlights the location based cluster form high to low infection and calculated as follows: where zi the number of sars-cov-2 containment zones at a spatial unit i (ward as an areal unit), z is the overall containment zones in the study area (kmc as a whole) and n is the number of spatial unit which are 144 (no. of wards in kmc) and v is the rate of the variance of sars-cov-2 containment zones in different wards which is computed as below: this method represents high (positive) and low (negative) values. high-high cluster values show the results cluster up of similar values with q-value range from 0.01 to 0.1 (0.1 for 90%, 0.05 for 95% and 0.01 for 99% of confidence level) and low-low cluster shows the clustering of dissimilar values with same q-values. higher the q-values with lower the z-score value shows the perfect significance of the method applied for study the nature of clustering pattern. the p value is a value of probability. 
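a minimal sketch of the ward-level local moran's i computation described above is given below; it assumes a geodataframe with one polygon per kmc ward and a containment-zone count column, and the library calls follow the pysal/esda api. the input file name and column names are hypothetical.

```python
# Sketch: local Moran's I (LISA) on ward-level containment-zone counts with
# queen-contiguity, row-standardised spatial weights and permutation p-values.
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran_Local

wards = gpd.read_file("kmc_wards.shp")              # hypothetical ward polygons
w = Queen.from_dataframe(wards)                      # contiguity-based neighbours
w.transform = "r"                                    # row-standardise the weights
lisa = Moran_Local(wards["containment_zones"], w, permutations=999)

wards["quadrant"] = lisa.q                           # 1=HH, 2=LH, 3=LL, 4=HL
wards["p_value"] = lisa.p_sim                        # pseudo p-values from permutations
hotspots = wards[(wards["quadrant"] == 1) & (wards["p_value"] < 0.05)]
```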
for the pattern study, it is the probability which create spatial pattern using some random process. the small p-value value suggests that the observed spatial pattern is an outcome of random process; hence null hypothesis can be rejected. analytical hierarchical process has been developed by satty [20] to facilitate priority setting and decision making. ahp now broadly applied in social science research and specifically in hazard and risk analysis. in this method a pairwise matrix has been developed among the set of scale of choices (table 1 ) on the given alternatives [21] . ahp methods also deliver to judge the nature of consistency of preferences given by the report using consistency ratio when the value has 0.10(cr c 0.10). the consistency ratio is defined as: where cr is consistency ratio, ri represent random number (table) and ci represent consistency ratio is expressed by for the present work, consistency ratio has been found as 0.07, consistency index as 0.04 and the value of random number for n = 3 has been determined 0.58. based on socio-economic data of slum of kolkata municipal corporation and containment zone data and containment zone data from different web sources we have selected the following indicators for quantity exposure, sensitivity and resilience for assessing the risk [22] infector disease like sars-cov-2 (see table 1 ). the study considered the following as exposure indicators, number of sars-cov-2 containment zones/ 0 0000 population (e 1 ), percentage share of slum population to total world population (e 2 ), number of containment zones sq.km (e 3 ), number of slum located in the ward (e 4 ), and percentage slum area to total area of the ward (e 5 ). the study considered the following as sensitivity indicators, hhs size (s 1 ), percentage of hhs size with 5 persons and above (s 2 ), no. of persons in the slum, used community toilet, no. of persons/tube well used (s 3 ), population density (s 4 ) and household density (s 5 ). the study considered the following as resilience indicators, percentage of hhs access to drinking water facility within premise (r 1 ), percentage of hhs access drinking water from treated source (r 2 ), percentage of literate population (r 3 ), average per capita income of the slum located in the ward (r 4 ) and work participation rate (r 5 ). number of containment zones/ 0 0000 population in one of the most important indicators affecting the exposure of sars-cov-2 cases. according to icmr, the sars-cov-2 cases are taken the epicenter of this containment zone [23] . on the basis of data, we have divided the whole data set into 5 categories having high risk factor to low risk factor (above 2.0/ 0 0000 population, 1.51-2.0/ 0 0000 population 1.0-1.5/ 0 0000 population, 0.5-1.0/ 0 0000 population and below 0.5/ 0 0000 population). share of slum population is another prime indicator to explain the exposure of such infections dieses h2h pattern. so the shanty and confected household are very much exposed to such dieses. here, we also divide the whole set of data into 5 categories on the basis of risk factor. density of containment zones in another good indicator for assessing the nature of exposure. as the density is increasing, the exposure of sars-cov-2 to other persons source: computed by the authors becomes very high. slum area to total area is another crucial factor as areal coverage increase, the congestion pattern of living, unhygienic situation are very much exposed. thus higher the percentage signifies higher risk factor. 
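the ahp weighting and consistency check described above, which is used to weight indicators such as these, can be sketched as follows; the 3x3 pairwise matrix shown is a placeholder, not the authors' actual judgements, and ri = 0.58 is the random index for n = 3 as stated in the text.

```python
# Sketch of the AHP step: derive indicator weights from a pairwise comparison
# matrix via its principal eigenvector and check consistency with CI and CR.
import numpy as np

def ahp_weights(pairwise, random_index=0.58):
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                       # principal eigenvalue
    weights = np.abs(vecs[:, k].real)
    weights /= weights.sum()                       # normalised priority weights
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)              # consistency index
    cr = ci / random_index                         # consistency ratio, accept if <= 0.10
    return weights, ci, cr

pairwise = np.array([[1, 2, 3],
                     [1/2, 1, 2],
                     [1/3, 1/2, 1]], dtype=float)  # placeholder judgements
weights, ci, cr = ahp_weights(pairwise)
```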
each of the indicators has been categories under five classes of risk factor and weightage of these indicators have been assigned using ahp method. the exposure index of sar-cov-12 has been determined using following formula. exposure index ei ð þ : where e 1c denote the ward data lies or which class [class [1] [2] [3] [4] [5] as mentioned in table 2 and e 1w is weightage assessed using ahp model. here, in this study purpose identifies the factors which can trigger the intensity and probability of spiking up of sars-cov-12 cases. household size undoubtedly increases the extent of severity of such h2h infectious disease. population belonging in a household also led the situation more badly. when social distancing is addressed nationwide, the poor people of the slums cannot maintain such as due to shortage of space in the households. community toilet and tube well use is also promoting the chances of mass gathering in the slams. many people are living under such shanty places and also depend upon these facilities which can aggravate and spread such infectious disease in the community level. sensitivity index can be expressed in the following manner: where s 1c denote the ward data lies or which class [class [1] [2] [3] [4] [5] and s 1w is weightage assessed using ahp model. resilience can be defined as reduction and prevention approach to risk any vulnerability for making an area more socio-economically stronghold [24] . drinking water facility with premises and from treated source, level of literate population are the important indicators for resilience study. on the other hand per capita income and work participation rate is the potential indicator to increase the resilience of the any households. resilience index can be determined as: where r 1c denote the ward data lies or which class [class [1] [2] [3] [4] [5] and r 1w is weightage assessed using ahp model. on the basis of assessed results from exposure index (ei), sensitivity index (si) and resilience index (ri), risk of the selected wards have been estimated using following methodology as prescribed by ipcc [22] framework: where ei w is the exposure index, w e weightage of exposure, si w is the sensitivity index, w s weightage of sensitivity, ri w is the resilience index, w r weightage of resilience. 3 results and discussion outbreak of novel corona virus (earlier it is termed as 2019-ncov and later renamed as sars-cov-2 during the preparation of this manuscript) leading to lockdown (which means entire closure of all services except frontline services and essential services, i.e., banking, fire service etc.) of entire country which took place on and from 24th april, 2020 midnight. till date of the preparation of this manuscript, india already gone through three phases of lockdown (which will end on 17th may, 2020) and during these phases, urban centres have been found the most threating situation contributing nearly 60% of total affected cases of sars-cov-2 from 10 most popular cities of india. nearly it was may 6, 2020 when india experienced 52,469 confirmed sars-cov-2 cases with 1771 death and entered into the list of top fifteen countries of the world. it is only 1.41% sars-cov-2 cases to rest of the world but it was only 0.13% (536 cases) when nationwide lockdown started. the surge of sars-cov-2 cases as seen by usa, italy, france, germany, brazil, russia was not same in india till 19 may, 2020. cases are found to be doubled in every 11 days at that time. 
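a minimal sketch of how the ward-level exposure index described in the methodology above could be assembled is given below; the same helper applies to the sensitivity and resilience indices with their own indicators. the class thresholds and the ahp weight shown are placeholders taken loosely from the text, not the authors' exact values.

```python
# Sketch: map each indicator to a risk class (1-5, with 5 the riskiest class),
# then combine the classes with their AHP weights, ei = sum_i class_i * weight_i.
import numpy as np
import pandas as pd

def to_class(values, bins):
    """bins: four ascending thresholds splitting the indicator into classes 1-5."""
    return np.digitize(values, bins) + 1

def exposure_index(wards, bins, weights):
    classes = pd.DataFrame({c: to_class(wards[c], b) for c, b in bins.items()},
                           index=wards.index)
    return classes.mul(pd.Series(weights), axis=1).sum(axis=1)

# e.g. containment zones per 100,000 population classed at 0.5 / 1.0 / 1.5 / 2.0
bins = {"cz_per_100k": [0.5, 1.0, 1.5, 2.0]}
weights = {"cz_per_100k": 0.30}                    # placeholder AHP weight
```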
but after 20 may, 2020 a large spike has been found till 2 june, 2020 which put india in the same bracket as brazil and russia in terms of upward trend in the infected cases and fatalities. the fact that despite of four lockdown imposed by central and state governments combine, the stringent index has been found to fall from 100 (on 24-03-2020) to 79.2 (26-05-2020) which make a straightway relation of surging the sars-cov-2 cases in the different parts of india (fig. 2) . another issue is the return back of stranded labour from mainly southern and western states to the eastern portion make this spread to rural areas also. based on the dataset and fig. 3 of sars-cov-2 of 10 major urban centres, mumbai has been found most cases with one-fifth (19.35%) to total country's cases up to 12th may, 2020 when the lockdown starts on 24th march, 2020 it was only 6.65% of whole country with a rapid growth of 1.183, 1.085 and 1.049 during 1st phase, 2nd phase and 3rd phase of lockdown respectively. doubling rate of cases is 1.2 times (mumbai doubled the cases in 11.5 days as on 12.05.2020) to country doubling time (9.7 days). ncr delhi also contributes a larger extent of sars-cov-2 cases with 9.78% which was only 5.25% just before lockdown. a single event at nizamuddin marcus (a religions congregation) makes the situation worse during first phase of lockdown [26] . a sharp rise or cases (nearly 13.59% to country's total cases) has been experienced during this time which has been slow down gradually. stringent actions have been taken by the respective state governments which reflect in the higher rate of doubling time (11.8 days) to country's data. after ncr delhi, ahmedabad from state of gujarat originate as a big area of concern for the country which contributing 8.22% of total country's cases at the end lapse of 3rd phase of lockdown but it was only 2.45% at the starting of 1st phase of lockdown. during 2nd phase of lockdown a tremendous spike in the cases of sars-cov-2 has been experienced by this urban centre. during these phase, most of the cases (nearly 70%) are related to travel within the country and related to delhi's religious congregation took place is end of march, 2020 [27]. through having higher value of doubling rate (11.4 days to double) from country's perspective and having decreasing growth rate. number of containment zones gives a clear picture of clustering of cases which signifies the nature of community spreading which is a matter of uneasiness. chennai shows an alarming situation with 5.43% of cases during the end period of 3rd phase of lockdown which was just 1.05% on 24.03.2020 (just before the lockdown). though the growth rate is decreasing (from 1.17% at 1st phase of the lockdown and 1.108% at nearly end of 3rd phase of lockdown) but the doubling time of cases is 6.25 days which signifies a great risk. among all the 10 urban centres under study (which are combine contribute 56:28% of cases), chennai has the lowest doubling time which is a matter of concern. on 14th may 2020, nearly 2600 sars-cov-2 have been outlined to a wholesale vegetable market named koyambedu and authorities have acknowledged it as a coronavirus hotspot [28] . after cases being reported from the popular market of on the other hand kolkata shows a steady but consistency in growth and spreading of sars-cov-2 cases. almost in all the lockdown phases, kolkata shows much below transmission of sars-cov-2 and contributing least to country's tota. 
doubling rate gives a clear picture of this scenario with almost 12.2 days which is a good indication with a population of nearly 4.5 million and having nearly 26.17% of slum population lived in this region (census, 2011). pune (3.34%), thane (3.19%), indore (2.63%), jaipur (1.70%) and surat (1.26%) contribute nearly 12.06% of the cases to country's total. growth rates of sars-cov-2 in these urban centres are constantly decreasing which is a good pictogram of action taken by the government of those states as well as social awareness with following social distancing. among all the urban centres, thane shows an alarming situation in doubling rate which is 6.9 days and thus thane can be an emerging hot spot like mumbai and chennai if proper action not taken at the earliest. two days before the second phase of lockdown started, india has charted identification of red, orange and green zones which is a strategic approach for defining the area of operation, applying perimeter control, delineating containment and buffer zones. meanwhile ministry of health and family welfare, govt. of india declared 170 districts as 'hotspots' and 207 districts as 'non-hotspots'. ministry also categorises hotspots in two way-(a) clusters-increase in the incidence of sars-cov-2 with less than 15 cases and there must be epidemiologically linked and (b) large outbreak-when more than 15 cases have been found from a defined geographical area and these cases may not be epidemiologically linked. to combat with this pandemic, state governments have begun to experiment the idea of containment zones to deal with sars-cov-2. mechanism of the containment zones is very straight forward, clusters or large outbreak which shown rise of cases and shown rapid transmission either in family or in community must be seated. movement in these zones are very limited which is only for foot line workers and residential movement is completely ban. when large number of cases are found in a smaller number of containment area it may a reflection of large outbreak (ratio is very high) but when large amount of cases with larger number of containment area (the ratio is low) suggest the clustering of disease took place. in case of comparison of confirmed new cases on daily data initiated from the period of first lockdown for four cities is smoothen using five year weighted moving average method. the study considers the weights as defined by using the rule the weighted value of moving average can be calculated from p 5 i¼1 x i w i for the corresponding five values of the series. the study deals with the three types of lock down considering them as phase-i (consist of 21 days starting from 25.03.2020), phase-ii (consist of 19 days starting from 15.04.2020), and the phase-iii (consist of 9 days starting from 04.05.2020). the smoothing data set of new cases of ncovid-19 syndrome is depicted in fig. 4 . the nature of the new cases shows that situation of kolkata is relatively best in compare to other three cities [30] . whatever be the ways the explosion of new cases are found with some controlled behaviour and even with rare fluctuations. the situations of chennai showing some controlled behaviour up to 27.04.2020 i.e. in the middle of phase-iii lockdown but as the relaxation in the lockdown started the outbreak increases with huge rate. similar path also observable for delhi ncr like chennai, but it includes massive fluctuations. as time precedes the frequencies of fluctuations in case of delhi also increases. 
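the five-term weighted moving average used above to smooth the daily new-case series can be sketched as follows; the weight vector is an assumption, since the text only states that five weighted values are combined, not the exact weights.

```python
# Sketch: five-term weighted moving average of daily new cases,
# i.e. each smoothed value is sum_i x_i * w_i over a five-day window.
import numpy as np

def weighted_ma(series, weights=(0.1, 0.2, 0.4, 0.2, 0.1)):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                   # normalise so the weights sum to one
    return np.convolve(series, w, mode="valid")    # symmetric weights, so no flip issue
```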
the situation of mumbai is worse among the all four cities. the periodic up and downs of the new cases puzzle the governance to control it. the linear trend line for cities shows that the line is steepest for mumbai, and it is more than double of delhi. four metro cities shows different pattern of transmission during end ward of 1st phase of lockdown to present time. delhi shows a higher ratio between number of sars-cov-2 cases and containment zones suggesting the larger outbreak for the region. on the other hand mumbai, chennai and kolkata show a clustering scenario of these cases during this time setting. higher number of cases with lower ratio also suggests the expansion of the sars-cov-2 cases in the new areas in faster rate. in 12 may 2020, mumbai and chennai show nearly same value (6.46 and 6.54 respectively) but with the sars-cov-2 cases nearly 3.5 times to chennai. mumbai city shows a huge spreading along different places in this period of time. during this period, delhi shows a much higher rate fluctuating from 35.63 to 42.47 (having a highest ratio of 72.05) and pinpointing the large out break as stated earlier. during this period, kolkata shows a steady pattern having a ratio of 1.9-2.64. the containment zones caused trouble to the citizens, by restricting the mobility almost entirely and have to depend on government officials and selected venders for maintaining the essential services. this methodology of curtailment of rights is temporary for the containing and stops the spreading of disease. but slums in the urban area cannot fully follow the thumb rules of such containment zone. as these people lives in a shanty, unhygienic environment, and using community toilet with large dependency on tube well for water accessibility are very much susceptible for these h2h transmission. mumbai and delhi shows the perfect example of such transmission. dharavi, the world famous slum has been hardly heated by this pandemic. kolkata face a challenge to combat with the spreading of sars-cov-2 cases in the densely slums concentrated areas [7] . most of the wards in kmc having more than 10 containment zones where a large percentage of population living in the slums. increasing number containment zones along with number of cases has proven the spatial dispersal of sars-cov-2 cases in kmc which is needed to be studied further. in kolkata municipal corporation figure 5 shows the time series pattern of the confirmed cases of sars-cov-2 from 1st case detected on 17th march, 2020 to 12th may, 2020 when cumulative number were 1068 and 2173 for kmc and state of west bengal respectively. as kolkata has experienced 1st case of sars-cov-2, here we have taken ward wise containment zone to find out the nature of hot spots located in the municipal area. as per icmr, containment zone confirms the epicentre of cases and due to unavailability of sars-cov-2 ward wise data; we used number of containment zones as proxy indicator [23] . figure 5 also confirms that, up to 1st week of april 2020 kolkata municipal corporation (cmc) contributes a huge percentage share to state's total which has been decreasing afterwards. this scenario again found from starting of fourth week of april when kolkata share nearly 70% of state's total. a huge population of nearly 45 lakh (census of india, 2011) with a high percentage share of slum population (31.35%) is very much vulnerable for transmitting this h2h virus where population density, slum density, using per capita community latrine and tube well are high [31] . 
figure 6 shows the hot spots for covid-19 using satscan on the basis of two different dates dataset on containment zones result which identifies two primary clusters and three secondary clusters with high confidence value (p-values are found less than 0.01) in kmc. the primary clusters are located covering (a) kareya, tiljala, topsia, tansra, survey park region and (b) jorabagan, burtola, girish park area. secondary clusters have been identified to designate more cases likely to aggregate from kmc which are extent from northern portion to south-western portion. these secondary clusters have also the significance values less than 0.05 (p-value). the highly decentralized nature of incidence of this disease clearly showed limited hotspots within the city of kolkata. ward no. 66 (tiljala/topsia area) has been found a large number of containment zones in a smaller area and it has increasing very sharp. this area has fallen the primary cluster of central kolkata when ward no. 18 and 26 (banstala, girish park region) are fallen under another primary cluster. the sars-cov-2 views caused tremendous pressure situation in the 97 wards in kolkata municipal corporation out of 144 wards (fig. 7a) . 227 containment zones have been found on those 97 wards on 27-04-2020 which has been increased into 338 numbers located in 124 wards. the distribution pattern of containment zones in two different dates has shown a clear increase in spatiality among the wards of kmc. the highest number of zones has been found in the wards located in central kolkata, east suburban and port area and some portion of north kolkata. huge numbers of slums (registered and unregistered) are found in those areas which may play a role of catalyst to spread and transmitted this pandemic [32]. highest concentrations of containment zones have been found in tiljala, kareya, beliaghata, phoolbgan, razabazar, tala, burtola, jorabagan, girish park, bowbazar, entally, muchipara, and survey park area. total number of containment zones in this area have been covered nearly 50% of the total containment zones (fig. 7b) . wards with high containment zones are falling in those wards which have a higher degree of slum population as well as the number. most of the areas under slums are very old and developed before independence [31] . old shanty dwelling slots with higher family size, higher dependency on community toilet/latrine with water availability source as tube well make the area more risky and vulnerable to this sars-cov-2 [33] . as it is already known to everyone that this infection can transmit h2h pattern and ro of 2.5, it is really need to assess the scenario of those slums for better approach to stop the transmission and break the chain of this infection. here need the assessment of the risk of those hotspot areas to make a proper evaluation of vulnerability and steps to be taken in coming days. municipal corporation (kmc) here where ew is weightage of exposure, ei w is the ward-wise exposure index, sw is weightage of sensitivity, si w is the ward-wise sensitivity index, rw is weightage of resilience and ri w is the ward-wise resilience index. on the resulted dataset, risk zones have been categorised into 5 classes from very high risk zone to very low risk zone. of the total selected wards under study (68), 6 are very highly exposed to sars-cov-2 followed by highly exposed wards (16), moderately exposed (22), low exposed (11) and very low exposed (9) categories. 
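the final risk score combining the three indices restated above can be sketched as follows; because the combining formula is garbled in the text, the additive form with resilience entering negatively is an assumption in the spirit of the ipcc framework, and the weights shown are placeholders.

```python
# Sketch: combine ward-level exposure, sensitivity and resilience indices with
# their weights and classify wards into five risk zones.
import pandas as pd

def risk_zones(ei, si, ri, w_e=0.5, w_s=0.3, w_r=0.2):
    risk = w_e * ei + w_s * si - w_r * ri          # resilience assumed to reduce risk
    labels = ["very low", "low", "moderate", "high", "very high"]
    return risk, pd.cut(risk, bins=5, labels=labels)
```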
the exposure index is composed of sars-cov-2 containment zones density with population, share of slum population and density of slums with areal coverage to ward area made the wards located in north and central kolkata more exposed as these areas are very old and the slums in these wards have an age of more than 75 years. old drainage system, located behind the rail lines and nullas (old sewerage lines) make the situation more worsen. overall the area under gardenreach, metiaburuz (50% of very high exposed wards from this area) with a high percentage of muslim population face the higher degree of exposed of such infections (fig. 8a) . on the initial stage reluctant situation from government end and non-maintaining the social distancing norm during nation-wide lockdown period make this areas face a terrible trouble and have experienced a number of sars-cov-2 cases with a large number of containment zones. sensitivity index analysis showing that of the total wards very high sensitivity has been found among 17 wards followed by 16 wards in high sensitive index, moderately sensitivity found among 13 wards when 15 and 7 wards are found in low to very low sensitivity values respectively. high sensitive wards are mostly lies in the north kolkata and partly in central kolkata (fig. 8b) where population density, household density, latrine and tube well dependency among the slum dwellers are very high which clearly gives the results of very high sensitive zones. these zones settled are much before of independence which has a migration legacy (7, 34) . resilience capability among the selected wards of kmc found to be high as more than 64% of the total wards are characterised by very poor to moderate level of resilience. the wards are very much lagging behind in facilities which can protect them from such infections and the nature of infrastructural development are also found low in nature. figure 8c shows that the most vulnerable wards (high to very high) located in central and eastern portion of the study area. drinking water facilities, per-capita income and work participation rate are very low in these wards. eastern portion wards of the study area are joined with kmc much later during 2000 onwards which shows the less unavailability of facilities in respect to others (33, 19) . risk analysis revealed that very high vulnerability has been observed in 11 wards followed by high risk areas with 21 wards (nearly 30% to total wards under study). as a whole high to very high wards coved nearly 47% of the studied area (fig. 8d) this indicates a disquieting situation for such infectious disease. most of the wards located in central and some portion of north kolkata. four wards also most of the containment zones, high slum population share and density with excessive dependency on community toilet and tube well are the driving forces behind this high risk factor in these areas [34] [35] [36] . on the other hand low working population, low per-capita income, high household density in topsia, tiljala, gardenreach, rajabazar, beliaghata, burabazar, jorabagan area make more hazard prone. earlier evidences of dengue fever also found the same type of hotspots in the past years. unhygienic and close spacing of settlements, not maintaining social distancing and very low per-capita space availability (nearly 5-7 persons in an 80-100 sq. feet room) make the region hotspots and rightly most of the sars-cov-2 cases found from these places. 
the spatiality of sars-cov-2 has wide-ranging expressively from april to may 2020, and it has exhibited consistency in northern and central part of kmc. a grouping of irregular and epidemic patterns of human-to-human exposure has been observed during this period [12] . by contrast, the distribution curve for cities in india, mumbai has been experienced largest outbreak in india where kolkata has constrains its spike. purely temporal cluster analyses of sars-cov-2 infection illustrated significant clusters in april and may of 2020. this finding is consistent with previous studies of eifan [34] , which showed significant peaks in mers-cov incidence between march and may in saudi arabia in 2014. in this study, sars-cov-2 was observed during mass gatherings in different part of india, which are inconsistent with previous studies [35] . this indicates another knowledge gap regarding the mode of transmission that needs further investigation. though the transmission and outbreak has not a sudden one, major urban centres have been found more vulnerable to transmit this h2h virus. high population density, concentration of high amount of slum population with high household density with low per-capita income shows the main driving factors for such outbreak. the people of those places are compel to break the said social distancing, as the people have a little, very little space for stay in household (in some cases they lived in 80 sq. feet area with 6-7 persons), they used to go and use community toilet, tube well sharing with other hundreds of population make the situation more grave (3, 8, 9) . quick preparation and execution of the containment plan, deployment of adequate human resources (mainly from health workers) at ward level, active surveillance in the well-defined geographical area and higher test (rapid antibody test) can minimise the chance of transmission in community level. as the spacing of households is very congested, such actions must be taken without any interference. revision of world urbanization prospects urbanization and global environmental change: 21st century challenges middle east respiratory syndrome coronavirus: risk factors and determinants of primary, household, and nosocomial transmission. the lancet infectious diseases health and health-related indicators in slum, rural, and urban communities: a comparative analysis understanding the spatial diffusion process of severe acute respiratory syndrome in beijing abstract for slums regional disparities of slums, 2013-an overview with special emphasis to kolkata spatial epidemic dynamics of the covid-19 outbreak in china spread of yellow fever virus outbreak in angola and the democratic republic of the congo 2015-16: a modelling study. the lancet infectious diseases modes of transmission of virus causing covid-19: implications for ipc precaution recommendations a mathematical model for simulating the phase-based transmissibility of a novel coronavirus spatiotemporal clustering of middle east respiratory syndrome coronavirus (mers-cov) incidence in saudi arabia the influence of urbanization modes on the spatial circulation of flaviviruses within ouagadougou (burkina faso) spread of yellow fever virus outbreak in angola and the democratic republic of the congo 2015-16: a modelling study. 
the lancet infectious diseases finding malaria hot-spots in northern angola: the role of individual, household and environmental factors within a meso-endemic area beyond moran's i: testing for spatial dependence based on the spatial autoregressive model spatial clustering of plasmodium falciparum in bihar (india) from 2007 to 2015. spatial information research rapid urban malaria appraisal (ruma): epidemiology of urban malaria in ouagadougou identification of malaria hot spots for focused intervention in tribal state of india: a gis based approach the analytic hierarchy process: planning, priority setting, resources allocation urban flood hazard zoning in tucuman province argentina using gis and multicriteria decision analysis climate change 2014: impacts, adaptation, and vulnerability. part a: global and sectoral aspects. contribution of working group ii to the fifth assessment report of the intergovernmental panel on climate change containment plan for large outbreaks risk assessment of coastal erosion of karasu coast in black sea an impact evaluation study of bsup programme intervention in kolkata metropolitan area (kma), kolkata municipal development authority assessment of water security in socially excluded areas in kolkata, india: an approach focusing on water, sanitation and hygiene a pandemic risk assessment of middle-east respiratory syndrome coronavirus (mers-cov) in saudi arabia a systematic review of emerging respiratory viruses at the hajj and possible coinfection with streptococcus pneumoniae slums in kolkata: a socio-economic analysis publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations acknowledgements the authors acknowledge the department of bustee service, kmc, hogg buildings (3rd floor), kolkata for providing data support for the study.authors' contributions sp and sb designed the study; sh, ab and bm contributed to data acquisition; sk and sm carried out the statistical analysis; sp and sb drafted the manuscript. all authors contributed to the interpretation of data and revision of the manuscript. all authors read and approved the final manuscript.funding this work has not been supported by any state or central government funding agencies. key: cord-127109-jdizyzbl authors: bertschinger, nils title: visual explanation of country specific differences in covid-19 dynamics date: 2020-04-15 journal: nan doi: nan sha: doc_id: 127109 cord_uid: jdizyzbl this report provides a visual examination of covid-19 case and death data. in particular, it shows that country specific differences can too a large extend be explained by two easily interpreted parameters. namely, the delay between reported cases and deaths and the fraction of cases observed. furthermore, this allows to lower bound the actual total number of people already infected. the unfolding covid-19 pandemic requires timely and finessed actions. policy makers around the globe are hard pressed to balance mitigation measures such as social distancing and economic interests. while initial studies [3] predicted millions of potential deaths never findings hint at a much more modest outcome [8, 4] . especially the case fatality rate (cfr) and the number of unobserved infections are crucial to judge the state of the pandemic as well as the effectiveness of its mitigation. 
yet, there estimates are plagued with high uncertainties as exemplified in the quick revisions even from the same institution [3, 4] most studies are based on elaborate epidemic modeling either using stochastic or deterministic transmission dynamics. especially, the susceptible-infectedrecovered (sir) model [10] forms a basic building block and has been extended in several directions in order to understand the dynamics of the ongoing covid-19 pandemic [9, 2, 7, 13] . in this context, it has not only been compared with more phenomenological growth models [12] , e.g. logistic growth, but also been used to quantify the effectiveness of quarantine and social distancing [9, 2] . e.g. social distancing, can be easily included by replacing the infection rate parameter with a function allowing it to change over time. [2] assumes one or several (soft) step functions where the infection rate drops in response to different measures after these had been implemented. such detailed modeling is required in order to capture and forecast temporal dynamics of the epidemic spreading. yet, substantial care is needed as to which parameters can be learned from the data and which cannot. indeed, i show here that sir type models -and others exhibiting similarly flexible growth dynamics -are non-identified with respect to the cfr and the fraction of observed infections. instead, a direct visual exploration of the data leads to valuable insights in this regard. in particular, much of the variability relating reported case and death counts can be explained by two easily interpreted parameters. furthermore, based on three simple assumptions a lower bound on the number of actual infections, including observed and unobserved cases, can be obtained. in turn, confirming recent estimates without the need of complex and maybe questionable modeling choices. covid-19 data are published by several sources, most notably the john hopkins university and the european center for decease prevention and control (ecdc). here, data from ecdc as available from https://opendata.ecdc. europa.eu/covid19/casedistribution/csv are used. figure 1 shows the total cumulative case and death counts of selected countries. these countries are among the eight most effected countries in terms of absolute and relative deaths 1 . in the following, i will focus on relative counts as these are arguably more meaningful when comparing different countries -which could differ widely in terms of population size. assumption 1. death counts are more reliable than case counts. by assumption 1 analysis will start from relative cumulative death counts d t in the following 2 . furthermore, in order to facilitate country comparisons, dates are shifted relative to the first day that relative death counts exceed a threshold θ of 1, 2, 4 or 8 deaths per million inhabitants respectively, i.e. t = 0 is defined such that d t ≥ θ for t ≥ 0 and d t < θ for t < 0. figure 2 shows the resulting time course of relative case and death counts. aligning dates in this fashion shows that several countries exhibit similar time courses, e.g. belgium and spain or china and south korea. as shown in the supplementary figure s1 the remaining country specific differences can be explained by differences in growth rates. re-scaling time according to the estimated doubling time indeed leads to a data collapse as complete as often observed in physical systems exhibiting scaling laws [11] . here, these differences in the precise temporal dynamics of epidemic growth are not required. 
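a minimal sketch of the alignment just described is given below, assuming a tidy table of cumulative deaths per country and a population lookup; column names are illustrative.

```python
# Sketch: per-capita death counts, with t = 0 set to the first day the relative
# deaths exceed a threshold theta (deaths per million inhabitants).
import pandas as pd

def align_on_deaths(df, population, theta=1.0):
    """df: columns ['country', 'date', 'cum_deaths']; population: {country: N}."""
    out = []
    for country, g in df.sort_values("date").groupby("country"):
        g = g.copy()
        g["rel_deaths"] = g["cum_deaths"] / population[country] * 1e6
        hit = g["rel_deaths"] >= theta
        if not hit.any():
            continue                                # threshold not reached yet
        day0 = g.loc[hit, "date"].iloc[0]
        g["t"] = (g["date"] - day0).dt.days         # t = 0 on the threshold day
        out.append(g)
    return pd.concat(out, ignore_index=True)
```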
instead, the relation between relative death and case counts is considered. while relative death counts exhibit similar time courses the corresponding relative case counts c t are more variable when aligned in the same fashion, i.e. relative to the first day that d t exceeds a given threshold. as i will argue now, most of this variability can be explained with two readily interpretable parameters. there is a well defined country specific delay between reported cases and deaths. relative days since one death per mill. estimated cfrτ for varying delays τ ita figure 3 : estimated cfr cfr τ for germany (left) and italy (right) using different delays of τ = 0, . . . , 11 days. note that in each case, there exists a characteristic delay such that estimates are almost constant over time. further note that estimates for all delays will eventually converge to the same final value when enough data are available. figure 2 suggests that relative case counts are not aligned as some countries, e.g. germany, systematically lead the counts reported in other countries, e.g. italy. such a difference could mean that individuals survive longer, e.g. due to differences in medical care, until they eventually. it could also just reflect reporting delays due to bureaucratic reasons. in any case, it is clearly the case that individuals die not immediately, but some days after they had been tested positive previously. this delay also needs to be taken into account when estimating the case fatality rate (cfr). commonly the cfr is defined as cfr = dt ct . not surprisingly this estimate is highly variable and changes systematically over time, especially at the beginning of an epidemic. the observation captured in assumption 2 also explains the surprisingly low cfrs initially announced in austria and germany where reported death counts are simply some days older compared to other countries! thus, taking into account that individuals that had been tested positive will usually not die on the same day but after some delay τ (if at all), i define i.e. comparing current death with previous case counts. figure 3 shows the cfrs estimated for germany and italy in this fashion, i.e. for different delays τ . the estimate using τ = 0 rises over time simply reflecting that due to the reporting delay death counts have not yet caught up with the exponentially growing case counts. interestingly, for each country there exists a characteristic delay at which the estimated cfrs are essentially constant. thus, reflecting the hypothesized delay between reported cases and deaths. this delay can either be estimated by visual inspection or by fitting a linear model on each delay and picking the one with minimal absolute slope 3 . figure 3 shows the delays τ and corresponding cfrs cfr τ , i.e. the median cfr value at this delay, estimated for each country in this fashion. in order to fully relate the observed case with death counts an additional, and stronger, assumption is needed. assumption 3. the true case fatality rate is the same for all countries. while assumption 3 ignores medical, demographic and other differences between countries, i believe it unlikely that the cfr is very different across different countries. in the end, its the same type of virus spreading in all countries. this suggests that differences in estimated cfrs simply reflect differences in the ability of countries to actually observe all infected individuals, i.e. due to more or less effective tracking and testing procedures. 
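the delay-adjusted estimator cfr_τ = d_t / c_{t−τ} and the rule of picking the delay whose estimate is flattest over time can be written compactly. the sketch below is an illustration, not the paper's code: it scans delays τ = 0, ..., 11 days (the range used in figure 3) and uses an ordinary least-squares slope as the flatness criterion.

```python
import numpy as np
import pandas as pd

def delay_adjusted_cfr(cases: pd.Series, deaths: pd.Series, max_delay: int = 11):
    """For each delay tau compute cfr_tau(t) = D_t / C_{t - tau}, fit a linear
    trend over time, and return the delay with the flattest estimate together
    with its median CFR."""
    best = None
    for tau in range(max_delay + 1):
        shifted_cases = cases.shift(tau)                       # C_{t - tau}
        cfr_t = (deaths / shifted_cases).replace([np.inf, -np.inf], np.nan).dropna()
        if len(cfr_t) < 3:
            continue
        t = np.arange(len(cfr_t))
        slope = np.polyfit(t, cfr_t.values, 1)[0]              # trend of cfr_tau(t)
        if best is None or abs(slope) < best[0]:
            best = (abs(slope), tau, float(cfr_t.median()))
    _, tau_hat, cfr_hat = best
    return tau_hat, cfr_hat

# usage sketch: tau, cfr = delay_adjusted_cfr(cases_series, deaths_series)
```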
to illustrate this effect, a true cfr of 1% is assumed in the following. this is consistent with current knowledge and had also been used in other studies [4] . just from the estimated values any cfr below the minimum of all estimates (about 2% found for austria and south korea) and above 0.1% (which would imply an observed fraction above one for belgium) is compatible with the data. figure 4 shows the country specific estimates of reporting delay, cfr and fraction of observed cases (assuming a true cfr of 1%) obtained in this fashion. in turn, figure 5 shows the implied relative case counts when shifted by the estimated delays and scaled to reflect the unobserved fraction of cases for each country. notably, these implied counts all align nearly as good as the death counts in figure 2 (right panel) even though the initial threshold was based on the deaths counts alone. the supplementary figure s2 shows that this holds also when re-scaling time according to the growth rate of deaths. overall, the collapse of implied case dynamics convincingly illustrates that the relation between case and death counts is fully and reliably captured by two parameters -compatible with three reasonable assumptions. in reality, an additional delay between an infection and its corresponding positive test result can be assumed. therefore, the fraction of observed cases will be even lower than obtained by the analysis above. unfortunately, assuming a sufficiently flexible model for the growth of the actual cases already the cfr and the fraction of observed cases, let alone an additional delay, are not jointly identifiable. the basic sir model [10] , assumes that an infection unfolds when susceptible (s) individuals become infected (i) -which in turn infect further susceptible individuals. finally, infected individuals recover (r) (or die) and are no longer susceptible. in continuous time, the dynamics can be described by the following system of ordinary differential equations (odes): where n ≡ s t + i t + r t is constant over time. model parameters are • the infection rate β • and the recovery rate γ. in this model, the average time of infection is γ −1 giving rise to a basic reproduction number of r 0 = βγ −1 . sir models and extensions are widely used in epidemic modeling. the have also been applied to the understand the dynamics of the ongoing covid-19 pandemic [9, 2, 7, 13] . in particular, models including the possibility of unobserved cases or including a reporting delay have been developed. within the sir framework, both effects can be included in several ways, most easily by assuming that observed cumulative infections are simply a fraction α ∈ [0, 1] of previous total infections i t + r t , i.e. α(i t−τ + r t−τ ). a more elaborate attempt instead considers more detailed dynamics of the form where a fraction α of infected individuals i t is observed (o t ) after an initial delay 1 γ i . in any case, whether observed or not, individuals recover (or die) after an additional delay. in general, the infection rates β i , β o , β u could be different for initial infections and observed vs unobserved cases 4 . in addition, mitigation measures, e.g. social distancing, can be easily included by assuming that β's are functions of time. e.g. αγ i i t . now assume a second model with α = 1 > α which nevertheless exhibits the same dynamics with an additional time shift τ . by using a time varying β (t) such that we obtain exactly the same number of observed cases, i.e. o t−τ = o t . 
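for concreteness, the basic sir system just introduced (infection rate β, recovery rate γ, constant n = s + i + r, r_0 = β/γ) can be integrated numerically in a few lines before returning to the identifiability argument. the sketch below uses scipy; the parameter values and initial state are placeholders for illustration, not fitted quantities.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma, n):
    """Standard SIR right-hand side: dS/dt = -beta*S*I/N,
    dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    s, i, r = y
    new_infections = beta * s * i / n
    return [-new_infections, new_infections - gamma * i, gamma * i]

# assumed illustrative parameters: R0 = beta / gamma = 3
n = 1e7
beta, gamma = 0.3, 0.1
y0 = [n - 1.0, 1.0, 0.0]                 # one initial infection
sol = solve_ivp(sir_rhs, (0, 365), y0, args=(beta, gamma, n), max_step=1.0)
s_t, i_t, r_t = sol.y
print("final fraction ever infected:", (n - s_t[-1]) / n)
```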
note that as α > α, we have that s t < s t−τ and s t is a sigmoidal function of time due to the sir dynamics. furthermore, when the population is large, i.e. n 1 and s 0 ≈ n the resulting β (t) is mostly driven by the drop in s t+τ as compared to the much smaller change in s t . indeed, figure 6 shows the dynamics of the above model with β = 0.3, γ i = γ r = 2 10 5 , α = 0.1 starting from (n = 10 8 , 1, 0, 0, 0). in turn, assuming α = 1 and τ = 5, the time varying infectivity β (t) is approximated by the best-fitting logistic sigmoid of the form β 1 + (β 2 − β 1 )σ( t−τ t ). note that the number of observed cases is identical, just shifted by τ , whereas the final fraction of susceptible individuals is vastly different. indeed, in the first case the epidemic is stopped by group immunity whereas in the second case effective mitigation measures are imposed. correspondingly, police implications would be vastly different in the two situations even though they are observationally indistinguishable. instead of detailed modeling of epidemic dynamics, which is further complicated due policy actions requiring flexible models with delicately chosen parameters, the present analysis is based on visual inspection of the reported data. overall, relative case and deaths counts (observed for country c) seem to be related as follows: where a c r denotes the actual infections a fraction α c ∈ [0, 1] is observed. a suitable reporting delay τ c can be estimated by visual inspection of the data, but again the fraction of observed cases α c and cfr cfr are not jointly identifiable if there exist sets of parameters such that a t−τ = αa t , as is the case for dynamic sir type models. in the end, any epidemic modeling implicitly or explicitly chooses a parametric form for the latent growth process a t and will not be identified if sufficiently flexible. yet, assumption three of a constant cfr across all countries allows to derive 1. a range of values consistent among all countries, 2. as well as recover the corresponding fraction of observed cases in each country. thereby, assuming a reasonable true cfr value, i.e. from the model implied range 0.1% to 2% which is also consistent with current knowledge, and using the estimated delay, the actual case numbers can be reconstructed. figure 7 shows the resulting actual relative infection counts across several countries. note that despite the simplicity of this analysis, the estimated numbers compare favorable [4] . indeed, i would rather trust these even more as they do not rely on complex modeling assumptions but follow from visual inspection of the data. overall, i have illustrated that much of the variability between observed case and deaths counts between different countries can be explained by two parameters. namely, the reporting delay τ and the fraction of observed cases. especially the reporting delay exhibits crucial differences between countries and needs to be taken into account when comparing data and planning actions. in particular, containment is challenging when long incubation times are involved [1] but a combination of case tracing and isolation policies could be effective [5, 6] . thus, detailed epidemic modeling is certainly needed in order to judge the effectiveness of current mitigation measures across different countries [4, 2] . on the other hand, important parameters need to fixed based on additional knowledge as they cannot be identified within sufficiently flexible models. 
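the reconstruction behind figure 7 follows directly from the relation between deaths and actual infections: with an assumed true cfr and the estimated per-country reporting delay, actual (observed plus unobserved) cumulative infections can be back-calculated from the death counts, and the fraction of cases observed is the ratio of the true to the estimated cfr. the sketch below is one possible rendering of that arithmetic; the 1% true cfr is the working assumption used in the text, and the example numbers in the comments are invented.

```python
import pandas as pd

def implied_actual_infections(deaths: pd.Series, delay: int,
                              true_cfr: float = 0.01) -> pd.Series:
    """Lower-bound reconstruction of actual cumulative infections:
    A_t ~= D_{t + delay} / true_cfr, i.e. deaths shifted back by the
    reporting delay and scaled by the assumed true CFR."""
    return (deaths / true_cfr).shift(-delay)

def implied_fraction_observed(estimated_cfr: float, true_cfr: float = 0.01) -> float:
    """Fraction of infections actually observed: alpha = true_cfr / cfr_tau."""
    return true_cfr / estimated_cfr

# usage sketch (assumed numbers): a country with cfr_tau ~ 0.05 and a 7-day delay
# alpha = implied_fraction_observed(0.05)   # -> 0.2, i.e. roughly 1 in 5 cases observed
# actual = implied_actual_infections(deaths_series, delay=7)
```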
in the end, data analysis and detailed modeling alone only gets us only that far and more extensive testing is urgently needed to obtain reliable knowledge about the current progression of the covid-19 pandemic. figure s1 : aligned data as in figure 2 , but time is additionally re-scaled to match local growth rate of the epidemics. a data collapse by re-scaling time aligning the data as in figure 2 still shows country-specific differences in the temporal course of epidemic spreading. much of this difference can be attributed to the speed at which the epidemic spreads in different countries. estimating the local growth rate of deaths d log dt dt by the three day running average of observed changes log d t+1 − log d t , relative time, i.e. relative to the threshold of total deaths reached, is re-scaled to match local growth rates. figure s1 shows the resulting data collapse for d t and the corresponding c t dynamics. further, taking the estimated relation between cases and deaths via cfr and country specific delays into account an almost complete data collapse for the cases is obtained. not that as in the main text, data are aligned according to relative death counts only. furthermore, the temporal re-scaling is based on the estimated growth rate from the death counts as well. yet, shifting and scaling case data according to the estimated country specific delay and fraction of observed cases leads to an almost complete data collapse as well. as individual countries can be hard to identify in figures 2 and 5 , the ny times featured panel views where each country is highlighted above a background of all countries. here, i provide similar figures for relative death and case counts using a threshold of two deaths per million inhabitants. note that an sir model already includes a natural delay between infections and recovery (or death). indeed, the total number of cases is given by c t = i t + r t while the cumulative death toll is obtained as cfrr t , i.e. modeling that a fraction of individuals does not recover but dies instead. assuming that only a fraction α of cases is observed, the model is estimated with the following relative days since two death per mill. relative count figure s4 : details of aligned and adjusted case counts for threshold of two deaths per million. sampling distribution thus, observed daily changes are related to the model implement changes via an over-dispersed poisson aka negative binomial distribution. figure s5 shows the resulting estimates assuming β t = β 1 + (β 2 − β 1 )σ( t−τ t ) and cfr = 1% 6 . the sir model assuming a single change point in the infectivity, via the logistic sigmoid sigma(·) in β t reflecting the implementation of social distancing is clearly able to capture the epidemic dynamics. yet, parameter uncertainties, especially about the reporting delay can be large 7 . bayesian estimates have been carried out using stan (full code available from my https://github.com/bertschi/covid repository) and using weakly informative broad normal or student-t prior distributions on all parameters. 6 due to the non-identifiability derived in the main text either α or cfr needs to be fixed. 7 the high uncertainty could also reflect that an sir dynamics is misspecified in that it corresponds to an exponential delay distribution. such additional model assumptions need to be carefully chosen in order to obtain meaningful parameter estimates. figure s5 : model predictions and estimated parameters from sir model fitted to data from italy (top) and germany (bottom). 
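the full bayesian implementation is in stan at the repository cited above; purely to make the functional forms concrete, the sketch below writes the change-point infectivity β_t = β_1 + (β_2 − β_1)σ((t − τ)/t) and an over-dispersed poisson (negative binomial) draw for observed daily changes in python. all numerical values are placeholders, not estimates from the fit.

```python
import numpy as np

def beta_t(t, beta1, beta2, tau, T):
    """Infectivity with a single soft change point: logistic interpolation
    from beta1 (before mitigation) to beta2 (after)."""
    sigma = 1.0 / (1.0 + np.exp(-(t - tau) / T))
    return beta1 + (beta2 - beta1) * sigma

def neg_binomial_sample(rng, mean, dispersion):
    """Over-dispersed Poisson (negative binomial) draw for an observed daily
    change, with variance = mean + mean**2 / dispersion."""
    p = dispersion / (dispersion + mean)
    return rng.negative_binomial(dispersion, p)

rng = np.random.default_rng(0)
ts = np.arange(0, 60)
betas = beta_t(ts, beta1=0.4, beta2=0.1, tau=30, T=3)           # assumed values
toy_means = 50 * betas                                           # toy expected daily counts
obs = [neg_binomial_sample(rng, mean=m, dispersion=10.0) for m in toy_means]
```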
mitigation and herd immunity strategy for covid-19 is likely to fail. medrxiv inferring covid-19 spreading rates and potential change points for case number forecasts impact of nonpharmaceutical interventions (npis) to reduce covid-19 mortality and healthcare demand estimating the number of infections and the impact of non factors that make an infectious disease outbreak controllable a retrospective bayesian model for measuring covariate effects on observed covid-19 test and case counts substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (sars-cov2) fundamental principles of epidemic spread highlight the immediate need for large-scale serological surveys to assess the stage of the sars-cov-2 epidemic. medrxiv effective containment explains subexponential growth in confirmed cases of recent covid-19 outbreak in mainland china scaling, universality, and renormalization: three pillars of modern critical phenomena rational evaluation of various epidemic models based on the covid-19 data of china. medrxiv modeling the epidemic dynamics and control of covid-19 outbreak in china. medrxiv key: cord-020544-kc52thr8 authors: bradt, david a.; drummond, christina m. title: technical annexes date: 2019-12-03 journal: pocket field guide for disaster health professionals doi: 10.1007/978-3-030-04801-3_7 sha: doc_id: 20544 cord_uid: kc52thr8 7.1 humanitarian programs 141; 7.2 security sector 153; 7.3 health sector 158: core disciplines in disaster health 161. primary health care programs 162. disease prevention 162. clinical facilities 164. reproductive health 165. water and sanitation 166. food and nutrition 171. chemical weapons 181. epi methods 184; 7.4 tropical medicine 187: tropical infectious diseases—vector-borne and zoonotic 196. tropical infectious diseases—non-vector-borne 215; 7.5 epidemic preparedness and response 239; 7.6 communicable disease control 242: diarrhea 244. influenza 257. malaria 263. measles 267. meningitis 269. viral hemorrhagic fever 272; 7.7 diagnostic laboratory 275: indications, laboratory tests, and expected availability 276. specimen handling 278; 7.8 acronyms 282; this section provides guidance on technical issues in the health sector. the annexes contain compilations of frequently used reference information. • humanitarian programs-contains conceptual frameworks on global clusters, relief programs, humanitarian financing, and early recovery. • security sector-contains key definitions from the rome statute of the international criminal court • health sector-contains a broad range of core health technical information including environmental classification of water and excreta-related diseases, disease prevention measures, water treatment end points, anthropometric classifications, micronutrient deficiency states, management of chemical weapon exposures, and epi methods. • tropical medicine-contains clinical summaries of tropical infectious diseases with details on disease vector and host, clinical presentation, diagnostic lab tests, clinical epidemiology, and therapy. • epidemic preparedness and response-contains core principles of epidemic preparedness and response. • communicable disease control-contains an overview of selected communicable diseases of epidemic potential including diarrhea, influenza, malaria, measles, meningitis, and viral hemorrhagic fever. • diagnostic laboratory-contains guidance on lab specimen handling and testing. • acronyms-contains acronyms commonly used in disaster management and humanitarian assistance. 
a. in-kind donations (eg food, seeds, tools, fishing nets, etc) b. types of community projects in food-for-assets programs (1) natural resources development (a) water harvesting (b) soil conservation (2) restoration of agri(aqua)culture potential (a) irrigation systems (b) seed systems (3) infrastructure rehabilitation (a) schools (b) market places (c) community granaries (d) warehouses (e) roads (f) bridges (4) diversification of livelihoods (a) training and experience sharing 2. increase individual purchasing power a. cash distribution b. cash for work (cash for assets) c. vouchers d. micro-credit e. job fairs f . artisanal production g. livelihoods/income generation 3. support market resumption a. market rehabilitation b. infrastructure rehabilitation c. micro-finance institutions goals-protect what's left (1 month), restore the system (6 months), improve the system ( 1. promote transformational development support far-reaching, fundamental changes in relatively stable developing countries, with emphasis on improvements in governance and institutions, human capacity, and economic structure, so that countries can sustain further economic and social progress without depending on foreign aid. focus on those countries with significant need for assistance and with adequate (or better) commitment to ruling justly, promoting economic freedom, and investing in people. reduce fragility and establish the foundation for development progress by supporting stabilization, reform, and capacity development in fragile states when and where u.s. assistance can make a significant difference. 3. support strategic states help achieve major u.s. foreign policy goals in specific countries of high priority from a strategic standpoint. 1. international cooperation to protect lives and health 2. timely and sustained high-level political leadership to the disease 3. transparency in reporting of cases of disease in humans and in animals caused by strains that have pandemic potential to increase understanding, enhance preparedness, and ensure rapid and timely response to potential outbreaks 4. immediate sharing of epidemiological data and clinical samples with the world health organization (who) and the international community to characterize the nature and evolution of any outbreaks as quickly as possible 5. prevention and containment of an incipient epidemic through capacity building and in-country collaboration with international partners 6. rapid response to the first signs of accelerated disease transmission 7. work in a manner supportive of key multilateral organizations (who, fao, oie) 8. timely coordination of bilateral and multilateral resource allocations; dedication of domestic resources (human and financial); improvements in public awareness; and development of economic and trade contingency plans 9. increased coordination and harmonization of preparedness, prevention, response and containment activities among nations 10. actions based on the best available science 1. genocide (article 6)-acts committed with intent to destroy, in whole or in part, a national, ethnic, racial, or religious group a. killing members of the group b. causing serious bodily or mental harm to members of the group c. inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part d. imposing measures intended to prevent births within the group e. forcibly transferring children of the group to another group 2. 
crimes against humanity (article 7)-acts committed as part of a widespread or systematic attack against any civilian population, with knowledge of the attack a. murder b. extermination c. enslavement d. deportation e. imprisonment in violation of international law f. torture g. rape, sexual slavery, enforced prostitution, forced pregnancy, enforced sterilization, or other comparable form of sexual violence h. persecution on political, racial, national, ethnic, cultural, religious, gender, or other grounds universally recognized as impermissible under international law i. enforced disappearance j. apartheid k. other inhumane acts intentionally causing great suffering or serious injury to body or to mental or physical health 3. war crimes (article 8) a. grave breaches of the geneva conventions of 12 aug 1949 (1) willful killing (2) torture or inhumane treatment including biological experiments (3) willfully causing great suffering (4) extensive destruction and appropriation of property (5) compelling a pow to serve in the armed forces of a hostile power (6) willfully depriving a pow of the right to a fair trial (7) unlawful deportation (8) taking of hostages b. serious violations of laws and customs applicable in international armed conflict (1) intentionally directing attacks against the civilian population or against civilians not taking direct part in hostilities (2) intentionally directing attacks against civilian objects (3) intentionally directing attacks against personnel, installations, material, units, or vehicles involved in humanitarian assistance or peacekeeping mission (4) intentionally launching an attack in the knowledge that it will cause incidental civilian loss of life or severe damage to the natural environment (5) attacking undefended towns, villages, dwellings, or buildings which are not military targets (6) killing or wounding a combatant who has surrendered (7) improper use of a flag of truce, flag or insignia or uniform of the enemy or of the un, or emblems of the geneva conventions resulting in death or serious personal injury (8) transfer by the occupying power of parts of its own civilian population into the territory it occupies, or the deportation or transfer of all or parts of the population of the occupied territory within or outside the territory (9) intentionally directing attacks against buildings dedicated to religion, education, art, science, charitable purposes, historic monuments, hospitals, and places where sick are collected, provided they are not military objectives (10) subjecting persons to physical mutilation or to medical or scientific experiments which are not justified by the medical treatment nor carried out in his/her interest (11) killing or wounding treacherously individuals belonging to the hostile nation or army (12) declaring that no quarter will be given (13) destroying or seizing the enemy's property unless such be imperatively demanded by the necessities of war (14) declaring abolished, suspended, or inadmissible in a court of law the rights and actions of the nationals of the hostile party (15) compelling the nationals of the hostile party to take part in the operations of war directed against their own country (16) pillaging a town or place, even when taken by assault (17) a range of generic prevention measures should be considered for its impact on diseases in a biological "all-hazards" environment. 
overall, excreta disposal, water quantity, personal hygiene, and food hygiene commonly contribute more to environmental health than do other listed measures. epidemic threats will oblige heightened consideration of disease-specific strategies for prevention and control. c. water treatment (bold text of particular relevance in clinical facilities) 1 ppm = 1 mg/kg (solids) = 1 mg/l (liquids) = 1 ug/ml (liquids) = basic unit of measure for chloroscopes :10,000 ppm = 1% • sam = whz < −3, muac <11.5 cm, or bilateral pitting edema (who). whm not in definition. • sam prevalence worldwide ≈ 20,000,000. • sam mortality ≈ 9x mortality of normally nourished child and its cfr can be 10-50%. • gam = mam + sam • gam = moderate wasting cases, severe wasting cases, or bilateral pitting edema cases (where due to malnutrition) • underweight is not used for screening or surveys in nutritional emergencies. it reflects past (chronic) and present (acute) undernutrition and is unable to distinguish between them. it encompasses children who are wasted and/or stunted. however, weight gain over time can be a sensitive indicator of growth faltering which is easily tracked on road to health charts. • stunting generally occurs before age 2. it is irreversible. • stunting prevalence worldwide ≈ 165,000,000. • stunting is not a good predictor of mortality, but the cfr from ids in cases of severe stunting ≈ 3x the cfr from ids in cases without stunting. reference standards can be absolute muac, centile, % of median reference, or z scores: • muac easy to understand. an excellent predictor of mortality. permits comparisons between age groups insofar as the low growth velocity of muac in the u5 age group makes data roughly comparable. may be used alone in "quick-and-dirty" convenience samples to estimate local prevalence of wasting. however, not used alone in authoritative anthropometric surveys, and is commonly part of a two stage screening process to determine eligibility for feeding programs. • overall whz gives higher prevalence of malnutrition than whm for the same population. this is most marked where there is low baseline prevalence of disease, and especially for adolescents (who get subsequently over-referred). whz is more statistically valid, but whm is better predictor of mortality and is used for admission to tfcs. weight-for-age is influenced by weight-for-height and height-for-age. it can be difficult to interpret. b. adults and adolescents (o10) anthropometrics: bmi = weight (kg) / height (m) 2 1. death rates-calculated incidence of death expressed per 10,000 p/d or per 1000 p/mo; data collected by retrospective surveys (eg 3 month period) to gauge severity of public health emergency particularly where sudden events lead to spike in mortality a. cdr-crude death rate b. asdr-age-specific death rate (eg u5dr or death rate of children 0-5 yr) during a studied time interval (written as 2. mortality rates-calculated probability of dying before a specified age expressed per 1000 live births; data collected by national health authorities in periodic (annual) demographic surveys to reflect ongoing health status a. cmr-calculated probability of mortality in given population for specific time b. imr-calculated probability of a live borne child dying before 1 yr c. u5mr-calculated probability of a live borne child dying before 5 yr nb mr ≠ dr. eg cmr ≠ cdr, u5mr ≠ u5dr. different rates measure different things and are not directly comparable. 
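the unit difference between death rates and mortality rates is easiest to see with the denominators written out. the sketch below is illustrative only (the survey numbers are invented for the example): it computes a crude death rate from a retrospective mortality survey in deaths per 10,000 persons per day, and an under-5 mortality rate in deaths per 1,000 live births.

```python
def crude_death_rate(deaths: int, population: int, recall_days: int) -> float:
    """CDR from a retrospective survey, expressed per 10,000 person-days."""
    return deaths / (population * recall_days) * 10_000

def under5_mortality_rate(deaths_under5: int, live_births: int) -> float:
    """U5MR: deaths of live-born children before age 5, per 1,000 live births."""
    return deaths_under5 / live_births * 1_000

# assumed example: 45 deaths in a camp of 20,000 people over a 90-day recall period
cdr = crude_death_rate(45, 20_000, 90)        # 0.25 deaths / 10,000 persons / day
# assumed example: 80 under-5 deaths among 1,000 live births in a demographic survey
u5mr = under5_mortality_rate(80, 1_000)       # 80 per 1,000 live births
```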
however, mrs may be converted into drs by the following: cdr or u5dr (deaths/10,000/d) = − ln(1−p/1000) × 5.47 where p = cmr or u5mr (deaths/1000 live births). however, this has little field utility. nb mmr-maternal mortality ratio has different units in numerators (maternal deaths) and denominators (live births), thus is a ratio, not a rate the application of study findings to an entire population from which the sample was drawn. if the survey was well-conducted, the results may be considered representative of the entire population. this is scientifically justified. however a confidence interval should accompany any parameter estimate of that population. extrapolation the extension of study findings to a population or period which was not represented in the sample. it works by association-if 2 populations appear to be experiencing similar conditions, the morbidity/mortality experience of one may be imputed to the other. this is not scientifically justified, but is often done where data are insufficient or impossible to collect. 6 s/sx think differential diagnosis (below). 2. severe muscle pain may be a symptom of sepsis even without fever. 3. elderly patients with sepsis may be afebrile. in elderly patients, fever is rarely caused by a viral infection. 4 . septic patients who are hypothermic have a worse prognosis than those with high fever. treat as a medical emergency. 5. fever in a postoperative patient is usually related to the surgical procedure (eg pneumonia, uti, wound, or deep infection). 6 . fever with jaundice is rarely due to viral hepatitis. think liver abscess, cholangitis, etc. 7. the rash of early meningococcal infection may resemble a viral rash. 8. generalized rashes involving the palms and soles may be due to drugs, viral infections, rickettsial infections, or syphilis. 9. all febrile travelers in or returned from a malaria infected area must have malaria excluded. 10. disseminated tb must be suspected in all elderly patients with fever and multisystem disease who have been in an area with endemic tb. 11. septic arthritis may be present even in a joint which is mobile. 12. back pain with fever may be caused by vertebral osteomyelitis or an epidural abscess. 13. a patient may have more than one infection requiring treatment (eg malaria and typhoid), especially if they are elderly, immunosuppressed, or have travelled. 14. always remember common infections, not just opportunistic infections, in aids patients with a fever. understand morbidity multipliers-measles, malnutrition, and tb/hiv. understand occult co-morbidities. for any undifferentiated illness, even in infants, think of hiv, tb, syphilis, and sarcoid. for any child, think of malaria, hookworm, and anemia. malarial anemia usually in pedes <3 year-old; hookworm anemia usually in pedes >3 year-old. for any icp, think of tb, vl, histoplasmosis, and strongyloides. must treat early. watch for clinical mimics-malaria presenting as pneumonia or diarrhea in pedes; vl presenting as malaria in adults; lepto presenting as mild df (esp in df endemic areas where the pt has mild onset of illness, worsening course, and no rash but jaundice). tx do basic things well, use equipment you understand, teach others, delegate. this annex profiles selected communicable diseases of epidemic potential whose incidence, management complexity, or mortality obliges particular attention. • if (+) agglutination to o1 antisera, then the strain is further tested for agglutination to antiserum of ogawa and inaba serotypes. 
• if (+) agglutination to o139 antisera, then the strain is not further subdivided (except as producer or non-producer of ct as noted below). • if (−) agglutination to o1 and o139 antisera, then the strain is known as non-o1, non-o139 v. cholerae. a strain is further identified as a producer or non-producer of cholera toxin (ct). ct production is a major determinant of disease development. strains lacking ct do not produce epidemics even if from the o1 or o139 serogroup. • serogroup o1 exists as 2 main biotypes-classical and el tor-though hybrids also exist. each biotype occurs as two serotypes-ogawa and inaba. classic biotype caused the 5th and 6th pandemics but little epidemic disease since the 1970s though it still causes cases in india. el tor biotype caused the 7th (current) pandemic and almost all recent outbreaks. el tor was first isolated in 1905 in el tor, egypt after importation by indonesian pilgrims travelling to mecca. it survives longer in the environment and produces ct similar to the classical biotype. presumably because of ct pathogenicity, the % of cholera patients with severe disease has doubled over the past 10 yrs. these patients tend to require iv fluid therapy. • serogroup o139 may have evolved from strains of o1 el tor as they share many properties though not agglutination. in spring of 2002 in dhaka, o139 cases exceeded o1 el tor cases for the first time, and it was postulated that o139 may become the cause of an 8th pandemic. however, since then, o1 has again become dominant. infective dose depends on individual susceptibility. relevant host factors include immunity produced by prior infection with serogroup o1 as well as stomach acidity. id 50 may be 100,000 orgs, so personal hygiene plays a lesser role than in shigellosis where the id 50 is much lower. shigella has 4 species. • s. dysenteriae type 1 (sd1 or shiga bacillus) causes the severest disease of all shigella sp because of its neurotoxin (shiga toxin), longer duration of illness, higher abx resistance, higher cfr thru invasive complications, and great epidemic potential. • s. flexneri is the most common, and is generally endemic, in developing countries • s. sonnei is the most common in industrial countries • s. boydii and s. sonnei give mild disease. some kinds of e. coli produce a shiga toxin. shiga toxin genes reside in a bacteriophage genome integrated into the bacterial chromosome. some abx, eg fluoroquinolones, induce expression of phage genes. the bacteria that make these toxins are variously called "shiga toxin-producing e. coli" (stec), "enterohemorrhagic e. coli" (ehec), or "verocytotoxic e. coli" (vtec). all terms refer to the same group of bacteria. • e. coli o157:h7 (often called "e. coli o157" or "o157") is the most commonly identified stec in north america, and it causes most e. coli outbreaks. approximately 5-10% of ehec infections result in hus. • non-o157 stec serogroups also cause disease. in the usa, serogroups o26, o111, and o103 are the most commonly identified e. coli pathogens overall. weather (esp weeks 15-20 in apr-may) creating increased biological activity; post-monsoon (esp weeks 30-40 in aug-sep) with contamination of water sources. pre-monsoon epidemics are generally worse than postmonsoon ones. dysentery has low level year-round incidence, but epidemics occur roughly each decade. epidemic strains display new, additive antibiotic resistance which probably triggers the epidemic. once resistant strains have become endemic, antibiotic susceptibility rarely reappears. 
sd1 acquires resistance quickly. sf acquires it more slowly, and that resistance may wane with decreasing abx pressure. at icddr, annual proportional incidence approximates the following: clean water and waste management especially for cholera. personal hygiene (hand washing with soap and clean towels) especially for shigella. water safe drinking water (boiled, chlorinated) nb sphere standards are not enough-you need increased quantities of chlorinated water at household level. san clean latrines for safe disposal of excreta hand washing with soap food safe food (cooked, stored) breast feeding fomites safe disposal of dead bodies with disinfection of clothing nb after outbreak of a fecal-oral pathogen, food hygiene and funereal practices may influence human-to-human transmission more than water quality. health education to affected population wash hands with soap: after using toilets/latrines. after disposing of children's feces. before preparing food. before eating. before feeding children. dukoral has been the main vaccine considered for use in high-risk populations. • morc-vax and shanchol-similar to dukoral except they do not contain the rbs, hence do not require a buffer, and are 1/3 the cost to produce. morc-vax, produced in vietnam, is derived from a vaccine administered to millions of people since 1997, but is not who pre-qualified, and is not expected to have international distribution. • shanchol, produced in india, has international distribution (eg used in the haiti cholera vaccination campaign of 2012), and is now the agent of choice for who. it confers immunity 10d p 2nd dose, effectiveness > 85% at 6 mo, and protection >50% at 5 yr. also confers short-term protection vs etec. dose: 1.5 cc vaccine followed by water ingestion but no fasting needed; 2 doses, 2 wks apart; cold chain required except for day of use. • orochol-bivalent formulation as in dukoral without rbs of ct. dose: single dose. no longer manufactured. who recommendations: "vaccination should not disrupt the provision of other high-priority health interventions to control or prevent cholera outbreaks. vaccines provide a short-term effect that can be implemented to bring about an immediate response while the longer term interventions of improving water and sanitation, which involve large investments, are put into place." [1] icddr recommendations: "because of limitations in terms of transport, formulation, and cost of the current dukoral vaccine, the cots program does not require the utilization of the vaccine during an outbreak; it is not necessary to vaccinate to overcome an outbreak. however, if dukoral is readily available and staff are properly trained in its use according to the guidelines that come with the vaccine, the cots program permits dukoral's use (ideally before an outbreak) in the following high-risk populations: refugee populations in which cholera is present, health care workers managing cholera cases, and communities in which the incidence rate is greater than 1 in 1000 annually." 
[2] epidemiological surveillance (specific to cholera) epidemiological assumptions (who, cots): estimated attack rates: 10-20% extremely vulnerable hosts and poor environmental health (who) 5% (refugee camps with malnutrition) (cots) 2% (rural communities of <5000 p) (cots) 1% (severe epidemic-good estimate of ultimate disease burden) (who) 0.6% (endemic areas with bad sanitation) (cots) 0.2% (endemic areas in open settings-suitable for initial calculations of early resource requirements) nb overall, 90% of cases are mild and difficult to distinguish from other types of d. nb asymptomatic carriers are very common (10x # of cases). referral rates for ivs 20% of cases (much higher-70% at icddr as it shortens recovery time) case fatality ratios 1% (with good care) the following catchment populations will yield 100 acute pts of whom 20 will be severely dehydrated: refugee camp of 2000 people (ar of 5% = 100 pts) open settings in endemic area with 50,000 people (ar 0.2% = 100 pts) a population of 100,000 infected individuals in an epidemic area will yield the following (who): population infected 100,000 clinical cases 1,000 (1% of infected population) cases needing early resources 200 (20% of cases) cases needing iv therapy 200 (20% of cases) anticipated deaths 10 (1% cfr) nb in non-endemic areas, ar adults > ar pedes because adults have higher exposure risks. in endemic areas, ar pedes > ar adults because adults have been exposed since childhood delivery of health services shigella are fragile and difficult to recover if transport time > 1 d. 5-10 isolates initially to confirm outbreak 30-50 isolates initially to create abx use policy (bacterial resistance renders cotrimoxazole, amp/amox, nalidixic acid, and tetracycline unusable) 20-30 isolates monthly from ipd and opd before abx therapy to assess evolving abx resistance 10-20 isolates periodically to reference laboratory to confirm abx resistance patterns and undertake molecular studies 20 isolates at end of the outbreak to confirm that new diarrheas are not epidemic pathogens nb systematic sampling is most representative-eg every 10th pt or all pts q 2 weeks adjusted as needed to collect the necessary specs. sensitivity > > important than specificity in rdt screening during an epidemic. pts from one geographic area are more likely to constitute a cluster involving a new pathogen. an area may be considered cholera-free after 2 incubation periods (total of 10 d) have passed without cholera disease. however, hospital monitoring should continue for a year due to tendency of enteric pathogens to re-emerge long after they are declared gone. cholera may be viable but nonculturable from the environment; environmental monitoring has many false negatives. consider improvements to existing diagnostic labs • hotline set up for reporting of rumor this often translates into a hastily conceived vaccination campaign that distracts from core principles of cholera management. for every symptomatic pt, there may be 90 asymptomatic carriers. in an established epidemic, the affected community is already extensively infected. cholera vaccination, under these circumstances, has little public health benefit for the resource investment. if undertaken, the following will apply: • vaccination campaign requires numerous staff. community mobilizers are key. clinical staff should not be poached from their clinical duties. supervisors must be free to move at will. • logistics is key-if the 1st day goes badly, the campaign goes badly. • mark the domiciles which are done. 
• hold after-action meetings each day. • last day, use mobilizers with mobile broadcasting to attract those who missed out. • second phase vaccination should include chws with multi-purpose messages on water and sanitation. avoid: press exaggeration abx prophylaxis reliance on ivf and insufficient ors lab investigation of cases once epidemic etiology is ascertained prolonged hospitalization hospital discharge criteria requiring multiple negative stool cultures enthusiasm for ocv during epidemic exaggerated water purification objectives concentration of technical competencies in moh at expense of districts failure to share information with stakeholders influenza viruses comprise 3 genera-influenza types a, b, and c-each with 1 species. • influenza type a is divided into subtypes based upon serological response to hemagglutinin (ha) and neuraminidase (na) glycoproteins. there are 16 different ha subtypes and 9 different na subtypes. h1n1, h2n2, and h3n2 are responsible for the major human pandemics in the last century. h2n2 virus circulated between 1957 and 1968 but currently does not. only influenza a subtypes infect birds, and all subtypes can do so. bird flu viruses do not usually infect humans. but, in 1997, an outbreak of h5n1 avian influenza in poultry in hong kong marked the first known direct human transmission of avian influenza virus from birds to humans. since then, h5, h7, and h9 avian influenza subtypes have been shown to infect humans. • influenza type b is morphologically similar to a and also creates seasonal and epidemic disease. • influenza type c is rare but can cause local epidemics. seasonal human influenza vaccine currently has 3 strains-h1n1/h3n2/b. influenza disease in humans has a short incubation period (1-3 d). early symptoms are non-specific. it is highly infectious, especially early in the course of the disease, with a large # of asymptomatic carriers. transmission potential (r 0 ) is a function of infectivity, period of contagiousness, daily contact rate, and host immunity. in general, the faster the transmission, the less feasible is interrupting transmission thru usual disease control tools of case finding, isolation, contact tracing, and ring vaccination. • specific groups of exposed or at risk in the community-most likely to work when there is limited disease transmission in the area, most cases can be traced to a specific contact or setting, and intervention is considered likely to slow the spread of disease eg quarantine of groups of people at known common source exposure (airplane, school, workplace, hospital, public gathering; ensure delivery of medical care, food, and social services to persons in quarantine with special attention to vulnerable groups) (useless once there is community-based spread) eg containment measures at specific sites or buildings of disease exposure (focused measures to > social distance) cancel public events (concerts, sports, movies) close buildings (recreational facilities, youth clubs) restrict access to certain sites or buildings • community-wide measures (affecting exposed and non-exposed)most likely to work where there is moderate to extensive disease transmission in the area, many cases cannot be traced, cases are increasing, and there is delay between sx onset and case isolation. 
infection control measures ari etiquette-cover nose/mouth during cough or sneeze, use tissues, wash hands avoidance of public gatherings by persons at high risk of complications nb use of masks by well persons is not recommended "snow" (stay-at-home) days and self-shielding (reverse quarantine) for initial 10 d period of community outbreak-may reduce transmission without explicit activity restrictions closure of schools, offices, large group gatherings, public transport (pedes more likely to transmit disease than adults) nb community quarantine (cordon sanitaire)-restriction of travel in and out of an area is unlikely to prevent introduction or spread of disease anopheles vector biology egg becomes adult mosquito in 9 d adult mosquito becomes infective in 12 d after bite on infected host susceptible human host becomes infective in 9 d after bite from infected mosquito :. earliest human clinical disease in 30 d after eggs are laid follow the 4-d rule: dusk and dawn stay indoors as much as possible with window screens in good repair dress in light colored long sleeve shirts and long pants when outside identify cause of the outbreak undertake vaccination campaign strengthen routine immunization and surveillance meningitis is a disease with significant mortality. meningococcus (neisseria meningitides) is renown for its rapid onset, rapid progression (death sometimes within hours), and high mortality (50% untreated). there are 13 serogroups of neisseria meningitides but only 6 (a, b, c, w, x, y) are known to cause epidemics. the bacteria spread from person to person via respiratory and nasal secretions. polysaccharide vaccines are available with 2 serotypes (a and c), 3 serotypes (a, c, and w) or 4 serotypes (a, c, w, and y). duration of immunity is approximately 3 years. meningococcal protein conjugate vaccines confer longer immunity but at higher cost than polysaccharide vaccines. monovalent conjugate vaccine against group c dates from 1999, and tetravalent (a, c, w, and y) conjugate vaccine dates from 2005. group b vaccine made from 4 bacterial proteins has been licensed since 2014 but is not readily available. meningococcal vaccines have a very low incidence of side effects. regular disease surveillance is necessary to detect outbreaks. the epidemic threshold is 10 suspected cases/ 100,000 population in any given week. two suspected cases of meningitis in the same settlement should trigger an outbreak investigation. nasopharyngeal carriage rates do not predict epidemics. 80-85% of meningococcal disease presents with meningitis. 80% of cases occur in patients <30 y/o. peak incidence in meningitis belt is ages 5-10 yrs. diagnosis is straightforward when patient presents with signs of meningitis-fever, headache, vomiting, changes in mental status. however, most patients have non-specific illness 1-3 days before onset of meningitis. cfr of untreated meningococcal meningitis can be 50%. cfr of properly treated meningococcal meningitis is <1%. 15-20% of meningococcal disease presents with septicemia unaccompanied by meningitis or other focal features. it is a dramatic illness which affects previously healthy children and young adults. it presents with acute fever leading to purpura fulminans (hemorrhagic or purpuric rash), shock, and waterhouse-friderichsen syndrome (acute adrenal failure). etiologic diagnosis can be easily missed. cfr of meningococcal septicemia is 50% and may be 25% even with proper treatment. 
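the epidemic threshold quoted above (10 suspected cases per 100,000 population in any given week) lends itself to a simple surveillance check. the sketch below is illustrative only; the district population and weekly counts are invented numbers used to show which weeks would trigger an outbreak investigation.

```python
def weekly_incidence_per_100k(cases: int, population: int) -> float:
    """Weekly suspected-case incidence per 100,000 population."""
    return cases / population * 100_000

def flag_epidemic_weeks(weekly_cases, population, threshold=10.0):
    """Return indices of weeks whose suspected-meningitis incidence meets or
    exceeds the epidemic threshold of 10 per 100,000 per week."""
    flagged = []
    for week, cases in enumerate(weekly_cases):
        if weekly_incidence_per_100k(cases, population) >= threshold:
            flagged.append(week)
    return flagged

# assumed example: a district of 250,000 people reporting 8, 14, 31, 52 suspected
# cases over four consecutive weeks crosses the threshold from the third week on
print(flag_epidemic_weeks([8, 14, 31, 52], 250_000))    # -> [2, 3]
```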
diagnosis may be confirmed by agglutination tests, polymerase chain reaction, culture and sensitivity testing of spinal fluid and blood. in many situations, these tests are not available. throat swabs may be helpful on occasions. do not delay treatment for tests or test results. minutes count. it is more important to have a live patient without a confirmed diagnosis than a dead one with a diagnosis. differential diagnosis in a tropical patient with fever and altered mental status, but without purpura or shock, includes cerebral malaria. co-infection may occur. standardized case management of bacterial meningitis in developed countries involves 7-10 days of parenteral antibiotic therapy. drug of choice in adults and older children is ceftriaxone which also rapidly eliminates the carrier state. alternate drugs include ampicillin and benzylpenicillin which do not eliminate the carrier state. in developing countries, 4 days of parenteral antibiotic therapy are empirically shown to be effective. in large epidemics in resource-poor settings, a single im dose of chloramphenicol in oil is the drug of choice. for patients who do not improve in 48 hours, a repeat dose may be given. viral meningitis is rarely serious and requires only supportive care, recovery is usually complete. patient isolation and disinfection of the room, clothing, or bedding are not necessary. respiratory precautions are advised particularly early in the course of treatment. chemoprophylaxis of contacts is available in some settings but rarely in the disaster setting. vigilance and education of close contacts is mandatory. epidemic preparedness and early detection of outbreaks are key. vaccines against n. meningitides serogroups a, c, y and w135 are very effective in controlling epidemics. in epidemic settings, children 2-10 are the priority target with serogroups a and c typically the priority antigens. rapid mass vaccination campaigns can contain outbreaks in 2-3 weeks. for immunocompetent patients over 2 years, vaccine efficacy rate is 90% one week after injection. however, duration of immunity may be as little as 2 years in younger children. in some countries, vaccine may also be used with close contacts of sporadic disease cases to prevent secondary cases. chemoprophylaxis of contacts is not recommended in epidemics, but community education and ready access to health care are essential. preventive medicine [1] source control/reduction/elimination undertake quarantine and culling of sick reservoir animals and known disease carrier species. avoid unnecessary contact with or consumption of dead reservoir animals or known disease carrier species. avoid unnecessary contact with suspected reservoir animals and known disease carrier species (eg primates). avoid direct or close contact with symptomatic patients. establish appropriate communicable disease controls for burial of the dead. administrative controls (improve people's work practices) environmental and engineering controls (isolate people from the hazard) avoid needle stick exposure to blood specimens thru automated machine handling. ppe (protect people with ppe) use standard precautions-gloves, masks, and protective clothing-if handling infected animals or patients. wash hands after visiting sick patients. 
active surveillance and contact tracing (enhanced surveillance) through community-based mobile teams active case finding (screening and triage) and contact tracing dedicated isolation facility food provision to isolated patients so they are not dependent on family case definition treatment protocols emphasizing supportive care and treatment of complications essential drugs referral guidelines secondary prevention barrier nursing strictly enforced family and community education ministerial task force to address policy local health authority task force to address procedures national level task forces to comprise if a lab is not available, then you need a sampling strategy that addresses specimen acquisition, preparation, and transportation in compliance with international regulations on the transport of infectious substances. guidance note on using the cluster approach to strengthen humanitarian response international conference on primary health care selective primary health care-an interim strategy for disease control in developing countries water and excreta-related diseases: unitary environmental classification infections related to water and excreta: the health dimension of the decade world health organization. cholera vaccines: who position paper available from: international centre for diarrhoeal disease research history and epidemiology of global smallpox eradication available from: us department of health and human services communicable disease control in emergencies-a field manual. geneva: world health organization ebola: technical guidance documents for medical staff world health organization. manual for the care and management of patients in ebola care units/community care centers-interim emergency guidance. who/ evd/manual/ecu/15.1. geneva: world health organization what tests does it perform? is there transport to and from the laboratory? who prepares transport media? who provides specimen collection material and supplies? how can these supplies be obtained? who provides cool packs, transport boxes, car, driver …? • refrigerate other vials for cytology, chemistry (4 °c) leak-proof specimen container wrapped with enough absorbent material to absorb the entire content of the 1st container 2. leak-proof secondary container usually plastic or metal 3. outer shipping container whose smallest dimension is 10 mm diagnostic specimens use iata packing instruction 650 without biohazard label. infectious materials use iata packing instruction 602 with biohazard label. what to send with the sample? lab request form with: • sender's name and contact info • patient name, age, sex • sample date, time • suspected clinical diagnosis with main signs and symptoms • sample macroscopic description • context-outbreak confirmation, ongoing verification, outbreak end, etc • epidemiological or demographic data where to send the sample? • reference lab • contact person what and when to expect results? source: world health organization world health organization department of communicable disease surveillance and response. highlights of specimen collection in emergency situations. undated 4 . designate a lead official in the lcc. 5. anticipate roles for partner agencies (eg inter-agency and team coordination, disease surveillance, field epidemiological investigation, laboratory identification, case management guideline development, outbreak logistics, public information, and social mobilization). 6. identify sources of funds. 7. intensify disease surveillance. 8 . 
identify reference lab(s) for communicable diseases of epidemic potential. 9. ensure mechanism for specimen transport. a. initial response to suspected outbreak 1. form an emergency team to investigate and manage the outbreak a. identify key roles on the outbreak investigation team(s) (1) epidemiology and surveillance (2) case management (3) water and sanitation (4) laboratory services (5) communication b. staff those roles (1) epidemiologist-to monitor proper data collection and surveillance procedures (2) physician-to confirm clinical s/sx and train health workers in case management (3) water and sanitation expert-to develop a plan for reducing sources of contamination (4) microbiologist-to take environmental/biological samples for laboratory confirmation, train health workers in proper sampling techniques, and confirm use of appropriate methods in the diagnostic laboratory (5) key: cord-119626-qb6fea06 authors: cruz-aponte, mayte'e; caraballo-cueto, jos'e title: balancing fiscal and mortality impact of sars-cov-2 mitigation measurements date: 2020-06-02 journal: nan doi: nan sha: doc_id: 119626 cord_uid: qb6fea06 an epidemic carries human and fiscal costs. in the case of imported pandemics, the first-best solution is to restrict national borders to identify and isolate infected individuals. however, when that opportunity is not fully seized and there is no preventative intervention available, second-best options must be chosen. in this article we develop a system of differential equations that simulate both the fiscal and human costs associated to different mitigation measurements. after simulating several scenarios, we conclude that herd immunity (or unleashing the pandemic) is the worst policy in terms of both human and fiscal cost. we found that the second-best policy would be a strict policy (e.g. physical distancing with massive testing) established under the first 20 days after the pandemic, that lowers the probability of infection by 80%. in the case of the us, this strict policy would save more than 239 thousands lives and almost $170.8 billion to taxpayers when compared to the herd immunity case. during the covid-19 pandemic, many policymakers are usually facing two separated sources of information: economic models that usually predict an economic collapse [15] and epidemic models that focus on death counts [12] . however, both the economic and mortality figures are key policy variables during a pandemic but few articles integrate both approaches [9, 16] . in particular, no research (to our knowledge) has analyzed both the fiscal and mortality impact of different mitigation measurements. in this article we strive to fill that gap by approximating the impact of physical distancing and patient care on the death toll and government budget, in a attempt to find the optimal conditions to balance it all. vaccination or therapeutics can eradicate epidemics from the population, like the case of smallpox [2, 3] but when a newly discovered virus hits the population, the entire world is at risk because everyone is susceptible as in the case of the novel sars-cov-2 that is impacting us in 2020 [13] . in the case of an imported infection (i.e. not an endemic epidemic), the first-best strategy would be to control borders, identify, treat and isolate infected individuals. this occurred in the u.s. with the ebola virus, which never became an epidemic [7] . 
but when a virus is already circulating in a territory and there is no antidote or massive testing and contact tracing available, social or physical distancing is an alternative to mitigate a pandemic and provide the scientific community time to research and find alternative measures such as an effective treatment or a vaccine. physical distancing also gives fragile healthcare systems the leverage to take care of chronically ill patients without saturating existing capacity. what are the fiscal and human costs of all these measures in the short and long run? thus, two research questions drive this study: what are the optimal physical distancing policies in a country, and what are the implications of these policies for both the government budget and loss of life? we constructed an enhanced mathematical sir (susceptible, infected, recovered) epidemic model [5] to simulate the covid-19 epidemic in the us in an attempt to estimate the fiscal impact and the optimal conditions to mitigate this ongoing pandemic. we found that a policy of no physical distancing, or a race towards herd immunity, is not the optimal policy choice when both human and fiscal costs are considered. in section 2 we lay out our methodology. in section 3 we show the dynamics associated with our calibrated system of differential equations. in section 4 we discuss our results, and in section 5 we conclude and recommend public policies. we first describe a simple economy with three sectors: businesses, government, and a household sector with two actors. in the second part of this section we describe our epidemic model. in this economy, the household sector is mobile within the country and is composed of l workers and u individuals who are not working; thus, employment is less than full. this characterization allows us to consider the supply shocks associated with the covid-19 pandemic [10], where laborers are impeded from working fully because of lock-downs or infections affecting members of the household sector. firms produce goods and services i, which require x amount of l. a fixed amount of total output y is predetermined to be produced in period t=0 and is given by y = Σ_i x_i * l_i. however, firms are able to adjust their output when external changes hit the labor stock. the total output that takes such external changes into account is y_t = y * h_t, where h_t = dl/dt. we hold the following assumptions over h: • if physical distancing is implemented at t=1, h_t = −0.3 during the physical distancing. when the physical distancing ends at t=n, h_{t=n} = 0.1; this setting lets us capture the v-shaped growth that is being projected [11] for the post-covid-19 period. • if no physical distancing is implemented, the pandemic ends at t=n+j, h_{t=n+j+1} = 0, and h_t