authors: wang, changsong; kang, kai; gao, yan; ye, ming; lan, xiuwen; li, xueting; zhao, mingyan; yu, kaijiang
title: cytokine levels in the body fluids of a patient with covid-19 and acute respiratory distress syndrome: a case report
date: 2020-05-12 journal: ann intern med doi: 10.7326/l20-0354

we measured cytokine levels in this patient's body fluids during his illness to see whether they could help us decide how to modify his treatment as the disease progressed. we found high and fluctuating levels of these cytokines in his peripheral blood, bronchoalveolar lavage fluid, and pleural fluid. however, these levels correlated only inconsistently with the treatments we administered, even for plasmapheresis, which was intended to dilute circulating cytokines, and for a dialysis filter that was designed to adsorb cytokines (figure). in addition, these cytokine levels correlated inconsistently with his clinical course, except that the levels increased dramatically in the last days before he died. we suspect that this patient's immune system was partially suppressed because of his advanced age and multiple chronic conditions, which might have contributed to the virus's continued replication and the disease's progression. in addition, the time from symptom onset to confirmation of the covid-19 diagnosis was relatively long, and the patient's hospital course was also long; we wonder whether this long duration of viral replication contributed to the high cytokine levels we measured. other studies have reported that patients with covid-19 have evidence of local damage, including diffuse alveolar injury with cellular fibrous mucus-like exudates (2). we measured il-6 levels in bronchoalveolar lavage fluid that were higher than the corresponding serum levels. on one occasion (7 march), the il-6 level was approximately 10 times higher. this difference is even more remarkable because the process of collecting bronchoalveolar lavage fluid dilutes the specimen.
in addition, the level of il-6 in pleural effusion was higher than the corresponding serum level on the 2 occasions we measured it. if these observations indicate a cytokine storm, we propose that the local storm may be worse than the systemic storm. interleukin-6 blockers have been used to treat cytokine storm in patients with other causes of cytokine storm (3), and tocilizumab has been suggested as immunotherapy for severely ill patients with extensive lung lesions and elevated il-6 levels (3). as a result, we wonder whether tocilizumab would have affected the il-6 levels we observed and whether it might have improved this patient's disease course, especially because others have reported that as covid-19 progresses to its middle and late stages, the expression of inflammatory cytokines is related to the severity of the disease (4). on the basis of our experience, we encourage additional research to determine whether inflammatory cytokines in the lungs predict the clinical course of covid-19 and whether these cytokines should be a target for intervention and treatment. in summary, this case suggests an increased inflammatory response in the lung tissues of critically ill patients with covid-19, and it suggests that future research should include examinations of local inflammation in the lungs.

references
1. epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in wuhan, china: a descriptive study
2. pathological findings of covid-19 associated with acute respiratory distress syndrome
3. toxicity management for patients receiving novel t-cell engaging therapies
4. clinical features of patients infected with 2019 novel coronavirus in wuhan

authors: benameur, karima; agarwal, ankita; auld, sara c.; butters, matthew p.; webster, andrew s.; ozturk, tugba; howell, j. christina; bassit, leda c.; velasquez, alvaro; schinazi, raymond f.; mullins, mark e.; hu, william t.
title: encephalopathy and encephalitis associated with cerebrospinal fluid cytokine alterations and coronavirus disease, atlanta, georgia, usa, 2020
date: 2020-09-17 journal: emerg infect dis doi: 10.3201/eid2609.202122

there are few detailed investigations of neurologic complications in severe acute respiratory syndrome coronavirus 2 infection. we describe 3 patients with laboratory-confirmed coronavirus disease who had encephalopathy and encephalitis develop. neuroimaging showed nonenhancing unilateral, bilateral, and midline changes not readily attributable to vascular causes. all 3 patients had increased cerebrospinal fluid (csf) levels of anti-s1 igm. one patient who died also had increased levels of anti-envelope protein igm. csf analysis also showed markedly increased levels of interleukin (il)-6, il-8, and il-10, but severe acute respiratory syndrome coronavirus 2 was not identified in any csf sample. these changes provide evidence of csf periinfectious/postinfectious inflammatory changes during coronavirus disease with neurologic complications.
elisa with 90% sensitivity and 89% specificity for confirmed covid-19 against 78 pre-2020 controls. csf was serially diluted from 1:2 to 1:16, and csf from 1 case-patient who had hiv infection (hospitalized during march 2020) and from 3 pre-2020 healthy subjects (9) was included for comparison. we measured levels of plasma igg against the receptor-binding domain of s1 by using a commercial elisa (genscript, https://www.genscript.com) at a 1:16 dilution. we analyzed csf inflammatory proteins (milliporesigma, https://www.emdmillipore.com) by using a luminex-200 platform and a modified manufacturer's protocol as described (9). these proteins include interleukin (il)-1α, il-1β, il-2, il-4, il-6, il-7, il-8, il-9, il-10, il-12p40, il-12p70, interferon-gamma-induced protein 10 (ip-10), monocyte chemoattractant protein 1 (mcp-1/ccl2), macrophage-derived chemokine (mdc/ccl22), fractalkine (cx3cl1), and tumor necrosis factor α (tnf-α). we performed molecular testing for sars-cov-2 by using real-time quantitative reverse transcription pcr (qrt-pcr). we extracted total nucleic acid from 120 µl of csf from each person by using the ez1 virus mini kit version 2.0 and the ez1 advanced xl instrument (qiagen, https://www.qiagen.com) after lysis with avl lysis buffer (qiagen). we performed a 1-step qrt-pcr by using 2019-ncov_n1 or 2019-ncov_n2 combined primer/probe mix (integrated dna technologies, inc., https://www.idtdna.com) in a roche lightcycler 480 ii (https://lifescience.roche.com), an endogenous control, and an in vitro transcribed full-length rna of known titer (integrated dna technologies, inc.) as a positive control. we followed the same procedure for influenza a virus except using a primer/probe mixture (10) and a mitochondrial cytochrome oxidase subunit 2 dna endogenous control (11). we tested all samples in duplicate.
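the serial-dilution elisa above reports antibody levels as endpoint titers (csf diluted 1:2 to 1:16). as a minimal sketch of how an endpoint titer is read off a dilution series: the od readings and cutoff below are hypothetical values for illustration, not data from this study.

```python
# hypothetical endpoint-titer determination for a serial-dilution elisa.
# a sample's titer is the highest dilution whose optical density (od)
# still exceeds a positivity cutoff (often derived from negative controls).

def endpoint_titer(od_by_dilution, cutoff):
    """return the largest dilution factor with od above cutoff, or None."""
    positive = [d for d, od in od_by_dilution.items() if od > cutoff]
    return max(positive) if positive else None

# hypothetical readings for csf diluted 1:2 through 1:16
readings = {2: 1.90, 4: 1.20, 8: 0.60, 16: 0.20}
cutoff = 0.35  # e.g., mean od of pre-2020 control csf + 3 sd (assumed)

titer = endpoint_titer(readings, cutoff)
print(f"endpoint titer: 1:{titer}")  # 1:8 for these hypothetical readings
```

a sample below the cutoff at every dilution has no measurable titer, which is how "standard" (non-elevated) igm levels would be reported.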
patient 1, a 31-year-old african-american woman who had sickle cell disease (scd) and was receiving dabigatran for a recent pulmonary embolus, came to a community hospital after 5 days of progressive dyspnea. an initial chest radiograph showed a right lower lobe infiltrate, and she was given a blood transfusion and antimicrobial drugs for presumed scd crisis and pneumonia. her breathing became more labored, and a repeat chest radiograph showed worsening bilateral infiltrates. a nasopharyngeal swab specimen was positive for sars-cov-2 and influenza a virus (negative for influenza b virus). she was empirically given hydroxychloroquine (400 mg daily) and peramivir (100 mg daily), but acute kidney injury and progressive hypoxemic respiratory failure developed. she was intubated and transferred to our institution on day 11. her paralysis and sedation were discontinued on day 13 after improved oxygenation, but she remained comatose with absent brainstem reflexes on day 15. brain magnetic resonance imaging (mri) showed nonenhancing cerebral edema and diffusion weighted imaging abnormalities predominantly involving the right cerebral hemisphere, as well as brain herniation ( figure 1 ). an occlusive thrombus was identified in the right internal carotid artery, and edema was also identified in the cervical spinal cord. the overall appearance was most consistent with encephalitis and myelitis, with superimposed hypoxic ischemic changes. csf showed high opening pressure of 30 cm of water, 115 nucleated cells/ml, 7,374 erythrocytes/ml, an increased protein level (>200 mg/dl), and a glucose level within a standard range (table) . her nucleated cell count remained strongly increased even after correction for the traumatic tap (≈1 nucleated cell/700 erythrocytes). given a grave prognosis, the family withdrew life-sustaining care and the patient died on day 16. 
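the traumatic-tap correction above (about 1 nucleated cell allowed per 700 erythrocytes introduced by blood contamination) can be checked with simple arithmetic; this sketch applies the rule to the counts reported for patient 1.

```python
# correcting a csf nucleated cell count for a traumatic tap:
# blood contamination contributes roughly 1 nucleated cell per 700
# erythrocytes, so that predicted contribution is subtracted from the
# observed count before calling the csf pleocytic.

def corrected_nucleated_count(nucleated, erythrocytes, ratio=700):
    predicted_from_blood = erythrocytes / ratio
    return nucleated - predicted_from_blood

# counts reported for patient 1: 115 nucleated cells, 7,374 erythrocytes
remaining = corrected_nucleated_count(115, 7374)
print(round(remaining, 1))  # ~104.5 cells remain after correction
```

only about 10 of the 115 nucleated cells are attributable to blood contamination here, which is why the count "remained strongly increased even after correction."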
patient 2, a 34-year-old african-american man who had hypertension, showed development of fever, shortness of breath, and cough. computed tomography of the chest showed bilateral, diffuse ground glass infiltrates. a nasopharyngeal swab specimen obtained on day 1 showed sars-cov-2. he was given a 6-day course of hydroxychloroquine, but hypoxic respiratory failure developed, which required intubation, followed by encephalopathy with myoclonus on day 9. his neurologic examination showed profound encephalopathy, absent corneal and gag reflexes, multifocal myoclonus involving both arms, and absent withdrawal to painful stimuli. electroencephalography showed diffuse slowing with a suggestion that the myoclonus was seizure-related. brain mri on day 15 showed a nonenhancing hyperintense lesion within the splenium of the corpus callosum on fluid-attenuated inversion recovery and diffusion-weighted imaging sequences (figure 1). csf showed a high opening pressure of 48 cm h2o, no pleocytosis, 27 erythrocytes/ml, a mildly increased protein level, and a glucose level within the reference range. patient 3, a 64-year-old african-american man who had hypertension, showed development of cough, dyspnea, and fever with multifocal, patchy, ground glass opacities on chest computed tomography and a nasopharyngeal swab specimen positive for sars-cov-2. his symptoms progressed to hypoxic respiratory failure requiring intubation, and his multifocal myoclonus began soon after he started taking hydroxychloroquine. his neurologic examination showed profound encephalopathy, absent oculocephalic reflex, multifocal myoclonus affecting bilateral arms and legs, absent withdrawal to pain, and diminished deep tendon reflexes. the resolution of his myoclonus coincided with fentanyl cessation, but it is not clear that the 2 events were related. a motion-degraded brain mri showed an equivocal nonenhancing area of fluid-attenuated inversion recovery abnormality in the right temporal lobe.
csf obtained on hospital day 11 showed a normal opening pressure; levels of nucleated cells, erythrocytes, and protein within reference ranges; and an increased glucose level (table). his mentation began to improve on day 13, and he was subsequently discharged without major neurologic sequelae. plasma anti-s1 receptor-binding domain igg levels were increased for all 3 patients, consistent with severe covid-19 (t. ozturk et al., unpub. data). an indirect elisa for plasma showed an increased level of anti-s1 igm for patients 1 (1:512) and 2 (1:256), a highly increased level of anti-s1 igm for patient 3 (1:2,048), an increased level of anti-e igm for patients 1 and 2 (1:128), and a standard level of anti-e igm for patient 3. an indirect elisa for csf showed markedly increased levels of igm for sars-cov-2 s1 (figure 2, panel a) and e (figure 2, panel b) proteins for the most severely ill patient 1, and mildly elevated levels of igm for s1 only for patients 2 and 3. the number of csf erythrocytes in patient 1 suggested plasma contamination at an approximate dilution of 1:1,000, which still placed these csf igm levels higher than those for patients 2 and 3. csf from patients 1 and 3 underwent detailed inflammatory protein profiling as described (9, 12, 13). when we compared them with historical and present control subjects who had normal cognition (no viral illness) (13), we found that patients with covid-19 and neurologic symptoms had increased csf levels of il-6, il-8, il-10, ip-10, and tnf-α (figure 2, panel c). levels of il-8, il-10, ip-10, and tnf-α were also available for subjects who had hiv-associated neurocognitive disorders (12). increased levels of il-8 and il-10 appeared to be unique to neurologic complications of sars-cov-2, whereas increased levels of ip-10 and tnf-α were common to neurologic complications of both sars-cov-2 and hiv.
we used a real-time rt-pcr to test for sars-cov-2 and influenza a virus (tested because patient 1 showed a co-infection). results were negative for all patients. we report 3 patients who had severe covid-19 and showed development of various neurologic symptoms and findings in a us hospital. all patients had more severe symptoms affecting cortical and brainstem functions at the peak of their neurologic illnesses than a recent series of 7 case-patients with milder illness in france (6) . all 3 patients were also co-incidentally given a short course of empiric hydroxychloroquine, although there was no temporal correlation between the medication and their neurologic manifestation. similar to the case-series in france, we did not isolate sars-cov-2 rna from csf, although such viral rna has been inconsistently identified in other cases (14) . however, increased levels of csf anti-s1 igm and altered levels of csf cytokines are consistent with direct cns involvement by sars-cov-2. because mri changes seen in these patients could be caused by hypercoagulability (15) or metabolic encephalopathy (16) , we propose that csf investigation can improve the distinction between neurologic involvement of sars-cov-2 (or neuro-covid) and neurologic symptoms caused by other covid-related causes. in health and many noninflammatory neurologic disorders, the intact blood-brain barrier prevents major central translocation by plasma immunoglobulins or cells that secrete them (17) . increased levels of csf antibodies can thus result from disrupted blood-brain barrier, regulated migration of peripheral antibodysecreting cells into the cns, or de novo antibody synthesis within the cns. the relatively normal protein levels in patients 2 and 3 would argue against an unequivocal blood-brain barrier disruption. the lack of clear correlation between plasma and csf titers provides some support for an active cns process. 
the failure to detect sars-cov-2 rna in csf does not diminish the likelihood of direct cns infection because the virus is recovered from blood in only 1% of actively infected cases (18), and increased levels of csf igm are also more commonly found as evidence of cns infection than viral recovery in other encephalitides, including those caused by japanese encephalitis virus (19), dengue virus (20), human parvovirus 4 (21), and rabies virus (22). at the same time, undetectable csf rna raises the possibility that mechanisms other than direct brain infection might account for the observed mri and clinical changes. these mechanisms include peri-infectious inflammation (mediated by antibodies, complement, or both) (5,23), vasculopathy, and altered neurotransmission. until definitive neuropathologic studies or effective antiviral therapies are possible, both infectious and peri-infectious etiologies need to be examined for neuro-covid. increased levels of multiple csf cytokines in these neuro-covid patients are consistent with earlier reports of cytokine analysis of blood (24; m. woodruff et al., unpub. data). we additionally identified changes shared (and not shared) by sars-cov-2 and hiv. factors associated with increased levels of csf il-10 in patients infected with hiv should be investigated in future neuro-covid studies, and increased levels of csf il-8 might uniquely provide useful information on the pathophysiology of cns involvement. we did not include plasma cytokine levels because they are much more influenced by demographic factors than their csf counterparts (w.t. hu et al., unpub. data). a larger cohort is necessary to better distinguish between csf and plasma cytokine alterations, and including patients without confounding disease (e.g., scd in patient 1) or with standard mri results can also help determine the relative roles of noninfectious/inflammatory causes of encephalopathy, including hypoxia or hypercoagulability (25, 26).
nevertheless, we demonstrated in these case-patients that sars-cov-2 antibodies are detectable in the csf of patients with neurologic complications and are associated with selective csf cytokine alterations. future investigations should align neurologic outcomes with csf infectious and immunologic profiles, such that an evidence-based treatment algorithm can be determined for preventing and treating neuro-covid-19. dr. benameur is a neurologist and associate professor in the department of neurology at emory university school of medicine, atlanta, ga. her primary research interest is in neuroinflammatory changes related to covid-19.

references
1. asymptomatic patients with novel coronavirus disease (covid-19)
2. clinical characteristics of 113 deceased patients with coronavirus disease 2019: retrospective study
3. possible central nervous system infection by sars coronavirus
4. severe neurologic syndrome associated with middle east respiratory syndrome coronavirus (mers-cov)
5. nervous system involvement after infection with covid-19 and other coronaviruses
6. neurologic features in severe sars-cov-2 infection
7. evidence of the covid-19 virus targeting the cns: tissue distribution, host-virus interaction, and proposed neurotropic mechanisms
8. severe acute respiratory syndrome coronavirus infection causes neuronal death in the absence of encephalitis in mice transgenic for human ace2
9. interleukin 9 alterations linked to alzheimer disease in african americans
10. epidemiology of hospital admissions with influenza during the 2013/2014 northern hemisphere influenza season: results from the global influenza hospital surveillance network
11. antiviral activities and cellular toxicities of modified 2′,3′-dideoxy-2′,3′-didehydrocytidine analogues
12. linked csf reduction of phosphorylated tau and il-8 in hiv-associated neurocognitive disorder
13. csf cytokines in aging, multiple sclerosis, and dementia
14. a first case of meningitis/encephalitis associated with sars-coronavirus-2
15. hematological findings and complications of
covid-19
16. toxic and acquired metabolic encephalopathies: mri appearance
17. immune regulation of antibody access to neuronal tissues
18. detection of sars-cov-2 in different types of clinical specimens
19. how many patients with anti-jev igm in cerebrospinal fluid really have japanese encephalitis?
20. importance of cerebrospinal fluid investigation during dengue infection in brazilian amazonia region
21. detection of human parvovirus 4 dna in patients with acute encephalitis syndrome during seasonal outbreaks of the disease in gorakhpur
22. long-term follow-up after treatment of rabies by induction of coma
23. human coronaviruses and other respiratory viruses: underestimated opportunistic pathogens of the central nervous system?
24. clinical features of patients infected with 2019 novel coronavirus in wuhan
25. difference of coagulation features between severe pneumonia induced by sars-cov2 and non-sars-cov2
26. vaso-occlusive crisis and acute chest syndrome in sickle cell disease due to 2019 novel coronavirus disease (covid-19)

authors: science, michelle; maguire, jonathon l.; russell, margaret l.; smieja, marek; walter, stephen d.; loeb, mark
title: low serum 25-hydroxyvitamin d level and risk of upper respiratory tract infection in children and adolescents
date: 2013-05-15 journal: clinical infectious diseases doi: 10.1093/cid/cit289

background. vitamin d may be important for immune function. studies to date have shown an inconsistent association between vitamin d and infection with respiratory viruses. the purpose of this study was to determine if serum 25-hydroxyvitamin d (25(oh)d) was associated with laboratory-confirmed viral respiratory tract infections (rtis) in children. methods. serum 25(oh)d levels were measured at baseline and children from canadian hutterite communities were followed prospectively during the respiratory virus season.
nasopharyngeal specimens were obtained if symptoms developed, and infections were confirmed using polymerase chain reaction. the association between serum 25(oh)d and time to laboratory-confirmed viral rti was evaluated using a cox proportional hazards model. results. seven hundred forty-three children aged 3–15 years were followed between 22 december 2008 and 23 june 2009. the median serum 25(oh)d level was 62.0 nmol/l (interquartile range, 51.0–74.0). a total of 229 participants (31%) developed at least 1 laboratory-confirmed viral rti. younger age and lower serum 25(oh)d levels were associated with increased risk of viral rti. serum 25(oh)d levels <75 nmol/l increased the risk of viral rti by 50% (hazard ratio [hr], 1.51; 95% confidence interval [ci], 1.10–2.07, p = .011) and levels <50 nmol/l increased the risk by 70% (hr, 1.67; 95% ci, 1.16–2.40, p = .006). conclusions. lower serum 25(oh)d levels were associated with increased risk of laboratory-confirmed viral rti in children from canadian hutterite communities. interventional studies evaluating the role of vitamin d supplementation to reduce the burden of viral rtis are warranted. viral upper respiratory tract infections (rtis) are very common worldwide. despite their perceived benign nature, the burden of disease is significant in terms of morbidity and economic loss [1]. viral respiratory infections may also be associated with mortality in certain patient populations [2, 3]. unfortunately, treatment remains unsatisfactory and is often focused on symptom relief. as a result, prevention remains a key strategy in reducing the burden of these infections. there has been increasing interest in the role of vitamin d in respiratory infections. vitamin d has been shown to have important roles in both innate [4-6] and adaptive [7-9] immune responses. in particular, vitamin d has been linked to innate immune responses in lung epithelial cells [10, 11].
in addition, antimicrobial peptides induced by vitamin d may have antiviral effects, with activity shown against herpes simplex virus type 1 [12], adenovirus [13], human immunodeficiency virus (hiv) [14], and vaccinia virus [15]. several observational studies have evaluated the role of serum 25-hydroxyvitamin d (25(oh)d) concentration in respiratory tract infections [16-25]. however, all of these studies have important limitations, including a short follow-up period [16], small sample size [16-20, 24, 25], case-control design [18-20], cross-sectional design with retrospective ascertainment of symptoms [21, 22], or failure to obtain laboratory confirmation of self-reported illness [16, 21-23]. the studies in adults have shown an association between lower vitamin d levels and increased respiratory infections (self-reported) [21, 22, 24] and absence from work due to respiratory symptoms [23]. however, pediatric studies have focused predominantly on lower rtis (chest radiography-confirmed pneumonia or bronchiolitis) [17-20, 25]. the relationship between serum 25(oh)d concentration and upper rtis in children has not been evaluated. the purpose of this study was to determine if serum 25(oh)d levels are associated with subsequent risk of laboratory-confirmed viral upper rtis in children and adolescents. we conducted a prospective cohort study of children and adolescents participating in a cluster randomized controlled trial evaluating the effect of influenza vaccination of children on viral infection rates in hutterite communities, the results of which have been published elsewhere [26]. hutterite communities (colonies) are rural, self-governing, and self-sufficient communal-living groups of anabaptists.
in the rct, children with no underlying chronic medical conditions between 3 and 15 years of age (n = 947) from 46 colonies were randomly assigned by colony to receive either inactivated seasonal influenza vaccine or hepatitis a vaccine (control). children with underlying conditions (n = 65) and other children (n = 174) were followed but not randomized. all participants were followed regularly with twice-weekly assessments by research nurses over the influenza season, defined by the start date (1 laboratory-confirmed influenza case for 2 consecutive weeks) and stop date (no laboratory-confirmed influenza cases for 2 consecutive weeks). this period was from 28 december 2008 to 23 june 2009. covariables of interest were age, sex, presence of asthma, presence of other underlying conditions, influenza vaccination, and 25(oh)d level (nmol/l). serum 25(oh)d levels were collected from children at baseline unless bloodwork was refused. serum from venous blood was frozen at −80°c until batched analysis was performed according to the manufacturer's instructions using the diasorin liaison chemiluminescence assay. serum 25(oh)d levels were taken between 16 october 2008 and 28 april 2009, with most taken between 16 october 2008 and 31 december 2008 (n = 724, 97%). the primary outcome was laboratory-confirmed viral infection, defined by a positive nasopharyngeal swab polymerase chain reaction (pcr) result. copan flocked nasopharyngeal swabs (copan italia, brescia, italy) were collected in universal transport medium (copan italia) if 2 or more of the following symptoms were present: fever (≥38°c), cough, nasal congestion, sore throat, headache, sinus problems, muscle aches, fatigue, ear ache or infection, or chills. specimens were first tested for influenza a (including ph1n1) and b using the centers for disease control and prevention human influenza virus real-time reverse transcription pcr detection and characterization panel [27].
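the swab-collection rule above (2 or more symptoms from a fixed list) is straightforward to encode. a minimal sketch, with symptom labels chosen here for illustration rather than taken from the study's data dictionary:

```python
# collection rule from the study protocol: obtain a nasopharyngeal swab
# when 2 or more qualifying symptoms are present. the labels below are
# illustrative shorthand for the listed criteria.

QUALIFYING = {
    "fever", "cough", "nasal_congestion", "sore_throat", "headache",
    "sinus_problems", "muscle_aches", "fatigue", "ear_ache", "chills",
}

def swab_indicated(symptoms):
    """true when at least 2 qualifying symptoms are reported."""
    return len(QUALIFYING & set(symptoms)) >= 2

print(swab_indicated({"fever", "cough"}))      # True
print(swab_indicated({"cough", "toothache"}))  # False: only 1 qualifies
```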
negative specimens were then tested for influenza (a, b), coronavirus (229e, nl63, oc43), enterovirus (including rhinovirus), parainfluenza (1-4), respiratory syncytial virus (rsv) (a, b), and human metapneumovirus by xtag respiratory virus panel multiplex pcr (luminex, austin, texas). survival analysis was used to assess the relationship between time to laboratory-confirmed viral respiratory infection and serum 25(oh)d level. univariable analyses were conducted to obtain unadjusted hazard ratios (hrs), and significance was determined using the log-rank test. vitamin d levels were analyzed both as a continuous variable (log-transformed to correct positive skew) and dichotomized based on both the american academy of pediatrics (aap [≥50 nmol/l]) [28] and canadian paediatric society (cps [≥75 nmol/l]) [29] recommendations. age was also analyzed both as a continuous variable and categorized into 3 groups (<5 years, 5-9 years, 10-15 years). a cox proportional hazards model was used to estimate adjusted hazard ratios (ahrs), and the model was adjusted for clustering at the colony level. variables with a p value <.1 were considered for inclusion in the multivariable model, and the final model was determined using a stepwise backwards elimination method. it was decided a priori to adjust the final model for age and sex. the proportional hazards assumption was evaluated using the schoenfeld residual test, graphically using schoenfeld residuals and log-log curves, and by examination of variables for time dependence. overall model fit was assessed using cox-snell residuals and deviance residual plots. a recurrent events analysis was conducted to examine the relationship between 25(oh)d and the rate of occurrence of respiratory tract infection, adjusted for the biologically plausible covariables specified above. a counting process model was used to treat recurrent events as identical. the model was adjusted for clustering at the colony level.
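to make the cox model concrete, here is a minimal single-covariate cox partial-likelihood fit written from scratch on toy data; it omits the clustering adjustment, additional covariates, and tie handling that the authors' stata analysis used.

```python
import math

# newton-raphson fit of a one-covariate cox proportional hazards model
# (breslow risk sets, distinct event times, no ties). x is a binary
# exposure, e.g. an indicator for serum 25(oh)d < 50 nmol/l. toy data only.

def fit_cox(data, iters=25):
    """data: list of (time, event, x). returns the log hazard ratio."""
    beta = 0.0
    for _ in range(iters):
        grad = hess = 0.0
        for t, event, x in data:
            if not event:
                continue
            # risk set: everyone still under observation at time t
            risk = [xj for tj, _, xj in data if tj >= t]
            s0 = sum(math.exp(beta * xj) for xj in risk)
            s1 = sum(xj * math.exp(beta * xj) for xj in risk)
            s2 = sum(xj * xj * math.exp(beta * xj) for xj in risk)
            grad += x - s1 / s0
            hess -= s2 / s0 - (s1 / s0) ** 2
        beta -= grad / hess  # newton step (hess < 0: concave likelihood)
    return beta

# toy cohort: exposed subjects (x=1) tend to fail earlier
toy = [(1, 1, 1), (2, 1, 1), (3, 1, 1), (4, 1, 1), (9, 0, 1),
       (6, 1, 0), (8, 1, 0), (9, 0, 0), (9, 0, 0), (9, 0, 0)]
hr = math.exp(fit_cox(toy))
print(f"estimated hazard ratio: {hr:.2f}")  # > 1: exposure raises hazard
```

in practice one would use a packaged implementation (e.g., stata's stcox as the authors did, or lifelines' CoxPHFitter in python); the sketch is only meant to show what the partial likelihood behind the reported hazard ratios is doing.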
all estimates are presented with 95% confidence intervals (cis). p values <.05 were considered statistically significant. all analyses were conducted using stata statistical software release 12 (statacorp, college station, texas). baseline characteristics are summarized in table 1. of 1186 children (aged ≤15 years) in the rct, 947 were randomized to influenza or hepatitis a vaccine, and 743 (63%) children from 43 colonies had serum 25(oh)d levels measured and were included in the study. there were no appreciable differences between participants with serum 25(oh)d levels and those without. the mean age was 9 years. in univariable analyses, age and serum 25(oh)d level were associated with rti (table 2). in multivariable analysis accounting for clustering at the colony level, serum 25(oh)d level was independently associated with viral rti (table 2). for every 1-unit increase in log serum 25(oh)d level (corresponding to a 2.72-fold increase in serum 25(oh)d level), the hazard of developing a respiratory tract infection decreased by approximately 50% (ahr, 0.52; 95% ci, .35-.79, p = .002). when levels were dichotomized based on aap and cps recommendations, levels <50 nmol/l (ahr, 1.67; 95% ci, 1.16-2.40, p = .006) and <75 nmol/l (ahr, 1.51; 95% ci, 1.10-2.07, p = .011) were both associated with increased risk of infection (figures 1 and 2). age was also independently associated with viral rti in the multivariable analysis. children aged <5 years were at highest risk of rti compared with those aged 5-9 years (ahr, 1.92; 95% ci, 1.26-2.88, p = .002) and 10-15 years (ahr, 2.40; 95% ci, 1.48-3.87, p < .001). there were no interactions between serum 25(oh)d level and either age or sex. when multiple viral rti events were taken into account, age and log serum 25(oh)d levels were associated with the rate of occurrence of respiratory tract infection (table 3).
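the log-scale hazard ratio above can be rescaled to other fold-changes in 25(oh)d. a sketch of the arithmetic, using the reported ahr of 0.52 per 1-unit increase in log 25(oh)d:

```python
import math

# a 1-unit increase in log(25(oh)d) is an e-fold (~2.72x) increase in the
# level itself, and the reported adjusted hr for that increase is 0.52.
# for a k-fold increase, the hr scales as hr_per_log_unit ** ln(k).

HR_PER_LOG_UNIT = 0.52  # reported adjusted hazard ratio

def hr_for_fold_change(k):
    return HR_PER_LOG_UNIT ** math.log(k)

print(round(math.e, 2))                      # 2.72: the fold-change for 1 log unit
print(round(hr_for_fold_change(math.e), 2))  # 0.52 by construction
print(round(hr_for_fold_change(2), 2))       # 0.64: a doubling cuts the hazard ~36%
```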
serum 25(oh)d level was an important predictor both as a continuous variable (ahr, 0.57; 95% ci, .39-.83, p = .003) and when dichotomized based on levels <50 nmol/l (ahr, 1.49; 95% ci, 1.11-2.02, p = .008) and <75 nmol/l (ahr, 1.50; 95% ci, 1.11-2.03, p = .009). younger age conferred an increased risk of viral rti, with a hazard ratio of 1.45 for every 5-year decrease in age (ahr, 1.45; 95% ci, 1.09-1.94, p = .011). we found a statistically significant association between serum 25(oh)d and laboratory-confirmed viral rtis in children from hutterite communities in canada. lower serum 25(oh)d levels were associated with increased risk of rti after adjusting for age and sex. younger age was also associated with increased risk. this study provides novel information on the effects of vitamin d status on laboratory-confirmed viral upper rtis in children and adolescents. previous studies have shown that children with rickets are at increased risk of lower respiratory tract disease or pneumonia [30-32]. more recently, several observational studies have shown an association between low serum 25(oh)d levels and risk of lower respiratory tract infection in children in india [17], bangladesh [20], and turkey [18]. our study extends these results, suggesting that vitamin d status may also be important for susceptibility to viral upper rtis in children and adolescents, a finding consistent with the adult literature [21-24]. if confirmed with intervention trials, our findings may have important public health implications given the frequency of viral upper rtis, their associated morbidity, and the prevalence of low vitamin d levels. we found that serum 25(oh)d level was an important predictor of rti both as a continuous variable (per 1-unit change in log 25(oh)d) and when dichotomized into levels <50 nmol/l (aap recommendation) and <75 nmol/l (cps recommendation).
therefore, although maintaining levels >50 nmol/l is important to prevent rickets [33], higher levels may be needed for prevention of viral rti. this was suggested in a recent study involving healthy adults, where a serum 25(oh)d concentration of >38 ng/ml (approximately 95 nmol/l) was associated with a reduced incidence of acute viral respiratory tract infection [24]. further studies in children are needed to determine the optimal level required for immune function. other canadian studies have failed to show an association between vitamin d status and risk for hospitalization for acute lower respiratory infection in children <5 years of age [19, 25]. although differences in vitamin d levels were not found in these studies, a higher proportion of participants with acute lower respiratory infection were found to have vitamin d receptor polymorphisms associated with reduced receptor expression in one study [34]. we did find an association between serum 25(oh)d level and upper rti in children <5 years of age, which may be related to differences in population (urban vs rural, genetic differences such as vitamin d receptor polymorphisms), study design, serum 25(oh)d measurements, or outcome (upper vs lower rti). the relationship between serum 25(oh)d level and individual respiratory viruses has not been studied. it has been postulated that lower vitamin d levels may explain the seasonal variation in influenza [35]. in the case of rsv infection, studies have shown that in rsv-infected human airway epithelial cells, vitamin d induces iκbα, an nf-κb inhibitor, in airway epithelium and decreases rsv induction of inflammatory genes [11]. although we were unable to look at serum 25(oh)d level and the risk of individual respiratory viruses, this area warrants further research, as serum 25(oh)d level may impact viruses differently. our study has several limitations.
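the thresholds above mix units (ng/ml in the adult study, nmol/l in the aap and cps recommendations); the conversion and cutoff checks can be sketched as below. the function names are ours; only the 38 ng/ml, 50 nmol/l, and 75 nmol/l values come from the text.

```python
# molar mass of 25(OH)D is ~400.6 g/mol, so 1 ng/mL = 2.496 nmol/L
NG_ML_TO_NMOL_L = 2.496

def ng_ml_to_nmol_l(ng_ml):
    return ng_ml * NG_ML_TO_NMOL_L

def below_cutoffs(nmol_l):
    # dichotomize against the AAP (50 nmol/L) and CPS (75 nmol/L) cutoffs
    return {"below_aap_50": nmol_l < 50, "below_cps_75": nmol_l < 75}

# the ~38 ng/mL threshold from the adult study is roughly 95 nmol/L
assert round(ng_ml_to_nmol_l(38)) == 95
```

a level of, say, 60 nmol/l would satisfy the aap cutoff but not the cps cutoff, which is exactly the gap the text is discussing.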
first, we obtained only 1 vitamin d measurement for each participant and therefore do not have the exact level around the time of rti. however, serum 25(oh)d measurements were taken around the same time in october and november and reflect the vitamin d status at the start of the respiratory infection season, prior to the development of viral rti. we believe this estimate is more appropriate than estimates taken at the time of illness because the impact of acute viral illness on serum 25(oh)d levels has not been studied. these levels were also taken at a time just after levels tend to peak in the northern hemisphere [36]. therefore, it is unlikely that vitamin d levels would have rebounded by early spring and had an appreciable impact on infections that occurred later in the follow-up period. second, 10 participants had serum samples for 25(oh)d measurement obtained after the follow-up period had started; a sensitivity analysis excluding these participants did not affect the results. a third limitation was that data were not collected on sources of vitamin d intake for each individual participant. however, there are no data demonstrating that vitamin d supplementation or the use of fortified foods is part of routine practice in hutterite communities, and we therefore do not believe that this would affect our analysis. finally, this study was conducted in hutterite children and adolescents and may not be generalizable to other populations. studies evaluating other pediatric populations are warranted. we found that children and adolescents with lower vitamin d levels were at increased risk for laboratory-confirmed viral upper rti. current recommendations regarding target serum 25(oh)d levels may be too low to prevent viral upper rti. this study provides evidence in support of future interventional trials examining the efficacy of vitamin d supplementation on viral rtis in children and adolescents.
the economic burden of non-influenza-related viral respiratory tract infection in the united states
risk factors associated with severe influenza infections in childhood: implication for vaccine strategy
principles and practice of infectious diseases
cutting edge: 1,25-dihydroxyvitamin d3 is a direct inducer of antimicrobial peptide gene expression
toll-like receptor triggering of a vitamin d-mediated human antimicrobial response
functional antagonism between vitamin d3 and retinoic acid in the regulation of cd14 and cd23 expression during monocytic differentiation of u-937 cells
immunoregulation by 1,25-dihydroxyvitamin d3: basic concepts
modulatory effects of 1,25-dihydroxyvitamin d3 on human b cell differentiation
1,25-dihydroxyvitamin d3 has a direct effect on naive cd4(+) t cells to enhance the development of th2 cells
respiratory epithelial cells convert inactive vitamin d to its active form: potential effects on host defense
vitamin d decreases respiratory syncytial virus induction of nf-kappab-linked chemokines and cytokines in airway epithelium while maintaining the antiviral state
human cathelicidin (ll-37), a multifunctional peptide, is expressed by ocular surface epithelia and has potent antibacterial and antiviral activity
cap37-derived antimicrobial peptides have in vitro antiviral activity against adenovirus and herpes simplex virus type 1
the antimicrobial peptide ll-37 inhibits hiv-1 replication
selective killing of vaccinia virus by ll-37: implications for eczema vaccinatum
vitamin d status in healthy romanian caregivers and risk of respiratory infections
association of subclinical vitamin d deficiency with severe acute lower respiratory infection in indian children under 5
association of subclinical vitamin d deficiency in newborns with acute lower respiratory infection and their mothers
vitamin d status is not associated with the risk of hospitalization for acute bronchiolitis in early childhood
vitamin d status and acute lower respiratory infection in early childhood in sylhet
vitamin d status has a linear association with seasonal infections and lung function in british adults
association between serum 25-hydroxyvitamin d level and upper respiratory tract infection in the third national health and nutrition examination survey
an association of serum vitamin d concentrations < 40 nmol/l with acute respiratory tract infection in young finnish men
serum 25-hydroxyvitamin d and the incidence of acute viral respiratory tract infections in healthy adults
vitamin d deficiency in young children with severe acute lower respiratory infection
effect of influenza vaccination of children on infection rates in hutterite communities: a randomized trial
emergence of a novel swine-origin influenza a (h1n1) virus in humans
prevention of rickets and vitamin d deficiency in infants, children, and adolescents
vitamin d supplementation: recommendations for canadian mothers and infants
case-control study of the role of nutritional rickets in the risk of developing pneumonia in ethiopian children
the frequency of nutritional rickets among hospitalized infants and its relation to respiratory diseases
presentation and predisposing factors of nutritional rickets in children of hazara division
25-hydroxyvitamin d: functional outcomes in infants and young children
vitamin d receptor polymorphisms and the risk of acute lower respiratory tract infection in early childhood
epidemic influenza and vitamin d
hypovitaminosis d in british adults at age 45 y: nationwide cohort study of dietary and lifestyle predictors
key: cord-267237-wbwlfx7q authors: gómez-rial, jose; currás-tuala, maria josé; rivero-calle, irene; gómez-carballa, alberto; cebey-lópez, miriam; rodríguez-tenreiro, carmen; dacosta-urbieta, ana; rivero-velasco, carmen; rodríguez-núñez, nuria; trastoy-pena, rocio; rodríguez-garcía, javier; salas, antonio; martinón-torres, federico title: increased serum levels of scd14 and scd163 indicate a
preponderant role for monocytes in covid-19 immunopathology date: 2020-09-23 journal: front immunol doi: 10.3389/fimmu.2020.560381 sha: doc_id: 267237 cord_uid: wbwlfx7q background: emerging evidence indicates a potential role for monocytes in covid-19 immunopathology. we investigated two soluble markers of monocyte activation, scd14 and scd163, in covid-19 patients, with the aim of characterizing their potential role in monocyte-macrophage disease immunopathology. to the best of our knowledge, this is the first study of its kind. methods: fifty-nine sars-cov-2-positive hospitalized patients, classified according to icu or non-icu admission requirement, were prospectively recruited and analyzed by elisa for levels of scd14 and scd163, along with other laboratory parameters, and compared to a healthy control group. results: scd14 and scd163 levels were significantly higher among covid-19 patients, independently of icu admission requirement, compared to the control group. we found a significant correlation between scd14 levels and other inflammatory markers, particularly interleukin-6, in the non-icu patient group. scd163 showed a moderate positive correlation with the time elapsed from admission to sampling, independently of severity group. treatment with corticoids interfered with scd14 levels, whereas hydroxychloroquine and tocilizumab did not. conclusions: monocyte-macrophage activation markers are increased and correlate with other inflammatory markers in sars-cov-2 infection, in association with hospital admission. these data suggest a preponderant role for monocyte-macrophage activation in the development of immunopathology of covid-19 patients.
emerging evidence from sars-cov-2-infected patients suggests a key role for monocyte-macrophages in the immunopathology of covid-19, with a predominant monocyte-derived macrophage infiltration observed in severely damaged lungs (1), and morphological and inflammation-related changes in peripheral blood monocytes that correlate with the patients' outcome (2). an overexuberant inflammatory immune response with production of a cytokine storm and t-cell immunosuppression are the main hallmarks of severity in these patients (3). this clinical course resembles viral-associated hemophagocytic syndrome (vahs), a rare severe complication of various viral infections mediated by proinflammatory cytokines, resulting in multiorgan failure and death (4). a chronic expansion of inflammatory monocytes and over-activation of macrophages have been extensively described in this syndrome (5-7). viral-associated hemophagocytic syndrome has been identified as a major contributor to death in past pandemics caused by coronaviruses (8), including the previous sars and mers outbreaks (9), and has currently been suggested for the sars-cov-2 outbreak (10). cd14 and cd163 are both myeloid differentiation markers found primarily on monocytes and macrophages, and detection of the soluble release of both in plasma is considered a good biomarker of monocyte-macrophage activation (11, 12). elevated plasma levels of soluble cd14 (scd14) are associated with poor prognosis in hiv-infected patients, are a strong predictor of morbidity and mortality (13, 14), and are associated with diminished cd4+ t-cell restoration (15). in addition, soluble cd163 (scd163) plasma levels are a good proxy for monocyte expansion and disease progression during hiv infection (16).
in measles infection, a leading cause of death associated with increased susceptibility to secondary infections and immunosuppression, scd14 and scd163 levels have been found to be significantly higher, indicating an important and persistent monocyte-macrophage activation (17). we hypothesized that monocytes/macrophages may be an important component of the immunopathology associated with sars-cov-2 infection. in this paper, we analyze serum levels of soluble monocyte activation markers in covid-19 patients and their correlation with severity and other inflammatory markers. we recruited 59 patients with a confirmed pcr-positive diagnosis of sars-cov-2 infection, classified according to icu admission requirement (n = 22 patients) or non-icu requirement (n = 37), and age-matched healthy individuals (n = 20) as a control group. demographic data, main medication treatments, and routine clinical laboratory parameters including inflammatory biomarkers were collected for all infected patients. leftover sera samples from routine analytical controls were employed for the analysis, after obtaining the corresponding informed consent. the time elapsed from hospital admission to sample extraction was also recorded. to determine levels of soluble monocyte activation markers in serum specimens, the appropriate sandwich elisa kits (quantikine, r&d systems, united kingdom) were used following the manufacturer's instructions. briefly, diluted sera samples were incubated for 3 h at room temperature in the corresponding microplate strips coated with capture antibody. after incubation, strips were washed and incubated with the corresponding human antibody conjugate for 1 h. after washing, reactions were revealed and the optical density at 450 nm was determined in a microplate reader. concentration levels were interpolated from the standard curve using a four-parameter logistic (4-pl) curve fit in graphpad prism 8 software. final values were corrected by applying the corresponding dilution factor.
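the 4-pl interpolation step can be sketched as follows. the curve parameters and dilution factor below are invented for illustration, not taken from the assay; the point is only that once the four parameters are fitted, concentration is recovered by inverting the curve and multiplying by the dilution factor.

```python
def four_pl(x, a, b, c, d):
    # four-parameter logistic: a = response at zero concentration,
    # d = response at saturating concentration, c = inflection point,
    # b = slope factor
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    # invert the fitted curve to interpolate concentration from OD
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

# illustrative fitted parameters and dilution factor (not from the assay)
params = dict(a=0.05, b=1.2, c=250.0, d=3.9)
dilution_factor = 200

od = four_pl(500.0, **params)            # OD of a 500 ng/mL standard
interpolated = inverse_four_pl(od, **params)
final_conc = interpolated * dilution_factor
```

round-tripping a known concentration through the curve and its inverse is a quick sanity check that the algebra of the inverse is right.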
data are expressed as median and interquartile range. all statistical analyses were performed using the statistical package r. mann-whitney tests were used for comparisons between the icu and non-icu groups versus healthy controls. pearson's correlation coefficients were used to quantify the association between scd14 and scd163 concentrations and other laboratory parameters in non-icu patients. data outliers, falling outside 1.5 times the interquartile range, were excluded from the statistical analysis. the nominal significance level considered was 0.05. bonferroni adjustment was used to account for multiple testing. patients in the icu group showed significant differences compared to the non-icu group in several clinical laboratory parameters: lymphocytes, ferritin, d-dimer, lactate dehydrogenase (ldh), procalcitonin (pct), and interleukin-6 (il-6). the absolute value for circulating monocytes did not show significant differences between groups. however, these values may have been distorted by the use of tocilizumab, an il-6-blocking drug extensively employed in the icu group, which interferes with monocyte function. age and time elapsed from admission to sample extraction did not differ between groups. values are summarized in table 1. median levels for scd14 in sera from icu patients were 2444.0 (95% ci: 1914.0-3251.0) ng/ml, compared to 2613.0 (95% ci: 2266.0-2991.0) ng/ml in non-icu patients. the healthy control group median value was 1788.0 (95% ci: 1615.0-1917.0) ng/ml. we observed statistically significant differences when comparing infected patients against controls (p < 0.0001); however, no significant differences were observed between the icu and non-icu groups. median levels for scd163 in sera from icu patients were 911.5 (95% ci: 624.7-1167.0) ng/ml, and 910.4 (95% ci: 733.1-1088.0) ng/ml in non-icu patients. the healthy control group value was 495.6 (95% ci: 332.5-600.7) ng/ml.
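the 1.5 × iqr exclusion rule described above can be sketched as below. quartile conventions vary between packages; this sketch uses python's `statistics.quantiles` default, which need not match the authors' r code, and the sample values are illustrative, not study data.

```python
from statistics import quantiles

def exclude_outliers(values):
    # drop points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

# illustrative sCD14-like values (ng/mL) with one obvious outlier
cleaned = exclude_outliers([2444, 2613, 1788, 2266, 2991, 9800])
```

note that the extreme point itself inflates q3 before exclusion; with small samples this can make the rule more conservative than expected.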
as with scd14, we observed significant differences for values from infected patients compared to the control group (p < 0.0001), but no differences between icu and non-icu infected patients. values are summarized in table 2 and figure 1. we assessed the correlation between scd14 and scd163 levels and the time elapsed from hospital admission to sample extraction (figure 2). we found a significant positive correlation between scd163 levels and time elapsed (r = 0.3246, p = 0.0156). we did not observe a significant correlation between scd14 levels and time elapsed from hospital admission to sample extraction. we found significant correlations between scd14 and scd163 levels and several clinical laboratory parameters in infected patients (in these analyses, the bonferroni-adjusted significance threshold is 0.01), but only in the non-icu group, possibly reflecting an interference of the use of tocilizumab or corticoids in the icu group. levels of scd14 showed a negative correlation with the absolute value of lymphocytes (r = −0.5501, p = 0.0005) and positive correlations with levels of ldh (r = 0.5906, p = 0.0001), crp (r = 0.6275, p < 0.0001), pct (r = 0.4608, p = 0.0091), and ferritin (r = 0.4414, p = 0.0090) (figure 3). no other significant associations were found with other laboratory parameters. levels of scd163 did not show significant correlations with clinical laboratory parameters (figure 3). in particular, il-6 showed a significant positive correlation with scd14 (r = 0.6034, p = 0.0003) (figure 4). we analyzed possible interference of different treatments on scd14 and scd163 serum levels for all patients. we found an interference of corticoid treatment on scd14 levels, with median values of 2034 (95% ci: 1319-3159) ng/ml for the treated group and 2613 (95% ci: 2466-2913) ng/ml for the non-treated group. values were significantly lower in the corticoid-treated group (p = 0.0069) (figure 5).
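the bonferroni-adjusted threshold of 0.01 quoted above corresponds to a nominal alpha of 0.05 split over 5 comparisons (our inference; the number of tests is not stated). a minimal sketch of the adjustment, applied to the reported scd14 p-values:

```python
def bonferroni_alpha(nominal_alpha, n_tests):
    # familywise control: each individual test is judged at alpha / n_tests
    return nominal_alpha / n_tests

adjusted = bonferroni_alpha(0.05, 5)  # matches the 0.01 threshold quoted

# reported sCD14 correlation p-values (crp was reported as p < 0.0001;
# 1e-5 is used here as a stand-in, not a reported number)
p_values = {"lymphocytes": 0.0005, "ldh": 0.0001, "crp": 1e-5,
            "pct": 0.0091, "ferritin": 0.0090}
surviving = {name for name, p in p_values.items() if p < adjusted}
```

under this threshold all five reported scd14 correlations remain significant, consistent with the text.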
no impact was found for corticoids on scd163 levels. likewise, hydroxychloroquine and/or tocilizumab were not found to have an impact on scd14 and scd163 serum levels. levels of scd14 and scd163 did not show an association with length of hospital stay in either group. these biomarkers also showed no association with the number of days since onset of symptoms. we analyzed possible age-dependence of scd14 and scd163 levels and found no association between these biomarker levels and the age of patients. our results show, for the first time, increased levels of scd14 and scd163 in sera from sars-cov-2-infected patients admitted to hospital. we did not observe statistical differences when comparing icu versus non-icu patients. this is probably due to the interference with monocyte function and scd14 levels produced by the use of corticoid treatment in icu patients, as shown here and previously by others (18, 19). however, levels of scd14 showed a strong correlation with clinical laboratory parameters, including acute-phase reactants (ferritin, ldh, c-reactive protein, procalcitonin), and a strong correlation with il-6 levels in the non-icu patient group, where no corticoid treatment was used. hydroxychloroquine and tocilizumab treatment did not interfere with scd14 and scd163 levels. furthermore, scd163 levels showed a correlation with the time elapsed from hospital admission to sample extraction, suggesting potential as an indicator of disease progression. monocytes and macrophages constitute a key component of immune responses against viruses, acting as a bridge between innate and adaptive immunity (20). activation of macrophages has been demonstrated to be pivotal in the pathogenesis of the immunosuppression associated with several viral infections (such as hiv and measles), in which expansion of specific subsets of monocytes and macrophages in peripheral blood is observed and considered a driver of immunopathogenesis (21).
our results support the hypothesis of a preponderant role for monocytes in sars-cov-2 immunopathology, associated with an overexuberant immune response. increased levels of monocyte-macrophage activation markers, and their correlation with other inflammatory biomarkers (particularly il-6), indicate a close relationship between monocyte activation and immunopathology in these patients. inflammatory markers are closely related to severity in covid-19 (22), and selective blockade of il-6 has been demonstrated to be a good therapeutic strategy in covid-19 (23). our results thus suggest that monocyte-macrophage activation may drive the cytokine storm and immunopathology associated with a severe clinical course in covid-19 patients. further, monitoring of monocyte activity through these soluble activation markers, and/or follow-up of circulating inflammatory monocytes in peripheral blood, could be useful to assess disease progression, as in other viral infections (16). in addition, our results identify the monocyte-macrophage as a good target for the design of therapeutic interventions using drugs that inhibit monocyte-macrophage activation and differentiation. in this sense, anti-gm-csf drugs, currently in clinical trials for rheumatic and other auto-inflammatory diseases, might provide satisfactory results in covid-19 patients. other drugs targeting monocytes and/or macrophages could also be useful in covid-19, as in other inflammatory diseases (24). the strategy of inhibiting monocyte differentiation has proved useful in avoiding cytokine storm syndrome after car-t cell immunotherapy (25), suggesting a possible therapeutic application to covid-19 immunopathology (26, 27). the present study has several limitations, including a relatively low sample size and the interference of corticoids in the icu patients' results.
however, these preliminary results are strongly suggestive of an important involvement of monocyte-macrophages in covid-19 immunopathology, as highlighted by the correlations found between these biomarker levels and inflammatory parameters. further studies using larger series are needed to confirm our findings. in summary, our data underscore the preponderant role of the monocyte and macrophage immune response in covid-19 immunopathology and provide pointers for future interventions in drug strategies and monitoring plans for these patients. the raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. the studies involving human participants were reviewed and approved by comité de ética de la investigación con medicamentos de galicia (fast-track approval 18-march-2020). written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
the landscape of lung bronchoalveolar immune cells in covid-19 revealed by single-cell rna sequencing. medrxiv
covid-19 infection induces readily detectable morphological and inflammation-related phenotypic changes in peripheral blood monocytes, the severity of which correlates with patient outcome. medrxiv
covid-19: consider cytokine storm syndromes and immunosuppression
virus associated hemophagocytic syndrome
cd14(dim)/cd16(bright) monocytes in hemophagocytic lymphohistiocytosis
how viruses contribute to the pathogenesis of hemophagocytic lymphohistiocytosis. front immunol
recommendations for the management of hemophagocytic lymphohistiocytosis in adults
virus-associated hemophagocytic syndrome as a major contributor to death in patients with 2009 influenza a (h1n1) infection
is secondary hemophagocytic lymphohistiocytosis behind the high fatality rate in middle east respiratory syndrome corona virus?
the pathogenesis and treatment of the 'cytokine storm' in covid-19
soluble cd14 is a nonspecific marker of monocyte activation
differential expression of cd163 on monocyte subsets in healthy and hiv-1 infected individuals
plasma levels of soluble cd14 independently predict mortality in hiv infection
elevated levels of serum-soluble cd14 in human immunodeficiency virus type 1 (hiv-1) infection: correlation to disease progression and clinical events
immunologic failure despite suppressive antiretroviral therapy is related to activation and turnover of memory cd4 cells
increased monocyte turnover from bone marrow correlates with severity of siv encephalitis and cd163 levels in plasma
persistent high plasma levels of scd163 and scd14 in adult patients with measles virus infection
modulation of human monocyte/macrophage activity by tocilizumab, abatacept and etanercept: an in vitro study
effects of corticosteroids on human monocyte function
co-ordinating innate and adaptive immunity to viral infection: mobility is the key
soluble cd163, a novel marker of activated macrophages, is elevated and associated with noncalcified coronary plaque in hiv-infected patients
correlation analysis between disease severity and inflammation-related parameters in patients with covid-19 pneumonia. medrxiv
the cytokine release syndrome (crs) of severe covid-19 and interleukin-6 receptor (il-6r) antagonist tocilizumab may be the key to reduce the mortality
gm-csf inhibition reduces cytokine release syndrome and neuroinflammation but enhances car-t cell function in xenografts
a strategy targeting monocyte-macrophage differentiation to avoid pulmonary complications in sars-cov2 infection
role of monocytes/macrophages in covid-19 pathogenesis: implications for therapy
the authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. copyright © 2020 gómez-rial, currás-tuala, rivero-calle, gómez-carballa, cebey-lópez, rodríguez-tenreiro, dacosta-urbieta, rivero-velasco, rodríguez-núñez, trastoy-pena, rodríguez-garcía, salas and martinón-torres. this is an open-access article distributed under the terms of the creative commons attribution license (cc by). the use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. no use, distribution or reproduction is permitted which does not comply with these terms.
key: cord-011360-1n998win authors: zloto, keren; tirosh-wagner, tal; bolkier, yoav; bar-yosef, omer; vardi, amir; mishali, david; paret, gidi; nevo-caspi, yael title: preoperative mirna-208a as a predictor of postoperative complications in children with congenital heart disease undergoing heart surgery date: 2019-11-15 journal: j cardiovasc transl res doi: 10.1007/s12265-019-09921-1 sha: doc_id: 11360 cord_uid: 1n998win major perioperative cardiovascular events are important causes of morbidity in pediatric patients with congenital heart disease who undergo reparative surgery.
current preoperative clinical risk assessment strategies have poor accuracy for identifying patients who will sustain adverse events following heart surgery. there is an ongoing need to integrate clinical variables with novel technology and biomarkers to accurately predict outcome following pediatric heart surgery. we tested whether preoperative levels of mirna-208a can serve as such a biomarker. serum samples were obtained from pediatric patients immediately before heart surgery. mirna-208a was quantified by rq-pcr. correlations between the patients' clinical variables and mirna levels were tested. lower levels of preoperative mirna-208a correlated with and could predict the appearance of postoperative cardiac and inflammatory complications. mirna-208a may serve as a biomarker for the prediction of patients who are at risk of developing complications following surgery for the repair of congenital heart defects. major perioperative cardiovascular events, such as cardiac arrest, arrhythmia, and pericardial tamponade, are important causes of morbidity in patients undergoing repair of congenital heart disease (chd) [1]. a number of clinical risk indices have been developed, but their predictive power is limited [2], raising the need for a simple and strongly predictive noninvasive alternative. mirnas are a class of noncoding rnas of approximately 22 nucleotides in length that have been shown to be useful biomarkers in various diseases. the post-transcriptional regulatory roles of mirnas have been demonstrated in almost all physiological processes. unlike other extracellular rna molecules, extracellular mirnas are remarkably stable in serum and, together with their tissue specificity and the ease with which they can be detected and quantified, they have sparked great interest in their use as clinical biomarkers for a wide range of medical states [3].
we had reported that the level of circulating cardiac mirna-208a following surgery can serve as a sensitive biomarker for the postoperative course of pediatric patients with chd undergoing heart surgery for the repair of their defect [4] . in the present study, we hypothesized that preoperative mirna-208a levels in the blood of pediatric patients with chd can predict postoperative complications. all pediatric patients with chd who underwent cardiac surgery at safra children's hospital between 2012 and 2016 and whose legal representative provided informed consent were enrolled. the inclusion criteria were age younger than 21 years and an echocardiographic diagnosis of chd that required cardiac surgery. the laboratory parameters assessed in the study were as follows: troponin at 24 h after surgery, lactate at 0 h, 6 h, 12 h, 24 h, and maximal lactate after surgery, and maximal creatinine, aspartate aminotransferase (ast), and alanine aminotransferase (alt) levels measured during the hospitalization period. the preoperatively measured parameters were aristotle score [5] , invasive or noninvasive ventilator support, oxygen saturation levels, and whether the surgery was elective or nonelective. the surgical parameters were the cardiopulmonary bypass (cpb) and aortic crossclamp (acc) times. the postoperative parameters were invasive and noninvasive ventilator support, length of ventilation, length of hospitalization (loh), need for reintubation, need for extracorporeal membrane oxygenation (ecmo), maximal inotropic support, and number of days during which inotropic support was given. these parameters were divided into two groups in order to examine correlations. the first group contained the parameters that were related to the loh (i.e., invasive and noninvasive ventilation days and the total number of hospitalization days), and they did not account for children who died following surgery. 
the second group contained all the other parameters and accounted for all the children in the study. the appearance of cardiac and inflammatory complications was the primary outcome. cardiac complications were defined as the need for cardiorespiratory resuscitation, pericardial and/or pleural effusion requiring treatment or drainage, and postoperative arrhythmia (junctional ectopic, supraventricular, and ventricular tachycardias). inflammatory complications were defined as the need for antibiotic treatment. twenty-six patients comprised the group that sustained complications, of whom 19 patients were diagnosed with complications that involved the heart. laboratory parameters, loh, and higher or lower than the median maximal lactate value were the secondary outcomes. all samples were processed within 24 h of surgery. the blood was centrifuged at 4°c for 10 min at 1200g, followed by separation of serum. centrifugation was then repeated at 4°c for 10 min at 10,000g. rna was extracted from 250 μl serum, to which 5.6 × 10^8 copies of cel-mirna39 were added, using tri-reagent®-ls (sigma). rna was resuspended in 45 μl h2o, of which 5 μl was taken for cdna synthesis (taqman® microrna-rt-kit, abi, #4366597) using a specific primer. mirna quantification was performed on a stepone™ real-time pcr system (qpcr). reactions (10 μl total) were run in triplicate and consisted of 5 μl pcr mix (abi, #ab-4444557), 0.5 μl of either of the taqman mirna assays (abi, #ab-4427975): cel-mirna39 (#000200) or mirna-208a (#000511), 1.5 μl of the cdna, and 3 μl h2o. the cycle threshold (ct) values were calculated with the stepone software v2.3. δct values were calculated by subtracting the ct of cel-mirna39 from the ct of mirna-208a of the same sample, so that δct values are inversely correlated with the amount of template mirna present in the reaction. the rq was analyzed using the δδct method. categorical variables were described as frequency and percentage.
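the δct/δδct relative quantification described above can be sketched as follows. the ct values are invented for illustration; the relationship to keep in mind is that rq doubles for each cycle the target crosses threshold earlier than in the calibrator sample.

```python
def delta_ct(ct_target, ct_reference):
    # higher delta-Ct = less target relative to the cel-miR-39 spike-in
    return ct_target - ct_reference

def relative_quantity(ct_t, ct_r, ct_t_cal, ct_r_cal):
    # delta-delta-Ct method: RQ = 2 ** -(dCt_sample - dCt_calibrator)
    ddct = delta_ct(ct_t, ct_r) - delta_ct(ct_t_cal, ct_r_cal)
    return 2.0 ** -ddct

# invented Ct values: the sample's target crosses threshold 2 cycles
# earlier than the calibrator's, so it contains 4x as much template
rq = relative_quantity(28.0, 12.0, 30.0, 12.0)
```

normalizing against the spiked-in cel-mir-39 cancels out differences in extraction efficiency between samples, which is why the δct rather than the raw ct is the quantity of interest.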
continuous variables were evaluated for normal distribution using histograms and q-q plots. normally distributed continuous variables were described as mean and standard deviation (sd), and non-normally distributed continuous variables as median and interquartile range (iqr). continuous variables were compared between patients with and without complications using an independent-samples t test. correlations between continuous variables were evaluated using the spearman correlation coefficient. univariate and multivariate logistic regressions were used to evaluate the association between mirna expression and complications. the multivariate models included age and aristotle score as confounders. the area under the receiver operating characteristic (roc) curve was used to evaluate the discrimination ability of the mirna. odds ratios (or) with 95% confidence intervals (ci) were reported. all statistical tests were 2-tailed, and p < 0.05 was considered statistically significant. all calculations were performed using spss (ver. 25.0). all statistical analyses were performed on three groups of patients: (1) the entire cohort, (2) patients with oxygen saturations below 90%, and (3) patients with oxygen saturations above 90%. the protocol of this study was approved by the institutional review board of the chaim sheba medical center. informed consent was obtained from the legal representatives of all subjects. seventy-nine consecutive children who underwent cardiac surgery for repair of congenital anomalies at safra children's hospital between 2012 and 2016 were enrolled. there were 34 (43%) females and 45 (57%) males. the characteristics of the population are shown in table 1 and the surgical procedures are listed in table 2. the surgical characteristics, the laboratory parameters, and the preoperative and postoperative characteristics of the study patients are summarized in tables 3, 4, and 5.
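the roc auc used here has a rank-based interpretation — the probability that a randomly chosen case scores above a randomly chosen control, with ties counting one half — which a few lines can compute directly. the scores and labels below are toy values, not study data.

```python
def roc_auc(scores, labels):
    # probability that a random positive outscores a random negative;
    # ties count 1/2 -- this equals the area under the ROC curve
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

in the study's setting, where lower mirna-208a predicts complications, the score fed to such a function would be the δct (or the negated expression level) so that higher scores correspond to higher risk.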
the expression of mirna-208a was measured immediately before the beginning of the operation in all the patients. the patients were divided into two groups: those that did not sustain postoperative complications and those that did. the relative levels of expression of mirna-208a were 3.7 times higher in the group of children with an uncomplicated postoperative course (p = 0.03) (fig. 1). logistic regression analysis showed an association between the preoperative levels of mirna-208a and the risk of developing complications: the lower the level of mirna-208a in the patient's blood before the operation, the higher the risk of developing complications following surgery (crude or 1.16; 95% ci 1.01-1.33; p = 0.03). this association remained significant after adjusting for the child's age and aristotle score (adjusted or 1.14; 95% ci 0.99-1.32; p = 0.05) (table 6). preoperative levels of mirna-208a were studied as predictors of postoperative complications using an roc curve and the area under the curve (auc). the result of this analysis reached significance, with an auc of 64% (95% ci 51.1-77.0%; p = 0.04) (fig. 2). there was a significant (p = 0.03) inverse correlation between the amount of mirna-208a before surgery and the aristotle score. there was also a correlation (p = 0.01) between low preoperative oxygen saturation values and a shorter loh. we hypothesized that mirna-208a may have better predictive performance in a more homogeneous group of patients with respect to their oxygen saturation levels. to test that hypothesis, we divided our cohort into two groups: patients with oxygen saturation levels below 90% (hypoxic) and patients with oxygen saturation levels above 90% (normoxic). we analyzed the data on the preoperative levels of mirna-208a in each group separately, looking for correlations between the amount of mirna-208a and postoperative outcome.
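the reported crude odds ratio (1.16 per unit of δct, i.e., per roughly 2-fold decrease in mirna-208a) can be translated into risk language with a short sketch. this only restates the standard logistic-regression interpretation; no study data are used beyond the published or.

```python
import math

# Hedged illustration of interpreting a per-unit odds ratio from
# logistic regression. OR_PER_UNIT is the reported crude OR; nothing
# else here comes from the study.

OR_PER_UNIT = 1.16              # reported crude OR per one-unit rise in ΔCt
beta = math.log(OR_PER_UNIT)    # the corresponding regression coefficient

def odds_multiplier(delta_ct_change: float) -> float:
    """How much the odds of complications multiply for a given ΔCt change."""
    return math.exp(beta * delta_ct_change)

# a 5-unit rise in ΔCt (≈ 32-fold less circulating miR-208a)
print(round(odds_multiplier(5.0), 2))  # 2.1, i.e., 1.16**5
```

the multiplier compounds exponentially, which is why even a modest per-unit or can matter across the observed δct range.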
the relative levels of expression of mirna-208a before surgery among the children with oxygen saturation levels below 90% (n = 38) were 9.0 times higher for those without postoperative cardiac complications compared with the children who did sustain them (p = 0.01) (fig. 3). logistic regression calculations in the group of patients with oxygen saturation levels below 90% yielded a statistically significant association between the preoperative level of mirna-208a and the risk of developing postoperative complications: the lower the level of mirna-208a in the patient's blood before the operation, the higher the risk of developing cardiac complications following surgery (crude or 1.28; 95% ci 1.02-1.60; p = 0.02) (table 6). the ability of preoperative levels of mirna-208a to predict postoperative cardiac complications in the group of patients with oxygen saturation levels below 90% was studied using an roc curve and the auc. this analysis reached significance, with an auc of 71% (95% ci 54.5-88.1%; p = 0.02) (fig. 4). a δct of 18.0 can serve as a cutoff value for the prediction of the risk of complications following heart surgery in that group of patients. specifically, δct > 18.0 predicted that the child would sustain complications, whereas δct < 18.0 predicted that he/she would not. the sensitivity of this cutoff was 81% and its specificity was 46%. the 41 children who had oxygen saturation levels above 90% showed significant inverse correlations between the amount of mirna-208a before the operation and the days of postoperative invasive or noninvasive ventilation (p = 0.05) and the days of inotropic support (p < 0.005). in addition, higher levels of mirna-208a correlated with lower lactate values (p = 0.01), lower creatinine levels (p < 0.005), and lower aristotle scores (p < 0.005).
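the cutoff rule described above (δct > 18.0 predicts complications) amounts to a simple threshold classifier. the sketch below applies it to hypothetical δct values and computes sensitivity and specificity; it will not reproduce the reported 81%/46%, which came from the actual cohort.

```python
# Hedged sketch of the ΔCt > 18.0 decision rule and the resulting
# sensitivity/specificity. The tiny dataset is invented.

CUTOFF = 18.0

def classify(delta_ct: float) -> int:
    """1 = predicted to sustain complications (ΔCt above the cutoff)."""
    return 1 if delta_ct > CUTOFF else 0

def sens_spec(delta_cts, outcomes):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(classify(d) == 1 and y == 1 for d, y in zip(delta_cts, outcomes))
    fn = sum(classify(d) == 0 and y == 1 for d, y in zip(delta_cts, outcomes))
    tn = sum(classify(d) == 0 and y == 0 for d, y in zip(delta_cts, outcomes))
    fp = sum(classify(d) == 1 and y == 0 for d, y in zip(delta_cts, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

dcts = [19.2, 18.5, 17.1, 20.0, 16.4, 18.9]  # hypothetical ΔCt values
ys   = [1,    1,    0,    1,    0,    0]      # hypothetical outcomes
sens, spec = sens_spec(dcts, ys)
print(sens, spec)  # for these made-up data: sensitivity 1.0, specificity 2/3
```

as with the reported figures, moving the cutoff trades sensitivity against specificity, which is exactly what the roc curve summarizes.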
these results also indicated that higher levels of preoperative mirna-208a correlated with a better outcome in this group (table 7). dividing the patients with oxygen saturation levels above 90% according to their postoperative lactate values revealed a significant (p = 0.01) difference in their preoperative mirna-208a levels (fig. 5). patients with higher postoperative lactate values had lower preoperative mirna-208a levels. this result supports the ability of preoperative mirna-208a levels to predict the patient's postoperative outcome. it is also worth noting that dividing this group of patients according to their loh (more or less than the median of 8 days) yielded a substantial, although not significant, difference in the amount of preoperative mirna-208a: specifically, children with a shorter loh had 2.7 times more mirna-208a in serum before their operation than those with longer lohs. the findings of our current investigation revealed that preoperative levels of circulating cardiac mirna-208a in the serum are predictive of the pediatric patient's complications following surgery for life-threatening anatomical malformations associated with chd. to the best of our knowledge, this is the first study to identify a mirna that can serve as a single preoperative biomarker effective in predicting the postoperative outcome in this setting. this is a continuation of our previous study, in which we identified that circulating mirna-208a can serve as an accurate biomarker for predicting the risk of developing postoperative complications 6 h following surgery: high levels of mirna-208a in the serum several hours after surgery were indicative of complications that increased the risk of sustaining a worse outcome [4]. the potential benefit of being able to preoperatively identify patients at high risk for complications following heart surgery for such life-threatening conditions is enormous.
accurate preoperative risk assessment at the individual patient level enhances clinical decision-making in postoperative patient management and the estimation of associated risks. to date, preoperative prediction of the postoperative course involves the use of the aristotle score, which takes into account the complexity of the surgical procedure and predicts 30-day mortality and loh during the first postoperative week. in addition, recent papers have shown the potential benefit of preoperative st2 levels to identify children with increased risk of mortality or readmission after pediatric heart surgery [6, 7]. we were intrigued by the possibility that mirna-208a could also serve as a preoperative biomarker for the child's potential sequelae. our results indeed showed that high expression levels of preoperative circulating mirna-208a were predictive of a better postoperative outcome, as reflected by fewer complications following surgery, and, accordingly, were correlated with a low aristotle score and a shorter loh. we hypothesized that the elevated mirna-208a found in the sera of patients before the operation was secondary to the presence of a disturbance in normal physiology and that such patients benefited from preconditioning of the heart, thereby leading to a greater likelihood of an uncomplicated postoperative course. the hearts of those patients were "prepared" before surgery by a yet-to-be-determined mechanism, which enabled them to cope optimally with the stress caused during and immediately after the operation, with the result of fewer postoperative complications. our study is not the first to suggest that a beneficial postoperative outcome is seen in patients whose hearts were exposed to preoperative stress. remote ischemic preconditioning (ripc) was found to be clinically effective in a study by wu et al. on children with tetralogy of fallot (tof) undergoing open heart surgery [8].
those authors showed that ripc attenuated myocardial ischemia/reperfusion (i/r) injury and improved the short-term prognosis of those patients [8]. not surprisingly, mirnas have been reported to mediate such protection of the heart in several studies [9, 10]. the involvement of mirnas in protective mechanisms has also been described in preconditioning of the brain [11]. the source of the high levels of circulating mirna-208a before surgery has yet to be determined. in general, extracellular circulating mirnas originate from two main sources: they can be merely byproducts of cellular activity and cell death [12], or they can be the result of a selective export system that enables the secreted mirna to be transferred to a recipient cell where it can carry out its role [13]. we and others have shown that the presence of mirna-208a in the blood correlates with damage caused to the heart [4, 14, 15]; however, there are no studies that define the exact mechanism of the export system. notably, in both of the above-mentioned scenarios, circulating mirna-208a in the blood could be a trigger for inducing a heart-protective mechanism whose results become evident following surgery. a role for mirna-208a in communicating signals from the heart to other parts of the body has been reported by feng et al., who found that cardiac mirna-208a released into the circulation following myocardial i/r is capable of activating innate immune responses [16]. we suggest that the hearts of the children subjected to prolonged physiological stress before the date of the operation had the opportunity to become more prepared for, and possibly more protected from, the effects of the operation, and that those children therefore sustained fewer postoperative complications. although the correlation between the levels of mirna-208a and the postoperative course was more significant in the cyanotic patients, results pointing in the same direction were also obtained for the non-cyanotic ones.
this observation strengthens our position that exposing the heart to abnormal physiological constraints, as could be expected in children with low oxygen saturation, produces a stronger "signal" to provoke a protective mechanism, whatever it may be. this pilot study is intended to serve as the basis for a larger investigation designed to examine the value of obtaining preoperative mirna levels for the purpose of guiding decisions regarding surgery and the postoperative management that will need to be provided for a specific patient. in addition to demonstrating the benefit of having such information, our results suggest that mirna-208a may serve as a preoperative target for therapy in children with chd. increasing its levels in the blood of such patients may trigger the preconditioning of their heart and thus contribute to a smoother postoperative course. we are aware of a main limitation of our study that bears mention. our cohort comprises patients with diverse chds, which necessarily affect their medical parameters. our attempts to divide the cohort into homogeneous groups resulted in groups too small to enable us to reach statistically significant results. we are also aware that our small sample size may be a limitation for our multivariate analysis results. we have shown that the level of preoperative mirna-208a in serum is a reliable biomarker for the prediction of patients who are at risk of developing postoperative complications. unlike other extracellular rna molecules, extracellular mirnas are remarkably stable in serum and, together with their tissue specificity and the ease with which they can be detected and quantified, they have sparked great interest in their use as clinical biomarkers for a wide range of medical states. such a biomarker can help in surgical comorbidity assessment, which is an integral part of patient risk stratification.
further studies to assess its applicability to specific cardiac surgeries and to other invasive cardiac procedures should be performed in a larger group of patients.

funding information: this work was funded with institutional departmental funds.

compliance with ethical standards: all procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 helsinki declaration and its later amendments. ethical approval was obtained from the ethics committee of the sheba medical center. no animal studies were carried out by the authors for this article. the authors declare that they have no conflict of interest.

informed consent: informed consent was obtained from all individual participants or their parents/legal guardians included in the study.

[table footnote] correlations were calculated for δct values.

fig. 5 caption: relative average mirna-208a expression in children with oxygen saturation above 90%. expression levels were quantified by qpcr, and the average level of the group of patients whose maximal lactate levels were above the median (i.e., > 42 mg/dl) was set as 1. values are shown ± sem.

references:
1. seminal postoperative complications and mode of death after pediatric cardiac surgical procedures
2. utility of clinical risk predictors for preoperative cardiovascular risk prediction
3. micrornas as biomarkers for clinical studies
4. mirna-208a as a sensitive early biomarker for the postoperative course following congenital heart defect surgery
5. the aristotle comprehensive complexity score predicts mortality and morbidity after congenital heart surgery
6. biomarkers associated with 30-day readmission and mortality after pediatric congenital heart surgery
7. novel biomarkers improve prediction of 365-day readmission after pediatric congenital heart surgery
8. cardiac protective effects of remote ischaemic preconditioning in children undergoing tetralogy of fallot repair surgery: a randomized controlled trial
9. microrna-125b protects against myocardial ischaemia/reperfusion injury via targeting p53-mediated apoptotic signaling and traf6
10. microrna-144 attenuates cardiac ischemia/reperfusion injury by targeting foxo1. experimental and therapeutic medicine
11. role of micrornas in innate neuroprotection mechanisms due to preconditioning of the brain
12. circulating mir-499 as a potential biomarker for acute myocardial infarction
13. delivery of microrna-126 by apoptotic bodies induces cxcl12-dependent vascular protection
14. analysis of plasma mir-208a and mir-370 expression levels for early diagnosis of coronary artery disease
15. correlation between serum exosome derived mir-208a and acute coronary syndrome
16. extracellular micrornas induce potent innate immune responses via tlr7/myd88-dependent mechanisms

acknowledgments: we thank amisragas for their continuous support of the department of intensive care.

key: cord-022070-soqeje4z authors: parry, christopher m.; peacock, sharon j.
title: microbiology date: 2019-05-28 journal: hunter's tropical medicine and emerging infectious diseases doi: 10.1016/b978-0-323-55512-8.00021-1 sha: doc_id: 22070 cord_uid: soqeje4z the management and containment of many treatable and preventable infectious diseases in resource-poor countries is limited by the failure to make an accurate diagnosis. most of the world's population lacks access to accurate, affordable, easy-to-use, quality-assured, reliable, and accessible diagnostic tests, and misdiagnosis of infectious diseases is common and compromises patient care. laboratory diagnostics are also needed for the detection and surveillance of the increasing levels of antimicrobial resistance. accurate clinical diagnosis in resource-poor settings relies strongly on the laboratory service, and the need to support the development of a quality-assured laboratory service in such settings is increasingly recognized. international organizations are actively working with local and national providers to improve laboratory services. the development of laboratory services will contribute to improved health for the local population, protection against emerging pathogens, and better use of scarce health care resources. christopher m. parry, sharon j. peacock. diagnostic algorithms have been developed for situations with no laboratory backup, an approach adopted, for example, in the integrated management of childhood illness (imci). unfortunately, for many infections, clinical features lack sufficient specificity to allow them to be used to differentiate the possible diagnoses, and over-treatment to cover the various possibilities is common. in the assessment of the febrile child in the tropics, for example, malaria and systemic bacterial infections often have an indistinguishable clinical picture. malaria may be diagnosed by smear microscopy, but bloodstream infections require a blood culture service.
it has become clear in recent years that bloodstream infections represent an underappreciated burden of disease and mortality. this was clearly demonstrated by a study conducted in kenya in which bacterial bloodstream infections diagnosed by blood culture were responsible for 26% of deaths among children admitted to a rural district hospital. 1 without an accurate diagnosis and specific treatment, bloodstream infections such as those due to salmonella enterica, staphylococcus aureus, streptococcus pneumoniae, or burkholderia pseudomallei can carry a high mortality. distinguishing cerebral malaria, bacterial meningitis, and encephalopathic typhoid may be similarly difficult without laboratory support. children in sub-saharan africa with clinical symptoms of pneumonia may have pneumococcal pneumonia but can equally have malaria or invasive salmonellosis. a child with dysentery may be suffering from amebic colitis, shigella infection, or enterohemorrhagic escherichia coli. in adults, syndromic management of sexually transmitted infections is widespread but needs to be informed by periodic surveillance of antimicrobial susceptibility patterns. emerging and potentially epidemic viral infections such as severe acute respiratory syndrome (sars), influenza (h5n1 and h1n1), ebola, and zika require relatively sophisticated tests to confirm the diagnosis. 2 infections by pathogens that are resistant to multiple antimicrobials are common in many tropical countries where there is widespread availability of over-the-counter antimicrobials. appropriate therapy of these infections requires isolation of the causative organism and antimicrobial susceptibility testing. 3 laboratories also have an important public health role within the health care system. the ability to investigate outbreaks of disease as part of epidemic preparedness is a key function. 
these might include outbreaks of watery or bloody diarrhea, epidemics of meningitis, or clusters of patients with fever of unknown etiology. in addition, laboratories are a critical component of disease control programs such as the national programs for the control of tuberculosis, hiv, and malaria. the lack of laboratory capacity to support the expansion of diagnostic testing and antiretroviral therapy in hiv programs and in disease outbreaks such as ebola has made many international organizations appreciate the desperate plight of the laboratory service for the first time. 2 tuberculosis can be diagnosed in many patients with a ziehl-neelsen-stained smear of sputum, but to extend the diagnosis in those who are acid-fast bacilli (afb) negative or have multi-drug-resistant (mdr) disease requires more developed laboratory support. furthermore, laboratories have an increasing role in infection control in health care settings and congregate facilities and in the prevention of health care-associated infections. accurate disease surveillance requires a laboratory network and is vital to inform public health policy concerning allocation of resources and disease prevention.

• accurate diagnosis in resource-poor settings is severely limited by the absence of good diagnostic laboratory services.
• laboratories in resource-restricted settings struggle with poor facilities, lack of reliable water and electricity, inadequate equipment and consumables, insufficient staff, poor training and low morale, absence of standard operating procedures and quality assurance programs, and inadequate levels of biosafety.
• a country plan for the development of a laboratory network requires consideration of the needs at primary, district, provincial/regional, and national levels.
• at the district hospital level, a quality-assured repertoire of essential laboratory tests can contribute to improved health care.
• surveillance by microbiology laboratories provides an understanding of the causes of infection in the local population and the levels of antimicrobial resistance in key pathogens, and informs public health policy on appropriate antimicrobial therapy and preventive strategies.
• there is increasing recognition of the need to support the development of a quality-assured laboratory service in resource-restricted settings and to develop simple and robust point-of-care diagnostics both for routine clinical care and outbreak response.
• point-of-care rapid diagnostic tests are changing our approach to the diagnosis of some infectious diseases, but care needs to be taken about their usage and the interpretation of results.

the effective management and containment of many treatable and preventable infectious diseases in resource-restricted countries is limited by the failure to make an accurate diagnosis. access to accurate, affordable, easy-to-use, quality-assured, reliable, and accessible diagnostic tests is severely lacking for most of the world's population, and misdiagnosis of infectious diseases is common. disease identification, appropriate treatment choice, and implementing public health measures for the prevention and control of endemic and epidemic infections all require laboratory support. this lack of reliable diagnostics compromises patient care. laboratory diagnosis also highlights the increasing levels of resistance to antimicrobials in many infections and the need for newer, possibly unaffordable, agents, such as broad-spectrum antimicrobials for bacterial sepsis or second-line combination therapy for aids, malaria, and tuberculosis. this issue is increasingly recognized and being addressed in many regions.
in most resource-restricted settings, individual patient diagnosis is based on clinical signs and symptoms with little or no laboratory support. communications between different levels within the laboratory network may be rudimentary, so that specimens referred to the next level are not transported in a timely manner and results do not return in a time period that will influence clinical management. it is standard practice in tuberculosis programs that patients who fail treatment should have a sample cultured for tuberculosis so that susceptibility tests can be performed. in a study of the transport of such specimens to the central reference laboratory in malawi, only 40% of specimens arrived in the reference laboratory and only 36% of those samples received were successfully cultured for susceptibility testing. 5 the shortage of staff with appropriate education and training is a further problem. many laboratory workers have no formal training and are simply trained at the bench. at the peripheral level, there may be only one laboratory assistant, with no more than a secondary school education. at the district level, there may be assistants and technicians (formally educated in laboratory medicine for 3 years). at the central level, technicians may work alongside technologists (with 2 years of specialist post-technician training) and scientists (university science graduates). regardless of qualifications, laboratory workers often have a lowly status within the health sector, and the attrition of health care personnel out of government service results in low morale among those who remain. private or research laboratories may attract the best technicians from the government sector. diagnostic laboratories frequently have no representation at the local, provincial, or national level, or, if they do, it is only as part of the support services. in many countries, the voice of the laboratory is rarely heard. these many problems contribute to a poor biosafety situation in laboratories.
the lack of equipment, knowledge, and training means that laboratory workers are processing samples with hazardous pathogens in an unsafe manner. in a study of tuberculosis laboratories in korea, before safety conditions had been upgraded, the relative risk of being diagnosed with tuberculosis for the technicians performing drug susceptibility tests was 21.5 (95% ci 4.5-102.5) compared with non-laboratory workers. 6 the true magnitude of this problem in laboratory workers is difficult to gauge because surveillance of infection in laboratory workers is rarely performed or reported. at a national level, the important contribution of laboratories needs to be appreciated within the ministry of health, by national and local health care managers, and by funding organizations. a representative of the laboratory services should be present in the key decision-making committees. support is also needed from clinicians, who often have disproportionate influence within the system. a plan for the laboratory network should become part of the overall health care development plan. there needs to be a priority list of core and essential services provided in a quality-assured manner. the laboratory plan should include provision for a tiered laboratory network at the primary, district, regional/provincial, and national levels. the plans should be realistic, affordable, and sustainable. at level i, the primary level, perhaps in a health post or health center serving outpatients, microscopy for malaria and tuberculosis and testing for hiv with a same-day service would be essential. these laboratories can serve as a collection point for samples that need referral to the next level. the level ii facility in the local district hospital would have a dedicated laboratory space and a broader repertoire of tests serving inpatients and outpatients.
the tests offered would depend on the spectrum of local diseases and the resources available, and may be limited to microscopy, simple biochemistry and serology, and blood transfusion, or may include bacterial culture facilities. level ii laboratories can act as a hub for the primary-level laboratories, providing them with support, supplies of reagents, and qa activities. laboratories can help to define clinical problems by sampling surveys. for example, determining the antimicrobial susceptibilities of bacterial pathogens such as s. aureus, s. pneumoniae, or s. enterica for a selection of isolates can inform the appropriate empiric therapy in a particular area. an understanding of the burden of disease in an area (drug-resistant typhoid in an urban slum, for example) could lead to public health measures such as a vaccination program. laboratory surveillance programs may produce the clue to the possibility of new organisms emerging, including both bacteria and viruses, most commonly at the animal-human interface. for many health care staff working in resource-restricted areas, the major problem is simply a lack of laboratory services. hospital laboratories may be absent, or, if they are available, may only offer a limited repertoire of tests. in other areas, particularly in asia, a wide range of alternative services is offered by private diagnostic laboratories, typically outside the front gate of the hospital but of uncertain quality. even when tests are available, they may not be used, or the results may be ignored. lack of use may stem from a poor perception of the laboratory, and tests may not be available because the costs are prohibitive. even when laboratories are present, they face the many challenges that are familiar to all areas of the health care sector. inadequate facilities are common, with laboratories that lack space and a secure supply of electricity and water. appropriate equipment may be unavailable or poorly maintained.
even basic equipment required for a functioning laboratory can be in disrepair because of the absence of regular care and servicing. a functioning microscope is a key piece of equipment for a basic microbiology laboratory but is frequently found in poor condition. in a survey of 90 microscopes in laboratories in nine districts in malawi, only 50% were in good condition. 4 there were 1.1 functioning microscopes per 100,000 population, and even microscopes in need of full servicing were still in daily use. the 90 microscopes were from 16 different manufacturers, illustrating the lack of standardization of laboratory equipment so frequently seen. the provision of biological safety cabinets is another area where equipment from multiple manufacturers and lack of spare parts and maintenance are common, and in this case may lead to unsafe and hazardous conditions for laboratory workers. standardization of equipment and consumables with central ordering, maintenance contracts, and supplies of spare parts would seem a sensible response to this issue but is rarely seen. tests may also be unavailable because of an inadequate supply route for consumables. this is another area where standardization of tests and central ordering and supply can lead not only to a more reliable supply of quality-assured consumables but also to potential cost savings for the country. the laboratory can generate results, but the quality may be poor. standard operating procedures may be absent and quality control of routine procedures non-existent. the absence of national or regional laboratory guidelines or programs of external quality assurance (qa) by the laboratory network is common. communications between different levels within the laboratory network may be rudimentary. laboratories are further categorized into biosafety levels (bsl) so that the facilities available are matched to the pathogens handled.
a standard diagnostic laboratory would be at bsl2, and the basic requirements for such a laboratory are outlined in table 21.2 and box 21.1. more specialized laboratories, such as tuberculosis reference laboratories where culture and susceptibility testing are performed, require bsl3 facilities. bsl3 laboratories have particular design features to reduce the hazard of airborne transmission and incorporate directional airflows and the use of biological safety cabinets. they are particularly appropriate for laboratories handling pathogens such as tuberculosis and influenza. however, bsl3 facilities are very expensive and difficult to build and maintain. the who has recently indicated that in some circumstances, slightly less rigorous guidelines, so-called bsl2+ as outlined in table 21.2, may be appropriate for selected laboratories, for example, processing samples for tuberculosis culture. 8 health care staff working at the district hospital level may be asked to advise on what would constitute an appropriate laboratory service for the hospital and district. the provision of an extensive range of tests is likely to be unaffordable and impractical. in a study evaluating the role of the laboratory in a district hospital in malawi, the services considered essential were blood transfusion (including blood grouping and compatibility testing and screening for hiv, hepatitis b, and syphilis), hemoglobin estimation, and the microscopic diagnosis of malaria and tuberculosis. 9 this list will vary in different areas, and the services of the laboratory should be orientated to the requirements of the district and the available resources. other tests that require relatively little investment and can be done where there are limited resources include microscopy of urine and stool samples for ova, cysts, and parasites; gram stain and cell count in cerebrospinal fluid and other sterile fluids; and gram stains of pus samples.
the microscopic appearance of some typical bacterial pathogens is shown in fig. 21.1a-f. guidelines for standard laboratory methods appropriate for resource-restricted areas are available. 10 a checklist of issues that should be considered when evaluating a diagnostic laboratory is in box 21.2. at level iii, the provincial or regional level, laboratories will be located in larger referral hospitals. laboratories at this level should be performing a more sophisticated range of tests with higher throughput. for example, facilities for tuberculosis culture might be available, together with molecular techniques for specific diseases and the ability to investigate disease outbreaks. support for the level ii laboratories would be an important function, including periodic visits and laboratory assessment as part of a qa program. national reference laboratories at level iv are likely to be located in the capital and serve specialized public health functions that may be linked to specific disease control programs, such as the central reference laboratory for the national tuberculosis programme. it is important that laboratories at the national level have links to regional supranational reference laboratories for advice and quality assurance. level iii and iv laboratories would conduct surveillance and monitoring of infections using laboratory data collected throughout the network, establish standard operating procedures and protocols, conduct training and quality improvement, and plan for equipment needs and maintenance throughout the network. biosafety is an essential consideration at all levels of the laboratory network and depends on three principles. 7 good laboratory practice and technique are fundamental and require established standard operating procedures and appropriate induction and training of staff.
safety equipment provides a primary barrier, and this includes appropriate, properly maintained and used equipment (e.g., centrifuges, biological safety cabinets) and personal protective equipment (e.g., gloves, respirators). finally, facility design and construction are a secondary barrier providing, for example, appropriate workflows (from clean to dirty areas) and directional airflows and containment if required. microorganisms are categorized into four hazard groups according to their risk to individuals and society and the availability of treatment and preventive measures (table 21. ). the diagnosis of infection depends on detection of the pathogen or the host response to the pathogen. direct pathogen detection is traditionally performed by light microscopy, although antigen detection and nucleic acid amplification tests (such as polymerase chain reaction [pcr]) are increasingly used. pathogen detection may also be carried out by isolation of the microorganism by culture of relevant clinical samples, and this allows susceptibility testing to be performed. methods based on detecting the immune response mainly rely on detecting pathogen-specific igm or igg antibodies. technological advances in the design of testing methods have simplified antigen and antibody detection to the point that simple point-of-care test kits are now widely available. the rapid kits for hiv antibody detection have an established place in the voluntary counseling and testing framework being established in many countries. rapid malaria detection tests have been recommended as a replacement for malaria microscopy in some guidelines and need to be positive before antimalarial treatment is given.
in recent years, organizations such as the unicef/united nations development programme/world bank/who special programme for research and training in tropical diseases (tdr), and the foundation for innovative new diagnostics (find) have played an important role in developing and evaluating new diagnostic tests for many tropical diseases. 11 the who sexually transmitted diagnostics initiative has developed an approach to the characteristics of an ideal diagnostic test in the developing-country context. "assured" tests should be affordable by those at risk of infection, sensitive and specific, user friendly (simple to perform and requiring minimal training), rapid (to enable treatment at the first visit), robust (does not require refrigerated storage), equipment-free, and able to be delivered to those who need it. there has been increased attention on the problem of antimicrobial resistance for many important pathogens and the critical role that the laboratory plays in the management of this. initiatives have focused on methods and systems of surveillance of antimicrobial resistance in bacterial infections that countries can readily implement. 3, 12 the who guideline has recommended a focus on eight priority pathogens as described in the global antimicrobial resistance surveillance system manual (e. coli, klebsiella pneumoniae, acinetobacter baumannii, s. aureus, s. pneumoniae, salmonella spp., shigella spp., and neisseria gonorrhoeae), as well as other pathogens of local or national importance. there have also been considerable advances in the format and ease of use of molecular tests. this is exemplified by the increasing use in tuberculosis laboratories of nucleic acid amplification tests directly from afb smear-positive sputum, or from culture isolates. line probe assays (lpas) use a multiplex pcr amplification followed by reverse hybridization to identify mycobacterium tuberculosis complex and mutations in the genes associated with rifampicin and isoniazid resistance. lpa can be performed with results in 1 to 2 days, which is considerably quicker than the weeks required for traditional culture methods, and the overall agreement for the diagnosis of mdr between these tests and conventional methods is 99%. the format of these tests is being simplified so that the feasibility of their routine use in tuberculosis reference laboratories in developing countries is becoming a reality. these methods are an important component of the roll-out of the programmatic management of mdr tuberculosis globally. quality assurance is defined as "planned and systematic activities to provide adequate confidence that requirements for quality will be met." the qa system is the basis for a guaranteed result. if this system is not followed, patients may get the wrong results, with important consequences for their health, such as receiving inadequate treatment. a program of qa in diagnostic laboratories involves not only internal quality control and external qa but also attention to appropriate staffing, training and supervision, and maintenance of equipment and facilities. international guidelines are now available and increasingly implemented for qa in many areas of laboratory practice such as afb smear microscopy and hiv testing. accurate clinical diagnosis in resource-restricted settings relies strongly on the laboratory service. the increasing recognition of the need to support the development of a quality-assured laboratory service in such settings is therefore welcome. in many regions, international organizations are actively working with local providers to improve laboratory services. the development of laboratory services will contribute to improved health for the local population and ensure better use of scarce health care resources.
bacteremia among children admitted to a rural hospital in kenya
diagnostic preparedness for infectious disease outbreaks
amr surveillance in low and middle-income settings - a roadmap for participation in the global antimicrobial surveillance system (glass)
evaluation of microscope condition in malawi
using a bus service for transporting sputum specimens to the central reference laboratory: effect on the routine tb culture service in malawi
risk of occupational tuberculosis in national tuberculosis programme laboratories in korea
world health organization (who) guidance on bio-safety related to tb laboratory diagnostic procedures
the operation, quality and costs of a district hospital laboratory service in malawi
medical laboratory manual for tropical countries
diagnostics for the developing world
world health organization: global antimicrobial resistance surveillance system: manual for early implementation
key: cord-034746-uxhpufnv authors: nusshag, christian; stütz, alisa; hägele, stefan; speer, claudius; kälble, florian; eckert, christoph; brenner, thorsten; weigand, markus a.; morath, christian; reiser, jochen; zeier, martin; krautkrämer, ellen title: glomerular filtration barrier dysfunction in a self-limiting, rna virus-induced glomerulopathy resembles findings in idiopathic nephrotic syndromes date: 2020-11-05 journal: sci rep doi: 10.1038/s41598-020-76050-0 sha: doc_id: 34746 cord_uid: uxhpufnv
podocyte injury has recently been described as unifying feature in idiopathic nephrotic syndromes (ins). puumala hantavirus (puuv) infection represents a unique rna virus-induced renal disease with significant proteinuria. the underlying pathomechanism is unclear. we hypothesized that puuv infection results in podocyte injury, similar to findings in ins. we therefore analyzed standard markers of glomerular proteinuria (e.g.
immunoglobulin g [igg]), urinary nephrin excretion (podocyte injury) and serum levels of the soluble urokinase plasminogen activator receptor (supar), a proposed pathomechanically involved molecule in ins, in puuv-infected patients. hantavirus patients showed significantly increased urinary nephrin, igg and serum supar concentrations compared to healthy controls. nephrin and igg levels were significantly higher in patients with severe proteinuria than with mild proteinuria, and nephrin correlated strongly with biomarkers of glomerular proteinuria over time. congruently, electron microscopy analyses showed a focal podocyte foot process effacement. supar correlated significantly with urinary nephrin, igg and albumin levels, suggesting supar as a pathophysiological mediator in podocyte dysfunction. in contrast to ins, proteinuria recovered autonomously in hantavirus patients. this study reveals podocyte injury as main cause of proteinuria in hantavirus patients. a better understanding of the regenerative nature of hantavirus-induced glomerulopathy may generate new therapeutic approaches for ins.
characteristics of 26 patients with acute hantavirus infection. acr = albumin-to-creatinine ratio, crp = c-reactive protein, dpo = days post onset of first symptoms, gcr = gram creatinine, los = length of hospital stay, max = maximum, min = minimum, pcr = protein-to-creatinine ratio, scr = serum creatinine. bold values are statistically significant for p < 0.05.
(fig. 1), but the characteristic picture of tubular interstitial nephritis in the renal medulla (fig. 2). electron microscopy of glomeruli revealed enlarged visceral podocytes, a focal foot process effacement together with a mild thickening of the glomerular basement membrane (gbm) and vacuolization of podocytes in hantavirus patients (fig. 1, figure s1). immune deposits or further ultrastructural changes were not present.
mean gbm widths were 367.3 nm (± 69.6), 504.9 nm (± 74.1), 482.5 nm (± 92.6) and 685.2 nm (± 133.8) for the control and hantavirus biopsy samples i, ii and iii, respectively. maximum podocyte foot process width was highest in proteinuric patients i and iii with 2064.2 nm and 2301.0 nm and significantly lower in patient ii and control with 1998.3 nm and 1885.5 nm, respectively. however, due to the focal nature of the foot process effacement, mean podocyte width did not differ between patients and control (table s3). compared to the control, em analysis of proximal tubular cells showed relevant subcellular lesions indicated by severe apical cytoplasmic vacuolization (fig. 2). interestingly, these changes were predominantly observed in the two patients with severe proteinuria at the time of biopsy (hantavirus patient i and iii). biomarker levels on admission. on admission, patients suffering from hantavirus infection showed significantly increased urinary nephrin, igg, α1-mg and serum supar levels compared to healthy controls (fig. 3a). when further dividing hantavirus patients according to the severity of pcr on admission, patients with severe pcr showed significantly higher median nephrin and igg levels compared to patients with moderate pcr. remarkably, an almost dichotomous distribution of urinary biomarkers of a defective gfb (nephrin and igg) was observed when moderate and severe pcr were compared on admission, indicating substantial differences in permeability of the glomerular slit diaphragm at that specific time point. a trend towards higher supar levels was also seen in patients with severe proteinuria. correlation analyses between serum supar levels, maximum scr and scr levels within the first 48 h showed no significant correlations, whereas a significant positive correlation was found for serum supar levels and levels of urinary nephrin, pcr, acr and igg (table s4).
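The per-patient mean ± SD values reported for gbm width (20 representative measurements per patient) reduce to the ordinary sample mean and standard deviation. A dependency-free sketch; the measurement values in the test are made up, not taken from the study:

```python
def mean_sd(values):
    """Sample mean and sample standard deviation (n-1 denominator),
    as typically reported for repeated morphometric measurements."""
    n = len(values)
    if n < 2:
        raise ValueError("need at least two measurements")
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1)
    return m, var ** 0.5
```

In practice `statistics.mean`/`statistics.stdev` from the standard library do the same job.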
the normalization of nephrin and igg levels to urinary creatinine excretion led to similar results (figure s2). in contrast, the previous significant difference of absolute α1-mg levels between patients with moderate and severe pcr disappeared after creatinine normalization (figure s2). though urinary biomarker levels decreased in both groups over time, patients with severe pcr showed significantly higher levels of nephrin, igg, acr and pcr during the first 48 h after admission (table 2). the greatest absolute differences were seen for urinary biomarkers that indicate a defective glomerular barrier such as nephrin, igg and acr. at the same time, only minor absolute differences were observed for the tubular proteinuria marker α1-mg. when analyzing urinary nephrin concentrations over time in individual patients and in relation to the dpo instead of time after admission, urinary nephrin and pcr levels decreased almost in parallel. interestingly, the start of normalization of urinary nephrin and pcr levels preceded the first decline of scr by 48-72 h. furthermore, in patients with an available urine sample at the time of pcr normalization, the normalization of urinary nephrin levels tended to precede the normalization of pcr levels. figure 3b shows two exemplary biomarker courses of patients with acute hantavirus infection. in a next step, we analyzed the course of dpo-synchronized pcr values between patients with moderate and severe pcr in the entire cohort (table 3). patients with severe pcr showed significantly higher pcr levels up to dpo 8 compared to patients with moderate pcr, indicating a higher renal disease severity between both groups at the same dpo. due to the self-limiting character of the hantavirus disease and subsequent autonomous recovery of the gfb in both groups, no differences in pcr levels were seen beyond dpo 8 (table 3).
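The creatinine normalization applied to these urinary biomarkers is simple arithmetic: a spot-urine analyte concentration is expressed per gram of urinary creatinine. A minimal sketch; the concentrations in the test are illustrative, not values from the study:

```python
def mg_per_g_creatinine(analyte_mg_per_l: float, creatinine_mg_per_dl: float) -> float:
    """Normalize a spot-urine analyte concentration to urinary creatinine.

    Urinary creatinine is commonly reported in mg/dl; 1 g/l = 100 mg/dl,
    so dividing the analyte (mg/l) by creatinine (g/l) yields mg/gCr.
    """
    creatinine_g_per_l = creatinine_mg_per_dl / 100.0
    return analyte_mg_per_l / creatinine_g_per_l
```

This is why the normalization cancels dilution differences between spot samples: both analyte and creatinine are diluted by the same urine volume.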
the morphology and time course of pcr slopes within both groups was comparable, but the origin of the slope started at higher pcr levels in patients with severe proteinuria, especially before dpo 8. analyses showed a strong positive correlation between urinary nephrin levels and pcr, acr, igg, α1-mg and c-reactive protein (crp) levels (table 4). the highest correlation coefficients (r) were thereby achieved between nephrin levels on admission and biomarkers of (non-selective) proteinuria. in contrast, weaker correlations were seen for scr, especially on admission and for maximum levels. furthermore, nephrin levels on admission showed a moderate positive association with the length of hospital stay (los) and a moderate negative association with platelet count and the time of admission in terms of dpo. hemoglobin and leukocyte values showed no relevant correlation with urinary nephrin levels. to our knowledge, this is the first comprehensive study to investigate the role of direct podocyte damage in acute puuv infection in vivo. our data show a strong association between urinary nephrin levels and the extent of (non-selective) glomerular proteinuria, suggesting that hantavirus infection causes a pronounced podocyte damage and subsequent impairment of the gfb. the significant findings in electron microscopy analyses were a focal foot process effacement, podocyte vacuolization and apical tubular vacuolization (indicating massive proteinuria), which all are known as typical histopathological features in ins 16, 17, 31-35. however, the pathomechanical role and underlying mechanisms of the observed podocyte vacuolization need further clarification. while differences between patients with moderate and severe proteinuria were preserved for urinary nephrin and igg levels after normalization to urinary creatinine excretion, differences in α1-mg levels were no longer present.
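The coefficients in these correlation analyses are Spearman's rank correlations (table 4): Pearson's correlation computed on ranks, with ties assigned their average rank. A self-contained sketch of the computation, not the authors' actual analysis code:

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_r(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice `scipy.stats.spearmanr` returns the same coefficient together with a p value.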
this further supports the idea that proteinuria in hfrs is predominantly of glomerular origin. furthermore, when individual patients were analyzed, the normalization of urinary nephrin levels tended to precede the normalization of proteinuria. the highest correlation coefficients (r) were achieved between urinary nephrin levels on admission and pcr, acr and igg levels within 48 h. both observations suggest urinary nephrin levels as an early indicator of gfb dysfunction and a direct pathophysiological connection between the current impairment of gfb integrity by hantavirus infection and the subsequent extent and clinical course of proteinuria. as a further similarity to ins 16, 18, hantavirus patients showed significantly elevated serum supar levels in comparison to healthy controls. supar levels correlated significantly with urinary nephrin, pcr, acr and igg levels, but not with scr. it was generally believed that hantaviruses predominantly infect endothelial cells, leading to capillary permeability due to a loss of cell-to-cell contacts, but without a direct cytopathic effect 6. the deregulation of systemic angiopoietin levels 9 and a vascular endothelial growth factor a (vegf-a)-induced downregulation of ve-cadherins accompanied by a β3 integrin-mediated dysregulation of vegf receptor 2 (vegfr2) are suggested mechanisms 19, 36-38. though these findings may explain major symptoms and complications of hfrs such as capillary leakage and pulmonary edema, the exact cause of (nephrotic) proteinuria is still poorly understood. vascular endothelial barrier dysfunction or tubular damage, as discussed by some authors 13, 15, does not adequately explain the extent of proteinuria.
we have recently shown in vitro that hantaviruses additionally infect tubular epithelial cells, glomerular endothelial cells and especially podocytes, leading to disruption of cell-to-cell contacts and impaired intra-cellular integrity with rearrangements of the podocyte cytoskeleton 7, 39, 40. here, we show for the first time in vivo an interdependent relationship between ultrastructural and functional impairments of the gfb and a direct podocyte damage as indicated by increased nephrin excretion and electron microscopy. in addition, urinary nephrin, pcr, acr and igg levels correlated significantly with serum supar values. to date, one other study showed significantly elevated blood supar levels and their association with hantavirus disease severity but did not include nephrinuria and the extent of proteinuria in their analysis 19. supar is suggested to interfere with the cross-directional signaling between the glomerular basement membrane (gbm) and podocytes, thereby affecting gfb integrity 16. in addition, alfano et al. recently showed that supar downmodulates nephrin expression in podocytes in vitro and in vivo 41. together, these mechanisms may disturb podocyte-gbm interaction, where rearrangements of the podocytic cytoskeleton may result in podocyte damage, subsequent foot process effacement and gfb dysfunction with release of intercellular gfb components such as nephrin into the urine 7, 16, 18, 39.
table 3. comparison of proteinuria in relation to the days post onset of first symptoms. dpo = days post onset of first symptoms, gcr = gram creatinine, pcr = protein-to-creatinine ratio. bold values are statistically significant for p < 0.05.
table 4. spearman's correlation of urinary nephrin levels on admission with parameters of hantavirus disease activity. acr = albumin-to-creatinine ratio, α1-mg = α1-microglobulin, ci = confidence interval, crp = c-reactive protein, dpo = days post onset of first symptoms, igg = immunoglobulin g, r = correlation coefficient, s-alb = serum albumin, scr = serum creatinine. bold values are statistically significant for p < 0.05.
further studies are required to clarify whether supar is a directly involved pathomechanical mediator in hantavirus-induced podocyte injury. nevertheless, a unique feature in puuv infection is that proteinuria and kidney function usually recover autonomously in all patients 8, while therapeutic outcomes in ins are variable and autonomous recovery is rare 16, 17. hantavirus-induced proteinuria usually peaks around dpo 5 and normalizes within 1-3 weeks 13. but, due to a varying hantavirus disease severity, the actual extent of proteinuria differs in patients 13. the same observation applies to our cohort. patients with severe pcr on admission still showed significantly higher pcr levels compared to patients with moderate pcr, when pcr values were matched based on the dpo instead of the time of hospital admission. scr in turn peaks 4-5 days after the peak of proteinuria 13. this tempts authors to claim that proteinuria predicts the severity of emerging aki, by showing an association between glomerular proteinuria and maximum scr levels 13. however, it is generally accepted that scr reflects rapid changes in renal function with a latency of at least 24-72 h (the time until a new steady state is reached after a single renal insult) 42. we therefore hypothesize that the extent of maximum renal impairment is at least in part already present at the time of maximum proteinuria but is indicated with a delay by scr.
our results support this hypothesis by showing a stronger correlation between pcr values on admission and scr concentrations after 24-48 h (24 h: r = 0.69, p < 0.0001; 48 h: r = 0.81, p < 0.0001) than for maximum scr levels or scr levels on admission (admission: r = 0.27, p = 0.1837; maximum: r = 0.56, p = 0.029). other accepted markers of hantavirus disease severity, such as leukocyte and platelet count, crp and hemoglobin levels, showed weaker correlations with nephrin. this suggests gfb dysfunction once more as an independent, subcellular disease manifestation of hantavirus infection in addition to capillary leakage and aki. both impairment of renal function and proteinuria must probably be seen as two individual disease manifestations, the extent of each depending on the current hantavirus disease activity. there are study limitations, which need to be addressed. first, kidney biopsy material was only available for three hantavirus patients in our database who were initially suspected to have kidney diseases other than hantavirus infection. routine kidney biopsies could not be justified in terms of a risk-benefit analysis, due to the mostly reliable hantavirus diagnostics by serological tests and the self-limiting disease character with exclusively symptomatic therapy. second, the use of a creatinine-normalized description of proteinuria/albuminuria in hantavirus patients may overestimate total protein excretion relative to 24-h volume measurements because urinary creatinine excretion decreases with aki. however, we are confident that this does not affect the results of our study, as our results were extremely consistent when absolute or creatinine-normalized proteinuria parameters were used in the analyses of glomerular proteinuria. third, the reported minimum or maximum biomarker levels in our study may differ from the actual absolute minimum or maximum values that may have occurred prior to hospitalization.
in summary, puuv hantavirus infection shows clinical and ultrastructural similarities to ins, but with the unique feature of an autonomous recovery. this special feature highlights the potential of further comparative studies with ins and other rna virus-induced glomerulopathies in order to improve our understanding of regenerative mechanisms in the context of gfb dysfunctions and potential future therapeutic approaches. study design and patient population. this retrospective study was conducted at the department of nephrology at heidelberg university hospital and approved by the local ethics committee of the medical faculty of heidelberg. all patients with acute hantavirus infection hospitalized in 2017 were included for further analyses. in total, 26 patients without pre-existing kidney diseases and with serologically proven, acute puuv infection were analyzed. eighteen age- and gender-matched volunteers served as controls (table s1). written informed consent was obtained from all participants. all methods or experiments were performed in accordance with relevant guidelines and regulations. the overall median creatinine-normalized proteinuria (protein-to-creatinine ratio, pcr) at the time of admission was used to categorize patients into two disease severity groups (moderate pcr vs. severe pcr): (a) moderate pcr (≤ 2485 mg/g creatinine, gcr) and (b) severe pcr (> 2485 mg/gcr). baseline serum creatinine (scr) was defined as the lowest value measured within 6 months before or after an acute hantavirus infection. maximum pcr or scr values were defined by the highest value measured during hospitalization or recorded in medical reports immediately prior to hospitalization. data collection and laboratory methods. patient characteristics and presented laboratory parameters such as scr, urinary albumin-to-creatinine ratio (acr) and pcr were obtained from medical records. urine nephrin, igg, α1-mg and serum supar levels were measured retrospectively.
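The severity grouping defined in the study design (admission PCR dichotomized at the cohort median of 2485 mg/gCr) can be sketched as a one-line predicate; the example values in the test are hypothetical, not patient data:

```python
SEVERE_PCR_CUTOFF_MG_PER_GCR = 2485  # cohort median admission PCR, as defined above

def pcr_severity(pcr_mg_per_gcr: float) -> str:
    """Classify admission proteinuria per the study's grouping:
    'moderate' for PCR <= cutoff, 'severe' above it."""
    return "moderate" if pcr_mg_per_gcr <= SEVERE_PCR_CUTOFF_MG_PER_GCR else "severe"
```

Using the cohort median as cutoff splits the sample into two roughly equal groups by construction, which is why the groups are balanced in size rather than defined by a clinical threshold.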
nephrin and supar levels were quantified by enzyme-linked immunosorbent assay (elisa) as instructed by the manufacturer (human nphn antibody elisa kit, elabscience biotech co. ltd, wuhan, china; supar elisa kit, r&d systems, minneapolis, mn, usa). urinary igg and α1-mg concentrations were measured in the accredited central laboratory of the heidelberg university hospital. light and electron microscopy. three biopsy samples of patients with acute hantavirus infection were analyzed. biopsy samples (heidelberg biopsy register) were fixed in glutaraldehyde and embedded in epon-araldite after post-fixation with osmium tetroxide and analyzed by transmission electron microscopy (jem-1400, jeol, freising, germany). biopsy material from a time-zero biopsy of a living kidney allograft served as control. all analyses were performed at the institute of pathology (heidelberg university hospital, germany). the gbm width was evaluated using 20 representative measurements per patient. to account for the focal nature
acute kidney injury in critically ill patients with covid-19
acute kidney injury in patients hospitalized with covid-19
multiorgan and renal tropism of sars-cov-2
renal histopathological analysis of 26 postmortem findings of patients with covid-19 in china
kidney disease is associated with in-hospital death of patients with covid-19
uncovering the mysteries of hantavirus infections
virus- and cell type-specific effects in orthohantavirus infection
clinical course and long-term outcome of hantavirus-associated nephropathia epidemica
deregulation of levels of angiopoietin-1 and angiopoietin-2 is associated with severe courses of hantavirus infection
cardiopulmonary involvement in puumala hantavirus infection
renal biopsy findings and clinicopathologic correlations in nephropathia epidemica
young man with kidney failure and hemorrhagic interstitial nephritis
glomerular proteinuria predicts the severity of acute kidney injury in puumala hantavirus-induced tubulointerstitial nephritis
increased glomerular permeability in patients with nephropathia epidemica caused by puumala hantavirus
proteinuria and the clinical course of dobrava-belgrade hantavirus infection
molecular stratification of idiopathic nephrotic syndrome
minimal change disease and idiopathic fsgs: manifestations of the same disease
circulating urokinase receptor as a cause of focal segmental glomerulosclerosis
plasma levels of soluble urokinase-type plasminogen activator receptor associate with the clinical severity of acute puumala hantavirus infection
soluble urokinase plasminogen activator receptor (supar) as an early predictor of severe respiratory failure in patients with covid-19 pneumonia
soluble urokinase receptor is a biomarker of cardiovascular disease in chronic kidney disease
serum level of soluble urokinase-type plasminogen activator receptor is a strong and independent predictor of survival in human immunodeficiency virus infection
collapsing glomerulopathy in hiv and non-hiv patients: a clinicopathological and follow-up study
soluble urokinase receptor and the kidney response in diabetes mellitus
soluble urokinase plasminogen activator receptor is a predictor of incident non-aids comorbidity and all-cause mortality in human immunodeficiency virus type 1 infection
modification of kidney barrier function by the urokinase receptor
what is the role of soluble urokinase-type plasminogen activator in renal disease?
hantavirus infection with severe proteinuria and podocyte foot-process effacement
electron microscopy of nephropathia epidemica. renal tubular basement membrane
urine podocyte mrnas, proteinuria, and progression in human glomerular diseases
scanning electron microscopy of the nephrotic kidney
diagnostic and prognostic significance of glomerular epithelial cell vacuolization and podocyte effacement in children with minimal lesion nephrotic syndrome and focal segmental glomerulosclerosis: an ultrastructural study
podocytes undergo phenotypic changes and express macrophagic-associated markers in idiopathic collapsing glomerulopathy
podocyte autophagy is associated with foot process effacement and proteinuria in patients with minimal change nephrotic syndrome
in nephrotic syndromes podocytes synthesize and excrete proteins into the tubular fluid: an electron and ion microscopic study
elevated vegf levels in pulmonary edema fluid and pbmcs from patients with acute hantavirus pulmonary syndrome
andes virus disrupts the endothelial cell barrier by induction of vascular endothelial growth factor and downregulation of ve-cadherin
hantaviruses direct endothelial cell permeability by sensitizing cells to the vascular permeability factor vegf, while angiopoietin 1 and sphingosine 1-phosphate inhibit hantavirus-directed permeability
motility of human renal cells is disturbed by infection with pathogenic hantaviruses
pathogenic old world hantaviruses infect renal glomerular and tubular cells and induce disassembling of cell-to-cell contacts
full-length soluble urokinase plasminogen activator receptor down-modulates nephrin expression in podocytes
issues of acute kidney injury staging and management in sepsis and critical illness: a narrative review
christian nusshag was funded by the physician scientist programme of heidelberg faculty of medicine. author contributions: n.c. conceived the study design, was responsible for acquisition, analysis and interpretation of data and drafted the article. s.a. assisted with acquisition, analysis and interpretation of data and assisted in drafting the article.
h.s., s.c., k.f., e.c., b.t., w.m.a. and m.c. contributed to the acquisition and interpretation of data and revised the article critically. z.m., r.j. and k.e. contributed to the conception of the study, assisted with analysis and interpretation of data and revised the article critically. all authors approved the final version of the article for publication. open access funding enabled and organized by projekt deal. reiser j. is a co-founder and shareholder of trisaq, a biotechnology company developing therapies for renal diseases. the authors declare no competing interests. supplementary information is available for this paper at https://doi.org/10.1038/s41598-020-76050-0. correspondence and requests for materials should be addressed to c.n.
key: cord-005633-oyhpwut7 authors: oppert, michael; reinicke, albrecht; gräf, klaus-jürgen; barckow, detlef; frei, ulrich; eckardt, kai-uwe title: plasma cortisol levels before and during "low-dose" hydrocortisone therapy and their relationship to hemodynamic improvement in patients with septic shock date: 2000-11-18 journal: intensive care med doi: 10.1007/s001340000685 sha: doc_id: 5633 cord_uid: oyhpwut7 objectives: to compare cortisol levels during "low-dose" hydrocortisone therapy to basal and acth-stimulated endogenous levels and to assess whether clinical course and the need for catecholamines depend on cortisol levels and/or pretreatment adrenocortical responsiveness. design and setting: prospective observational study in a medical icu of a university hospital. patients: twenty consecutive patients with septic shock and a cardiac index of 3.5 l/min or higher, started on "low-dose" hydrocortisone therapy (100 mg bolus, 10 mg/h for 7 days and subsequent tapering) within 72 h of the onset of shock. measurements and results: basal total and free plasma cortisol levels ranged from 203 to 2169 and from 17 to 372 nmol/l. in 11 patients cortisol production was considered "inadequate" because there was neither a response to acth of at least 200 nmol/l nor a baseline level of at least 1000 nmol/l. following the initiation of hydrocortisone therapy total and free cortisol levels increased 4.2- and 8.5-fold to median levels of 3587 (interquartile range 2679–5220) and 1210 (interquartile range 750–1846) nmol/l on day 1, and thereafter declined to median levels of 1310 nmol/l and 345 nmol/l on day 7. patients with "inadequate" steroid production could be weaned from vasopressor therapy significantly faster, although their plasma free cortisol concentrations during the hydrocortisone treatment period did not differ.
conclusions: (a) during proposed regimens of "low-dose" hydrocortisone therapy, initially achieved plasma cortisol concentrations considerably exceed basal and acth-stimulated levels. (b) cortisol concentrations decline subsequently, despite continuous application of a constant dose. (c) "inadequate" endogenous steroid production appears to sensitize patients to the hemodynamic effects of a "therapeutic rise" in plasma cortisol levels.
adrenal insufficiency per se can lead to a high-output circulatory failure that resembles septic shock [3]. in addition, animal experiments [4] and clinical observations [5, 6, 7, 8] have shown that diminished steroid levels during sepsis are associated with adverse prognosis. in experimental settings in humans, the administration of hydrocortisone before or during endotoxin challenge significantly reduces the inflammatory response [9]. on the other hand, some studies have shown that mortality is higher with increased levels of cortisol in patients with sepsis and septic shock [10, 11]. a recent prospective cohort study in 189 patients with septic shock found that patients with high cortisol levels and a reduced response to adrenocorticotropic hormone (acth, corticotropin) have the worst outcome [12]. several attempts to improve the outcome of septic patients with high doses of steroids (approx. 2–8 g methylprednisolone in 24 h) have failed [13, 14, 15, 16, 17]. one meta-analysis concluded that steroids at such doses may even be harmful, since they appear to increase mortality in patients with overwhelming infection [16]. in contrast, two open studies [18, 19] and two recent randomized controlled trials [20, 21] have shown "low-dose" steroid therapy to have beneficial effects on hemodynamics and outcome in patients with septic shock, using no more than 300 mg hydrocortisone daily administered either as bolus injections of 100 mg three times daily or as continuous infusion.
this work greatly stimulated interest in using corticosteroids for septic patients, but little is known about the cortisol concentrations achieved by such a "low-dose" regimen and their relationship to endogenous basal and acth-induced cortisol levels. in fact, a number of studies have considered similar moderate amounts of hydrocortisone as a "supraphysiological" [18, 20], a "physiological" [19], a "replacement" [22], or a "stress" dose [21]. moreover, it remains uncertain whether the response to hydrocortisone depends on endogenous steroid secretion. while some investigators believe that hemodynamic improvement after moderate doses of hydrocortisone detects adrenal insufficiency [18], a recent randomized trial by bollaert et al. [20] suggested that the effectiveness of hydrocortisone is unrelated to adrenocortical function. to obtain further insight into the pharmacological and pathophysiological basis of the proposed "low-dose" steroid treatment in septic patients, we performed a prospective observational study in 20 medical intensive care patients with septic shock receiving continuous hydrocortisone therapy (10 mg/h). total and free cortisol levels were determined under basal conditions, after acth stimulation, and during the subsequent course of hydrocortisone therapy, to compare the levels under therapy with the endogenous production and to assess whether clinical course and hemodynamic response differ in patients with and without "inadequate" endogenous production. adult patients with septic shock were included who were being treated in the medical intensive care unit of the virchow klinikum. demographic data, the focus of sepsis, and causative organisms are presented in table 1.
according to the american college of chest physicians/society of critical care medicine consensus committee [1], septic shock was defined as sepsis with hypotension of 90 mmhg or less, or a drop of 40 mmhg or more despite adequate fluid resuscitation, along with the presence of perfusion abnormalities. furthermore, patients were only included with evidence of a causative organism or focus of infection and a cardiac index of 3.5 l min⁻¹ m⁻² or greater. patients with pancreatitis, infection with human immunodeficiency virus, and those in whom withholding maximal therapy was considered were excluded. informed consent was obtained from next of kin. all patients were monitored hemodynamically with an arterial, a central venous, and a swan-ganz pulmonary artery catheter. after inclusion in the study, dopamine, when given in daily doses above 240 mg, was switched to noradrenaline and dobutamine when possible, depending on cardiac output and peripheral vascular resistance, in order to make the needed doses of vasopressors comparable. tapering of catecholamines and fluid expansion was guided by hemodynamic and pulmonary function to optimize tissue perfusion and gas exchange. vasopressor therapy was usually titrated to achieve a mean arterial pressure of 70 mmhg or higher and a cardiac index of 3 l min⁻¹ m⁻² or higher. the pulmonary artery occlusion pressure was aimed to be between 15 and 18 mmhg. within 72 h following the onset of septic shock, a "short" corticotropin test was performed in all patients [23, 24]. for this, plasma samples were drawn before and 30 min after the administration of 0.25 mg of 1-24-corticotropin (synacthen; ciba, switzerland; corresponding to 25 iu acth) for measurement of basal and stimulated cortisol levels.
cortisol production was defined as "adequate" when the baseline level was greater than 500 nmol/l (to convert values for cortisol to µg/dl, divide by 27.5) and the increase after acth was greater than 200 nmol/l, or when the baseline level was already above 1000 nmol/l (see "discussion"). following the corticotropin test, all patients were given a bolus injection of 100 mg hydrocortisone (pharmacia & upjohn, germany), followed by a continuous infusion. this infusion was given at a rate of 10 mg/h for 7 days, was reduced to 6 mg/h on day 8, and then reduced by 2 mg/h per day until it was discontinued on day 10. plasma samples for determination of plasma cortisol and transcortin levels were drawn daily between 6 and 8 a.m. measurements were performed en bloc, and the results were therefore not available to the physicians treating the patients. plasma cortisol was measured by solid-phase radioimmunoassay (biermann, bad nauheim, germany). plasma transcortin was determined using a competition radioimmunoassay (medgenix diagnostics, fleurus, belgium). the concentration of free cortisol was calculated from total cortisol and transcortin concentrations using the following formula: free cortisol (µmol/l) = (z² + 0.0122 c)^(1/2) − z, with z = 0.0167 + 0.182 (t − c) [c = total cortisol (µmol/l); t = transcortin (µmol/l)] [25]. for comparison of differences between groups, the mann-whitney u test was used. fisher's exact test was used to test for dependencies between groups. a p value less than 0.05 was considered significant. all statistics were computed using spss for windows, version 7.0. unless otherwise indicated, values are presented as medians with the interquartile range in parentheses. basal and stimulated cortisol levels of the 20 patients studied are given in table 1. both basal and stimulated cortisol levels varied considerably, ranging from 203 to 2169 and from 269 to 2253 nmol/l, respectively.
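the cited coolens calculation of free cortisol from total cortisol and transcortin can be sketched as follows; the 0.0122 and 0.0167/0.182 coefficients are the values published by coolens et al. (the printed coefficient appears garbled in this extraction), and the function and variable names are ours:

```python
import math

def free_cortisol_umol_l(total_c: float, transcortin_t: float) -> float:
    """Free cortisol (umol/l) via the Coolens equation, given total
    cortisol and transcortin concentrations in umol/l."""
    z = 0.0167 + 0.182 * (transcortin_t - total_c)
    return math.sqrt(z ** 2 + 0.0122 * total_c) - z
```

for a total cortisol of 1.0 µmol/l (1000 nmol/l) and a transcortin level of 0.6 µmol/l, this yields roughly 0.18 µmol/l free cortisol, i.e. a free fraction of about 18%, in line with the 13.3–24.6% free fractions reported in this study.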
cortisol production was considered "adequate" in 9 patients, because their cortisol level increased by at least 200 nmol/l following synacthen (nos. 8, 13, 14, 17, 20), which most investigators consider a normal response [23], or because the pre-acth concentration was already above 1000 nmol/l (nos. 12, 15, 16, 19; see "discussion"). in the remaining 11 patients, endogenous cortisol production was defined as "inadequate" because there was neither a response to acth of at least 200 nmol/l nor a baseline concentration of more than 1000 nmol/l. in one female patient (no. 2), cortisol levels were strikingly low. she had no evidence, however, of preexisting adrenal insufficiency, and after recovery her endogenous basal cortisol level increased to 545 nmol/l (day 14). the median basal levels of total cortisol were 780 nmol/l (601–883) in patients with "inadequate" and 1068 nmol/l (706–1769) in patients with "adequate" cortisol production (p < 0.05; table 2). the concentrations of free cortisol, as derived from the determination of transcortin and total cortisol levels, were 263 nmol/l (154–381) and 104 nmol/l (79–154) in the two groups (p < 0.01), and thus amounted to about 24.6% and 13.3% of the total cortisol concentration. it should be noted that the formula does not include albumin, which also binds cortisol, at a lower affinity. although this may lead to a slight overestimation of the fraction of free cortisol, plasma albumin levels were low (2.85 g/dl, 2.62–3.53) and did not change significantly during the course of the study. following acth administration, the increment in median total cortisol level was 16.1% and 5.1%, respectively. patients in the two groups did not differ with respect to their demographic data, disease severity as assessed by apache ii scores, organ dysfunction indices, the type of causative organisms, hemodynamics, or catecholamine doses.
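the study's classification rule can be sketched as a small predicate (thresholds taken from the text; the function name is ours):

```python
def production_adequate(basal_nmol_l: float, stimulated_nmol_l: float) -> bool:
    """'Adequate' endogenous cortisol production: the ACTH-stimulated
    increment reaches 200 nmol/l, or the basal level already exceeds
    1000 nmol/l (interpreted as maximal endogenous ACTH drive)."""
    increment = stimulated_nmol_l - basal_nmol_l
    return increment >= 200 or basal_nmol_l > 1000
```

with these two criteria, a patient whose cortisol rises from 700 to 950 nmol/l is classified as adequate (increment 250 nmol/l), while a rise from 800 to 900 nmol/l is not.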
plasma cortisol levels following hydrocortisone therapy
the time course of total and free cortisol levels during the hydrocortisone treatment period in patients with and without "adequate" endogenous cortisol production is illustrated in fig. 1. free cortisol levels that were calculated from total cortisol and transcortin concentrations rose 8.5-fold to a median of 1210 nmol/l (750–1846) following the initiation of hydrocortisone therapy and did not differ between patients with and those without "adequate" endogenous cortisol production (fig. 1, lower panel). before the initiation of hydrocortisone therapy, patients on renal replacement therapy had lower total and free cortisol levels [620 (545–803) and 95 (79–196) vs. 945 (853–1503) and 262 (129–337) nmol/l; p < 0.04]. however, during hydrocortisone therapy cortisol levels did not differ between patients receiving and those not receiving hemodialysis or hemofiltration, with the exception of day 2, when free cortisol was even higher in patients on renal replacement therapy [1057 (802–1481) vs. 430 (290–924) nmol/l; p < 0.01]. thus, as expected from the molecular size of cortisol, blood purification did not seem to contribute to differences, or to the reduction in cortisol levels with prolonged therapy.
hemodynamics and outcome following hydrocortisone therapy
in all but two patients, catecholamines could be reduced during the observation period. although systolic and mean arterial blood pressure did not differ between patients with and without "inadequate" cortisol production (table 3), the rate of reduction in noradrenaline did differ. figure 2 illustrates that patients with "inadequate" cortisol production required 15% (5–25%) of the initial noradrenaline dose on day 2. in contrast, in patients with "adequate" cortisol production noradrenaline could only be reduced by 50% (21–88%; p < 0.01).
in addition, patients with "inadequate" cortisol production were free of vasopressor support significantly earlier than patients with "adequate" cortisol production (table 3). there was, however, no difference in survival or the course of inflammatory parameters between the two groups (table 3). to identify patient characteristics other than endogenous cortisol production that determine hemodynamic improvement during hydrocortisone therapy, patients were divided into two groups according to whether the catecholamine dose could be reduced by at least 70% within 48 h. fourteen of the 20 patients investigated fulfilled this criterion. as shown in table 4, the only detectable difference between patients with and without rapid weaning from catecholamines was their endogenous cortisol production; 11 of 14 patients with rapid hemodynamic improvement, but none of the 6 patients with continued high catecholamine dependence, showed inadequate endogenous cortisol production. accordingly, free and total cortisol levels before the initiation of hydrocortisone therapy were significantly lower in patients with early improvement. during the course of corticosteroid therapy, however, cortisol levels were very similar in the two groups. there was also no difference between the groups with respect to patient demographics, the disease severity score, or the course of inflammatory parameters. the pretreatment cortisol status did not differ between survivors (n = 11) and nonsurvivors (n = 9). respective levels for total cortisol were 800 (560–1662) and 883 nmol/l (700–955). free cortisol levels also did not differ between the two groups [103 (79–376) vs. 187 (115–242) nmol/l]. endogenous cortisol production was considered "inadequate" in 6 of 11 survivors and in 5 of 9 nonsurvivors. this study confirms that endogenous plasma cortisol concentrations are increased in patients with septic shock, but that the degree of increase is highly variable.
only one of 20 patients had a post-acth cortisol plasma level less than 500 nmol/l, which supports the concept that absolute adrenocortical deficiency is rare in critically ill patients [5, 6, 7, 26, 27, 28, 29, 30, 31]. whether increases in cortisol levels are appropriate for the severity of illness or reflect functional hypoadrenalism is less clear. the rapid acth test is widely used as a simple method to identify adrenocortical hyporesponsiveness, but controversy exists as to how diagnostic criteria should be derived from the basal cortisol level, the stimulated cortisol level, or their difference [24]. three [6, 7, 32] of four previous studies [6, 7, 28, 32] using a rapid acth test to investigate adrenocortical function in septic shock used an increment of at least 200–250 nmol/l to define "adequate" responsiveness. however, in healthy controls, in large unselected patient populations, and also in patients with septic shock [7, 24] (table 1), cortisol increments are inversely related to basal levels. as discussed previously [24, 29, 31, 33, 34], it appears likely that disproportionately low increments from high basal cortisol concentrations are due to maximal endogenous acth and are therefore not indicative of adrenocortical hyporesponsiveness. we have thus arbitrarily defined endogenous cortisol production as "adequate" not only in patients who responded significantly to acth, but also when their baseline level was above 1000 nmol/l. using these criteria, endogenous cortisol production appeared to be "inadequate" in approximately one-half of our patients. it should be noted, however, that an elevation in plasma cortisol levels above 1000 nmol/l in septic patients may reflect not only increased production but could also indicate decreased hepatic cortisol clearance [35, 36].
following cortisol therapy with a dose similar to previous trials (340 mg/day) [18, 19, 20, 21], peak concentrations of total cortisol were almost threefold higher than the post-acth level in patients considered to have an "adequate" endogenous response. these peak concentrations corresponded to a 6.2-fold and 9.1-fold increase in free cortisol concentrations in patients with and without "adequate" endogenous production. clearly, therefore, this dose must be considered supraphysiological. interestingly, although the median total cortisol level remained above 1000 nmol/l thereafter, the levels of both total and free cortisol declined progressively during the course of therapy, despite the administration of a constant dose of 10 mg/h. a similar, albeit less pronounced, reduction in cortisol levels during "low-dose" hydrocortisone therapy has also been reported by briegel et al. [22]. however, in that study the dose was tapered after only 1–5 days; therefore, in contrast to our observations, the decline could be attributed to the reduction in steroid dose. even if one assumes that in the present investigation the administration of acth and the initial bolus injection of 100 mg contributed to the peak concentration on day 1, and that endogenous cortisol production was subsequently suppressed, these two effects cannot entirely explain the reduction in the median level from 3587 to 1310 nmol/l between days 1 and 7. moreover, it has been shown that cortisol production in septic shock is in fact not suppressible by steroids [34]. it is possible, therefore, that the metabolism of hydrocortisone changes during the treatment period. earlier studies have shown that cortisol extraction from blood is decreased, and that its half-life is prolonged, during septic shock [35, 36]. the observed decline in cortisol levels during the treatment period could reflect a reversal of these alterations.
our intention in this study was not to compare the efficacy of cortisol therapy to untreated controls. it is nevertheless noteworthy that inotropes could be reduced rather quickly after initiation of the hydrocortisone infusion. the reduction in catecholamine dose was comparable [21] or even faster than in previous studies using similar doses of cortisol [18, 20], which may be related to the fact that we started therapy earlier during the septic course. in contrast to previous reports [6, 7, 28], mortality in patients with "inadequate" endogenous cortisol was not higher (table 2), and survivors and nonsurvivors did not have significantly different steroid levels before hydrocortisone therapy (fig. 2: time course of noradrenaline needs in patients with and without adequate endogenous cortisol production; two days after the beginning of "low-dose" hydrocortisone therapy, patients with inadequate endogenous production needed significantly less vasopressor support). this observation is in accordance with the results of a recent randomized controlled trial by bollaert et al. [20] and is compatible with the growing evidence that "low-dose" therapy reverses the putative adverse risk factor of inappropriate cortisol production [18, 19, 20, 21, 22]. although patients with "adequate" and "inadequate" endogenous production were clinically indistinguishable at the onset of therapy (table 2), and did not have significantly different levels of free cortisol during hydrocortisone therapy (fig. 1), those with an "inadequate" endogenous response showed a significantly faster reduction in their vasopressor support (fig. 2). in addition, the only detectable difference between patients with and without rapid hemodynamic improvement was their basal cortisol status (table 4). similarly, briegel et al. [18] found that 6 of 14 patients weaned from catecholamines within 48 h of hydrocortisone therapy had lower cortisol levels before steroid treatment. in addition, annane et al.
[32] reported that patients in septic shock with inadequate as compared to adequate cortisol production had lower baseline responsiveness to noradrenaline, but showed a more marked improvement in vasopressor sensitivity in response to a hydrocortisone bolus. it appears, therefore, that the greatest benefit of hydrocortisone therapy is achieved in patients with blunted endogenous production, although the exact mechanisms of this sensitization remain to be elucidated. in contrast to the hemodynamic changes, the febrile response and the course of c-reactive protein and procalcitonin levels or white blood cell counts did not depend on the pretreatment cortisol production (table 3). thus, although it has been suggested that hydrocortisone treatment reduces indicators of the acute-phase response [22], these effects appear not to be directly related to the hemodynamic effects. we also found no association between the hemodynamic response and survival rates (table 4), but larger controlled trials are necessary to exclude or verify effects on such outcome parameters. in conclusion, there are three main findings of this study. firstly, during proposed regimens of "low-dose" hydrocortisone therapy, plasma cortisol concentrations are achieved initially that considerably exceed basal and acth-stimulated levels; thus, in order to achieve substitution of deficient endogenous production, doses even lower than those used so far may be sufficient. secondly, during continuous administration of hydrocortisone, cortisol levels decline, suggesting that changes in cortisol metabolism have a significant impact on its plasma concentration. thirdly, "inadequate" endogenous steroid production appears to sensitize patients to the hemodynamic effects of a rise in plasma cortisol levels.
a 3-level prognostic classification in septic shock based on cortisol levels and cortisol response to corticotropin
the effects of high-dose corticosteroids in patients with septic shock
a controlled clinical trial of high-dose methylprednisolone in the treatment of severe sepsis and septic shock
effect of high-dose glucocorticoid therapy on mortality in patients with clinical signs of systemic sepsis
corticosteroid treatment for sepsis: a critical appraisal and metaanalysis of the literature
steroid controversy in sepsis and septic shock: a metaanalysis
contribution of cortisol deficiency to septic shock
abrupt hemodynamic improvement in late septic shock with physiological doses of glucocorticoids
stress doses of hydrocortisone reverse hyperdynamic septic shock: a prospective, randomized, double blind
low-dose hydrocortisone infusion attenuates the systemic inflammatory response syndrome
screening for adrenocortical insufficiency with cosyntropin
rapid adrenocorticotropic hormone test in practice
clinical use of unbound plasma cortisol as calculated from total cortisol and corticosteroid binding globulin
adrenocortical function: an indicator of severity of disease and survival in chronic critically ill patients
spectrum of serum cortisol response to acth in icu patients
adrenal insufficiency occurring during septic shock: incidence, outcome, and relationship to peripheral cytokine levels
plasma cortisol levels in patients with septic shock
a comparison of the adrenocortical response during septic shock and after complete recovery
adrenocortical function during septic shock
gajdos p (1998) impaired pressor sensitivity to noradrenaline in septic shock patients with and without impaired adrenal function reserve
adrenal insufficiency
hypercortisolism in septic shock is not suppressible by dexamethasone infusion
comparative studies on adrenal cortical function and cortisol metabolism in healthy adults and in patients with shock due to infection
secretion and metabolism of cortisol after injection of endotoxin

key: cord-264427-frrq4h39 authors: huang, ling; liu, ziyi; li, hongli; wang, yangjun; li, yumin; zhu, yonghui; ooi, maggie chel gee; an, jing; shang, yu; zhang, dongping; chan, andy; li, li title: the silver lining of covid-19: estimation of short-term health impacts due to lockdown in the yangtze river delta region, china date: 2020-07-07 journal: geohealth doi: 10.1029/2020gh000272 sha: doc_id: 264427 cord_uid: the outbreak of covid-19 in china has led to massive lockdowns in order to reduce the spread of the epidemic and control human-to-human transmission. subsequent reductions in various anthropogenic activities have led to improved air quality during the lockdown. in this study, we apply a widely used exposure-response function to estimate the short-term health impacts associated with pm2.5 changes over the yangtze river delta (yrd) region due to covid-19 lockdown. concentrations of pm2.5 during the lockdown period were reduced by 22.9% to 54.0% compared to the pre-lockdown level. estimated pm2.5-related daily premature mortality during the lockdown period is 895 (95% confidence interval: 637–1081), which is 43.3% lower than the pre-lockdown period and 46.5% lower than the averages of 2017–2019. according to our calculation, the total number of avoided premature deaths associated with pm2.5 reduction during the lockdown is estimated to be 42.4 thousand over the yrd region, with shanghai, wenzhou, suzhou (jiangsu province), nanjing, and nantong being the top five cities with the largest health benefits. avoided premature mortality is mostly attributable to reduced deaths associated with stroke (16.9 thousand, accounting for 40.0%), ischemic heart disease (14.0 thousand, 33.2%) and chronic obstructive pulmonary disease (7.6 thousand, 18.0%). our calculations do not support or advocate any notion that pandemics have a positive effect on community health.
we simply present health benefits from air pollution improvement due to large emission reductions from lowered human and industrial activities. our results show that continuous efforts to improve air quality are essential to protect public health, especially over city-clusters with dense population. the outbreak of the tragic coronavirus disease 2019 at the end of 2019 has caused tremendous impacts on people's lives around the world. at the time of this writing (may 6th, 2020), covid-19 has made more than 3.6 million people sick and led to more than 257,301 deaths worldwide (https://www.statista.com/statistics/, last access on may 6th, 2020). during its peak, the pandemic at one point caused over 15,000 new confirmed cases in china in just one single day back in february, and presently in may very few new local infections are reported in china (http://www.nhc.gov.cn/, last access on may 6th, 2020). the effective containment of covid-19 within china is mostly attributed to a series of prevention and control measures implemented rapidly by the chinese government. starting from late january 2020, national emergency response policies were launched in china in order to reduce the intensity of the spread of the epidemic and slow the increase in the number of new cases, including but not limited to: schools shut down, traffic strictly restricted, industries and construction activities suspended, mass gatherings and events cancelled or suspended, and social distancing became the new norm. as a result of the massive lockdown, emissions of primary air pollutants from various human and industrial activities decreased substantially, and pm2.5 concentrations during the covid-19 lockdown in china have been shown to be much lower than in previous years during the same period (nasa, 2020; wang et al., 2020a).
it is well known that poor air quality, with pm2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) being a key criteria pollutant, can have adverse health impacts (boldo et al., 2011; cao et al., 2012; song et al., 2017) and lead to premature mortality (fang et al., 2016; liu et al., 2016; lu et al., 2015b). an integrated exposure-response (ier) model for the pm2.5 exposure-response function is widely used to estimate the premature mortality attributed to pm2.5 exposure. for example, maji et al. (2018) estimate pm2.5-related long-term premature mortality for 161 cities in china for year 2015, as well as the potential health benefits of air pollution control policies for year 2020. wang et al. (2020b) calculate the number of premature deaths due to acute and chronic exposure to ambient pm2.5 in china during 2013-2017. with substantial reductions in pm2.5 concentrations due to covid-19 lockdown, a follow-up question is what are the health impacts of the short-term changes in air quality. the yangtze river delta (yrd) region is one of the most economically developed and populated regions in china. in the past, the yrd region has frequently experienced heavy haze pollution (cheng et al., 2013; wang et al., 2015). with various control strategies continuously being carried out, the overall air quality over the yrd region has greatly improved over the past few years (ministry of ecology and environment of china, 2019). according to the latest report released by the ministry of ecology and environment of china (ministry of ecology and environment of china, 2020), 40 out of 41 cities in the yrd region have successfully met the goals of reducing pm2.5 concentrations during the 2019-2020 fall and winter season. in our most recent study, we investigated the air quality changes over the yrd region due to lowered human activities during covid-19 lockdown using multipollutant observations and photochemical model simulations.
in this follow-up study, we attempt to quantify the short-term health impacts associated with pm2.5 changes over the yrd region due to covid-19 lockdown. we estimate the premature mortality associated with pm2.5 exposure before lockdown and during lockdown periods. utilizing simulated results based on an integrated meteorology and air quality modeling system, we estimate the number of avoided premature death due to lowered pm2.5 concentrations during covid-19 lockdown over the yrd region. methods and results from our previous study are partially adopted in this study to support health related estimation. 2.1 quantitative analysis pm2.5 changes due to covid-19 lockdown the yangtze river delta (yrd) region, consisting of shanghai, jiangsu, zhejiang, and anhui province (fig. 1) , is one of the most economic developed and populated regions in china. on january 23 th 2020, being one of three earliest provinces (the other two being hunan and guangdong provinces), zhejiang province (located in south of the yrd region) announced provincial lockdown as "level i" (particularly serious) response, followed by shanghai and anhui province on the next day and jiangsu province two days later. coincided with the chinese spring festival (january 24 th -february 1 st , 2020), all kinds of human activities were greatly reduced during level i response period. with the epidemic gradually controlled, emergency response in anhui and jiangsu province was downgraded to level ii (serious) on february 25 th , followed by zhejiang province on march 2 nd . shanghai announced level ii response on march 24 th due to high numbers of imported infectious cases. same as previous study, we define pre-lockdown period as january 1 st -january 23 rd , level i response period as january 24 th -february 25 th , and level ii response period as february 26 th -march 31 st . fig. 
1 location of the yangtze river delta (yrd) region with city-level population. to quantify the changes in air quality caused by reduced human activities during the covid-19 lockdown, the integrated weather research and forecasting (wrf) - comprehensive air quality model with extensions (camx) modeling system is used. details of the model configurations and input data can be found in our previous study and are briefly summarized here. the integrated wrf/camx model is applied to simulate air quality over the yrd region (fig. 1) during the pre-lockdown, level i response, and level ii response periods. two parallel simulations are conducted with two sets of anthropogenic emissions while keeping all other inputs and model configurations identical. for the base case simulation, the baseline emissions (i.e., emissions from normal activities assuming no lockdown) are used. for the covid-19 scenario, emissions estimated based on reduced human activities due to the lockdown are applied. for emission reductions outside the yrd region during the lockdown, we applied the reduction ratios used by wang et al. (2020a). the relative improvement factor (rf) is defined as the ratio of simulated concentrations between the two scenarios and is applied to the observed concentrations to obtain the theoretical concentrations of air pollutants that would have occurred had there been no lockdown. model performance evaluation of the covid-19 scenario shows acceptable agreement between simulated and observed results.
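as a sketch, the relative improvement factor step can be written in a few lines of code; the function name and the numbers below are illustrative assumptions, not values from the study:

```python
def no_lockdown_concentration(obs_lockdown, sim_base, sim_covid):
    """estimate the pm2.5 concentration that would have occurred without
    lockdown: scale the observation by the relative improvement factor,
    rf = simulated covid scenario / simulated base (no-lockdown) scenario."""
    rf = sim_covid / sim_base          # rf < 1 when lockdown lowers pm2.5
    return obs_lockdown / rf           # undo the simulated improvement

# illustrative only: 35 ug/m3 observed under lockdown; the model says
# lockdown emissions cut pm2.5 from 60 to 42 ug/m3 for this period
print(round(no_lockdown_concentration(35.0, sim_base=60.0, sim_covid=42.0), 6))  # 50.0
```

this reading of rf (covid scenario divided by base scenario, then used to scale observations) is one plausible interpretation of the definition in the text; the study's own implementation details are not given.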
2.2 premature mortality due to short-term pm2.5 exposure. we estimate the premature mortality due to ambient pm2.5 exposure based on the widely used log-linear exposure-response function below (fang et al., 2016; gao et al., 2016): y = ∑_k p × r_k × (1 − e^(−β_k × c)) (1), where y is the number of premature deaths caused by ambient pm2.5 exposure due to five leading causes (k = 5): cerebrovascular disease (stroke), ischemic heart disease (ihd), chronic obstructive pulmonary disease (copd), and lung cancer (lc) for adults (≥25 years), and acute lower respiratory infection (alri) for infants (<5 years). β_k is the cause-specific exposure-response coefficient, and values reported from a meta-analysis study (lu et al., 2015a) are utilized here. for an increase of 10 μg/m3 in pm2.5, β is 0.63% [95% confidence interval (ci): 0.35%-0.9%] for cardiovascular disease (i.e., stroke and ihd) and 0.75% (95% ci: 0.39%-1.11%) for respiratory disease (i.e., copd, alri, and lc). the baseline incidence rate (r) at the provincial level is obtained from the sixth national population census (http://www.stats.gov.cn/tjsj/pcsj/rkpc/6rp/indexch.htm, last accessed on 20th april, 2020), and the contributions of individual diseases to total mortality are based on the national estimates from the global burden of diseases (gbd) project of the institute for health metrics and evaluation (ihme) and the health effects institute (hei) for the year 2017 (https://vizhub.healthdata.org/gbd-compare/, last accessed on 23rd april, 2020). according to the gbd study, stroke, ihd, copd, lc, and alri contribute 20.2%, 16.7%, 9.2%, 6.4%, and 1.7% of total deaths in china for the year 2017. p is the exposed population for each city in the yrd region and is obtained from statistical yearbooks for the year 2018. the exposed pm2.5 concentration c in eq. (1) is the observed pm2.5 concentration averaged over all monitoring sites within each city. 3.2 premature mortality attributable to short-term pm2.5 exposure. ambient pm2.5 exposure leads to higher mortality in infants (<5 years) from alri and in adults (≥25 years) due to stroke, ihd, copd, and lc.
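the log-linear form described above can be sketched for a single cause as follows; this is a minimal illustration of the standard attributable-mortality calculation, y = r × p × (1 − e^(−β·c)), and every numeric input below is illustrative only, not a value from the study:

```python
import math

def premature_mortality(pm25, beta, baseline_rate, population):
    """premature deaths attributable to pm2.5 for a single cause, using the
    log-linear form y = r * p * (1 - exp(-beta * c)) described in the text."""
    attributable_fraction = 1.0 - math.exp(-beta * pm25)
    return baseline_rate * population * attributable_fraction

# illustrative inputs: beta = 0.00063 per ug/m3 (0.63% per 10 ug/m3, as for
# cardiovascular disease above), baseline rate 0.007/yr, one million people
deaths = premature_mortality(pm25=50.0, beta=0.00063,
                             baseline_rate=0.007, population=1_000_000)
```

summing this quantity over the five causes (with cause-specific β_k and r_k) gives the total y of eq. (1).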
we calculate the premature mortality due to the above-mentioned causes based on the health impact function (eq. 1) over the yrd region during the pre-lockdown, level i, and level ii periods of 2017-2020 (fig. 4). during the pre-lockdown period, the total premature mortality attributed to pm2.5 exposure is relatively consistent during 2017-2020, and the number in 2020 is 36.4 thousand (95% ci: 30.4-38.8 thousand) for the whole yrd region. stroke and ihd contribute 14.4 thousand (95% ci: 12.0-15.4 thousand) and 11.9 thousand (95% ci: 10.0-12.7 thousand) premature deaths, together accounting for 72.2% of total pm2.5-related premature deaths. copd, lc, and alri contribute the remaining 27.8% of pm2.5-related deaths, causing 6.8 thousand (95% ci: 5.7-7.2 thousand), 3.2 thousand (95% ci: 2.7-3.4 thousand), and 0.05 thousand (95% ci: 0.04-0.06 thousand) premature deaths, respectively, during the pre-lockdown period. during the level i and level ii response periods, while the premature mortality due to pm2.5 exposure during 2017-2019 fluctuated slightly, a sharp decrease is observed for the year 2020. the total pm2.5-related premature mortality during the 2020 level i and level ii periods is 33.2 thousand (95% ci: 23.9-38.5 thousand) and 27.7 thousand (95% ci: 18.6-33.8 thousand), respectively, which dropped by 32.3% and 47.7% compared to the 2017-2019 average values. the relative contributions from different diseases remain unchanged since the only changing variable here is the pm2.5 concentration. in order to directly compare the premature mortality before and during the lockdown (level i plus level ii periods), we present the daily premature mortality in figure s1. during 2017-2019, the daily premature mortality dropped by 3.2% to 12.7% from before to during the lockdown dates. for 2020, the daily premature mortality across the yrd region is 1.6 thousand (95% ci: 1.3-1.6 thousand) during pre-lockdown and 0.9 thousand (95% ci: 0.6-1.1 thousand) during the lockdown (level i + level ii), representing a sharp decrease of 43.3%.
the significant reduction in premature mortality during the lockdown periods, whether compared to the pre-lockdown period of the same year or to the same periods of historical years, indicates substantial health benefits associated with lowered pm2.5 concentrations due to the covid-19 lockdown. fig. 4 premature mortality due to lc, stroke, ihd, and copd during the pre-lockdown, level i, and level ii periods of 2017-2020 (data for alri are not shown due to low numbers). estimated premature mortality under the assumption of no lockdown is also shown for level i and level ii. tables s1-s4 show the city-level premature mortality by period and year. with the same base incidence rate (r) applied for all cities, city-level premature mortality depends on the exposed population and the pm2.5 concentration of that city. with the largest population (2418.3 thousand in 2018) of all cities in the yrd region (fig. 5), shanghai has the highest pm2.5-related premature mortality during 2017-2019, even though its averaged pm2.5 concentration is ranked the 6th, 9th, and 9th from the bottom during january to march of 2017, 2018, and 2019, respectively (fig. 5). on the other hand, the high premature mortality for cities like bozhou and suqian in anhui province (ranked 3rd and 7th, respectively, in terms of premature mortality during the level ii period in 2020) is more associated with high pm2.5 concentrations: they are ranked 3rd and 7th in terms of average pm2.5 concentration but only 13th and 19th in terms of population. cities with small populations and low pm2.5 concentrations, for example, huangshan (in anhui province), have the lowest premature mortality. during the 2020 pre-lockdown period, daily premature mortality due to pm2.5 … an integrated wrf/camx model is used to estimate the pm2.5 concentrations that would have occurred during the level i and level ii periods had no lockdown taken place.
the potential health benefits of the lockdown are estimated as the difference between the premature mortality calculated based on the observed pm2.5 concentrations and that based on the simulated 'no lockdown' concentrations during the same periods. if no lockdown had occurred, the total premature deaths due to pm2.5 exposure over the yrd region would have been 50.2 thousand (95% ci: 42.0-53.3 thousand) during the level i period and 52.9 thousand (95% ci: 40.6-58.5 thousand) during the level ii period. these two numbers are lower than the corresponding values of 2017-2019 (except for level i in 2017 and 2019). the total avoided premature mortality over the yrd region is 17.1 thousand during level i and 25.1 thousand during level ii (table s5), representing 51.5% and 90.6%, respectively, of the current premature mortality due to pm2.5 exposure. in terms of diseases, the avoided premature mortality due to ihd and stroke is 14.0 thousand and 16.9 thousand, accounting for 33.2% and 40.1%, respectively, of the current base incident mortality due to pm2.5. the avoided premature mortality due to copd, lc, and alri is 7.6 thousand, 3.6 thousand, and 0.06 thousand, respectively. figure 6 shows the city-level total avoided premature mortality during level i and level ii (avoided premature mortality associated with different diseases is shown in fig. s2). the number of avoided premature deaths for different cities depends on the base population and the changes in pm2.5 concentrations due to the lockdown (…, and suzhou (anhui province, 1449, 95% ci: 1027-1672)). a few uncertainties exist with our estimation of health impacts, which are also recognized in similar studies (maji et al., 2018; wang et al., 2020). first and foremost, there are uncertainties in the parameters (e.g., the concentration-response coefficient and the threshold concentration) used in the integrated exposure-response function.
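the subtraction described above (no-lockdown mortality minus observed-pm2.5 mortality) can be sketched as follows, reusing the log-linear form of eq. (1); the function names and all numbers are illustrative assumptions, not the study's city data:

```python
import math

def mortality(pm25, beta, rate, pop):
    # log-linear attributable mortality, same form as eq. (1) in the text
    return rate * pop * (1.0 - math.exp(-beta * pm25))

def avoided_deaths(obs_pm25, no_lockdown_pm25, beta, rate, pop):
    """health benefit of the lockdown: deaths the simulated 'no lockdown'
    pm2.5 would have caused minus deaths under the observed, lower pm2.5."""
    return (mortality(no_lockdown_pm25, beta, rate, pop)
            - mortality(obs_pm25, beta, rate, pop))

# illustrative numbers only: observed 35 ug/m3 vs. a simulated 55 ug/m3
benefit = avoided_deaths(obs_pm25=35.0, no_lockdown_pm25=55.0,
                         beta=0.00063, rate=0.007, pop=1_000_000)
```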
although limitations exist, we use the values reported by lu et al. (2015a), which were developed based on a meta-analysis of 59 studies covering 22 cities in mainland china (3 cities in the yrd region), hong kong, and taiwan, to reduce uncertainties in the concentration-response coefficient. other simplifying assumptions made when using the exposure-response function are that the exposure-response coefficient does not vary by age and that a provincial-level base incidence rate applies to all cities within a province. secondly, we only calculate the premature mortality associated with pm2.5 exposure while recognizing that synergistic effects exist when people are exposed to other or multiple pollutants (apte et al., 2015; billionnet et al., 2012), so our estimate of premature mortality likely represents an underestimate. on top of that, the influences of pm2.5 chemical composition, size distribution, and sources on health impacts (ostro et al., 2015) are ignored in this study. when we calculate city-level population exposure, we ignore the heterogeneity of ambient pm2.5 concentrations and population within each city. we simply use the observed pm2.5 concentrations averaged over all monitoring sites within a city as the exposed concentration. this introduces underestimation when more monitoring sites are located in rural and less populated areas, and overestimation when monitoring sites are more likely to be located in urban and populous areas of a city. ways to improve the spatial accuracy of health impact estimation include utilizing population distributions with high spatial resolution and pm2.5 concentrations interpolated from networks of low-cost sensors (cavaliere et al., 2018; holstius et al., 2014) or satellite-based data (e.g., the aerosol optical depth; chen et al., 2019). finally, estimations of avoided premature deaths assuming no lockdown carry the uncertainties of the meteorology and air quality model.
these issues may all be explored in the near future when more data are available. in this study, we attempt to quantify the health impacts associated with the pm2.5 improvement due to reduced human and industrial activities during the covid-19 lockdown in the yangtze river delta region, a region with heavy air pollution in past years. as a result of reduced human activities, pm2.5 concentrations during the lockdown periods decreased by 22.9% to 54.0% over the yrd region compared to pre-lockdown levels. the avoided premature deaths due to lowered pm2.5 concentrations are estimated to be 42.2 thousand over the yrd region, representing 69.3% of the total present premature mortality due to pm2.5 exposure. the avoided premature mortality is mostly contributed by reduced deaths associated with stroke (16.9 thousand, accounting for 40.0%), ischemic heart disease (14.0 thousand, 33.2%), and chronic obstructive pulmonary disease (7.6 thousand, 18.0%). the top five cities with the highest avoided premature mortality are shanghai (5433 persons), wenzhou (2751), suzhou (in jiangsu province, 2660), nanjing (2000), and nantong (1969). the outbreak of covid-19 is disastrous, and this study by no means suggests that pandemics bring a positive effect on health. rather, the passive emission reductions during the covid-19 lockdown provide a good opportunity to demonstrate the health impacts related to reductions in pm2.5, and they reinforce our knowledge that pm2.5 has detrimental health effects. results from our study suggest that substantial health benefits could be achieved by reducing pm2.5 concentrations through emission reductions, which confirms the importance of recent efforts to mitigate haze pollution nationwide. however, although pm2.5 concentrations decreased substantially during the lockdown period, the residual pm2.5 concentrations are still much higher than the 24-h standard recommended by the who.
continuous efforts are needed to reduce emissions in the long term, and through the most cost-effective ways, in order to protect public health. references (titles as extracted):
addressing global mortality from ambient pm2.5
estimating the health effects of exposure to multi-pollutant mixture
health impact assessment of a reduction in ambient pm2.5 levels in spain
fine particulate matter constituents and cardiopulmonary mortality in a heavily polluted chinese city
development of low-cost air quality stations for next generation monitoring networks: calibration and validation of pm2.5 and pm10 sensors. sensors (basel)
extreme gradient boosting model to estimate pm2.5 concentrations with missing-filled satellite data in china
long-term trend of haze pollution and impact of particulate matter in the yangtze river delta
mortality effects assessment of ambient pm2.5 pollution in the 74 leading cities of china
improving air pollution control policy in china - a perspective based on cost-benefit analysis
field calibrations of a low-cost aerosol sensor at a regulatory monitoring site in california
air quality changes during the covid-19 lockdown over the yangtze river delta region: an insight into the impact of human activity pattern changes on air pollution variations
integrating low-cost air quality sensor networks with fixed and satellite monitoring systems to study ground-level pm2.5. atmospheric environment
estimating adult mortality attributable to pm2.5 exposure in china with assimilated pm2.5 concentrations based on a ground monitoring network
systematic review and meta-analysis of the adverse health effects of ambient pm2.5 and pm10 pollution in the chinese population
short-term effects of air pollution on daily mortality and years of life lost in nanjing
estimating premature mortality attributable to pm2.5 exposure and benefit of air pollution control policies in china for 2020
china air quality improvement report 2020
reports on the completion of environmental air quality targets for the autumn and winter of 2019-2020 in key areas
associations of mortality with long-term exposures to fine and ultrafine particles, species and sources: results from the california teachers study cohort
health burden attributable to ambient pm2.5 in china
analysis of a severe prolonged regional haze episode in the yangtze river delta
severe air pollution events not avoided by reduced anthropogenic activities during covid-19 outbreak. resources, conservation and recycling
acute and chronic health impacts of pm2.5 in china and the influence of interannual meteorological variability
this study is financially sponsored by the shanghai science and technology innovation plan (… monitoring service). the baseline incidence rate (r) at the provincial level is obtained from http://www.stats.gov.cn/tjsj/pcsj/rkpc/6rp/indexch.htm (the sixth national population census). the contribution of individual diseases to total mortality is based on the national estimates from https://vizhub.healthdata.org/gbd-compare/ (the global burden of diseases (gbd) project of the institute for health metrics and evaluation (ihme) and the health effects institute (hei) for the year 2017). ©2020 american geophysical union. all rights reserved.
key: cord-003520-f3jz59pt authors: arabi, yaseen m.; tamimi, waleed; jones, gwynne; jawdat, dunia; tamim, hani; al-dorzi, hasan m.; sadat, musharaf; afesh, lara; sakhija, maram; al-dawood, abdulaziz title: free fatty acids' level and nutrition in critically ill patients and association with outcomes: a prospective sub-study of permit trial date: 2019-02-13 journal: nutrients doi: 10.3390/nu11020384 sha: doc_id: 3520 cord_uid: f3jz59pt objectives: the objectives of this study were to evaluate the clinical and nutritional correlates of a high free fatty acids (ffas) level in critically ill patients and its association with outcomes, and to study the effect of short-term caloric restriction (permissive underfeeding) on ffas level during critical illness. patients/methods: in this pre-planned sub-study of the permit (permissive underfeeding vs. target enteral feeding in adult critically ill patients) trial, we included critically ill patients who were expected to stay for ≥14 days in the intensive care unit. we measured ffas level on days 1, 3, 5, 7, and 14 of enrollment. of 70 enrolled patients, 23 (32.8%) had a high ffas level (baseline ffas level >0.45 mmol/l in females and >0.6 mmol/l in males). results: patients with a high ffas level were significantly older and more likely to be female and diabetic, and they had a lower ratio of partial pressure of oxygen to fraction of inspired oxygen, higher creatinine, and higher total cholesterol levels than those with a normal ffas level. during the study period, patients with a high ffas level had higher blood glucose and required more insulin. on multivariable logistic regression analysis, the predictors of a high baseline ffas level were diabetes (adjusted odds ratio (aor): 5.36; 95% confidence interval (ci): 1.56, 18.43, p = 0.008) and baseline cholesterol level (aor: 4.29; 95% ci: 1.64, 11.19, p = 0.003). serial levels of ffas did not differ over time between the permissive underfeeding and standard feeding groups.
ffas level was not associated with 90-day mortality (aor: 0.49; 95% ci: 0.09, 2.60, p = 0.40). conclusion: we conclude that a high ffas level in critically ill patients is associated with features of the metabolic syndrome and is not affected by short-term permissive underfeeding. fatty acids are a major source of fuel in the body and play an important role in cell signaling [1, 2]. free fatty acids (ffas) are nonesterified fatty acids that are released by the hydrolysis of triglycerides (a triglyceride molecule is composed of three fatty acid molecules bound to glycerol) within the adipose tissue by lipoprotein lipase. they circulate in the blood protein-bound, serving as an energy source for tissues [1, 3]. a chronically elevated ffas level has been observed in obese people and in diabetic patients and is associated with insulin resistance and with sudden death in middle-aged men without known ischemic heart disease [1, 3, 4]. acutely, ffas level is often increased during critical illness and may contribute to organ dysfunction. critical illness is characterized by a hypercatabolic state and by a change in the contribution of endogenous protein, fat, and carbohydrate sources to oxidative fuel [5]. lipolysis is accelerated by the high catecholamine and other stress hormone milieu, leading to increased release of ffas from adipocytes and, thus, an increased ffas level [6]. insulin resistance during critical illness impairs the use of ffas for energy and, thus, contributes to an increased ffas level [7]. heparin given during critical illness may also increase ffas level by activating lipoprotein lipase [8]. ffas may have toxic effects by increasing reactive oxygen species, leading to cell death and necrosis [9], and by depressing immune cell function [10]. in addition, ffas potentiate insulin resistance and impair glucose metabolism by inhibiting glucose oxidation and by stimulating protein kinase c [1, 10].
in an acute setting, an elevated ffas level has been associated with the development of acute lung injury in at-risk patients with sepsis, trauma, and pancreatitis, and after on-pump coronary artery bypass grafting [8, 11, 12]. can ffas level in critically ill patients be modulated by short-term caloric restriction (permissive underfeeding)? normally, serum ffas level increases during fasting and exercise and after a fatty meal. ffas level goes down postprandially due to the anti-lipolytic effect of insulin that is released after carbohydrate intake [13]. on the other hand, caloric restriction and weight loss lead to a lowering of ffas level and can attenuate ffas-induced hepatic insulin resistance in obese healthy patients [14] [15] [16]. however, the effect of caloric restriction on serum ffas level has not been investigated in critically ill patients. the aims of this study were (1) to evaluate the clinical and nutritional correlates of a high ffas level in critically ill patients and its association with outcomes, and (2) to study the effect of short-term caloric restriction (permissive underfeeding) on ffas level during critical illness. this is a pre-planned sub-study of the permit [17] trial (permissive underfeeding vs. target enteral feeding in adult critically ill patients; isrctn68144998), in which critically ill patients were randomized to permissive underfeeding (40-60% of calculated caloric requirements) or standard feeding (70-100%) for up to 14 days while maintaining similar protein intake in both groups. the trial found no difference in the primary endpoint of 90-day mortality.
in this sub-study, which was separately funded by king abdulaziz city for science and technology (kacst), riyadh, saudi arabia (grant number at 32-25 kacst), we enrolled consecutive patients from the permit trial at king abdulaziz medical city, riyadh, saudi arabia between september 2012 and september 2014 who were expected to stay ≥14 days in the intensive care unit as judged by their primary team. a separate informed consent was obtained for participation in this sub-study. the study was approved by the institutional review board of the ministry of the national guard health affairs, riyadh, saudi arabia. blood was collected at the time of enrollment (baseline or study day 1) within 48 h of icu admission and on days 3, 5, 7, and 14. serum was prepared from the blood samples by centrifugation at 4 °c at 1600× g for 20 min and divided into aliquots. these aliquots were stored immediately in a designated freezing area at −70 °c to be analyzed once the sample size was completed. the samples were analyzed blindly, and then the sample codes were broken. the measurement of ffas was performed at the bioscientia reference laboratory in germany using an in-vitro enzymatic colorimetric assay with the wako nefa-hr (2) reagent (wako chemicals, neuss, germany) [18]. in this method, ffas in the presence of coenzyme a (coa) and adenosine 5'-triphosphate (atp) disodium salt are converted to acyl-coa, adenosine monophosphate (amp), and pyrophosphoric acid by the action of acyl-coa synthetase (acs). the acyl-coa is oxidized, yielding 2,3-trans-enoyl-coa and hydrogen peroxide (h2o2), by the action of acyl-coa oxidase. in the presence of peroxidase, h2o2 yields a blue-purple pigment by quantitative oxidation condensation with 3-methyl-n-ethyl-n-(β-hydroxyethyl)-aniline (meha) and 4-aminoantipyrine (4aa). ffas level is obtained by measuring the absorbance of the blue-purple pigment at wavelengths of 546 nm and 660 nm.
the normal fasting serum ffas level is 0.1 to 0.45 mmol/l for females and 0.1 to 0.6 mmol/l for males. however, ffas levels in the critically ill are poorly studied. in the current study, patients with an ffas level of more than 0.45 mmol/l in females or 0.6 mmol/l in males were considered to have a high ffas level; otherwise, the ffas level was considered normal. baseline data included demographics, the acute physiology and chronic health evaluation (apache) ii score [19], the presence of sepsis upon admission, the sequential organ failure assessment (sofa) score [20], the ratio of partial pressure of arterial oxygen to fraction of inspired oxygen (pao2:fio2), the glasgow coma scale, and various laboratory results (baseline blood glucose, hemoglobin, international normalized ratio (inr), platelets, bilirubin, creatinine, c-reactive protein, albumin, pre-albumin, transferrin, 24-h urinary urea nitrogen excretion, and nitrogen balance). for the intervention period, which lasted for up to 14 days, we collected daily nutritional data (feeding formula and calories from enteral feeds, propofol, intravenous dextrose, and parenteral nutrition), the insulin dose for hyperglycemia management, daily blood glucose, and the use of certain medications, such as aspirin, beta-blockers, and statins. we noted the daily carbohydrate, fat, and protein calories from enteral and parenteral sources and then calculated the total fat-to-carbohydrate ratio by dividing fat calories by carbohydrate calories. the outcomes evaluated in this study were 28-, 90-, and 180-day all-cause mortality. other outcomes included hospital and icu mortality, incident renal replacement therapy, icu-associated infections [21], icu and hospital length of stay (los), and mechanical ventilation duration. in addition, icu-free days, renal replacement therapy-free days, and ventilator-free days were calculated. we reported categorical variables as frequencies with percentages and continuous variables as medians with quartiles 1 and 3 (q1, q3).
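the sex-specific cutoffs described above translate directly into a small helper; a minimal sketch (the function name is ours, not the authors'):

```python
def ffa_category(ffa_mmol_l, sex):
    """classify a baseline ffas level as 'high' or 'normal' using the
    sex-specific upper limits given above (0.45 mmol/l for females,
    0.6 mmol/l for males)."""
    upper = 0.45 if sex == "female" else 0.6
    return "high" if ffa_mmol_l > upper else "normal"

print(ffa_category(0.50, "female"))  # high
print(ffa_category(0.50, "male"))    # normal
```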
we compared categorical variables using the chi-square or fisher's exact test and continuous variables using the mann-whitney u test. we examined pearson correlations among the following baseline variables: ffas level, age, body mass index, total cholesterol, high-density lipoprotein (hdl) cholesterol, low-density lipoprotein (ldl) cholesterol, non-hdl cholesterol, triglycerides, glucose, and hemoglobin a1c. in addition, multivariable logistic regression analysis was performed to assess the predictors of a high ffas level. we entered into the model a priori-selected baseline variables that were of clinical interest and/or had a significant association with high ffas level on univariable analysis (p ≤ 0.05); these included age, gender, body mass index (bmi), apache ii, diabetes, triglycerides, ldl cholesterol, hdl cholesterol, medical admission (vs. non-medical admission), and randomization (permissive vs. standard feeding). we also fitted a linear mixed model to test whether ffas level over time was affected by permissive underfeeding compared to standard feeding. we fitted logistic and linear regression models to examine the association between ffas level and outcomes, adjusting for age, gender, bmi, apache ii, diabetes, triglycerides, ldl cholesterol, hdl cholesterol, non-hdl cholesterol, and medical admission (vs. non-medical admission). a two-tailed p value < 0.05 was considered statistically significant. the results were expressed as adjusted odds ratios (aor) or parameter estimates with 95% confidence intervals (95% ci). all statistical analyses were performed using sas version 9.2 (sas institute, cary, nc, usa). of the 70 patients included in the study (figure s1), 23 (32.8%) had a high ffas level (median 0.74 mmol/l (q1, q3: 0.63, 1.06)) and 47 (67.1%) had a normal ffas level (0.34 mmol/l (0.22, 0.45)).
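as an illustration of the nonparametric comparison mentioned above, a minimal pure-python mann-whitney u statistic (with midranks for ties) can be written as follows; this is a sketch of the technique, not the authors' sas implementation:

```python
def mann_whitney_u(a, b):
    """two-sided mann-whitney u statistic for two independent samples,
    assigning midranks to tied values; used here to illustrate comparing
    a continuous variable between two patient groups."""
    combined = sorted([(v, "a") for v in a] + [(v, "b") for v in b])
    n = len(combined)
    rank_sum_a = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1                     # j is one past the tied run
        midrank = (i + 1 + j) / 2.0    # average rank over the tied run
        rank_sum_a += midrank * sum(1 for k in range(i, j)
                                    if combined[k][1] == "a")
        i = j
    n1, n2 = len(a), len(b)
    u_a = rank_sum_a - n1 * (n1 + 1) / 2.0
    return min(u_a, n1 * n2 - u_a)     # smaller of the two u values

# perfectly separated groups give u = 0
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
```

in practice one would use a statistics package (as the authors did with sas) to obtain the p value; the statistic itself is shown only to make the rank-based comparison concrete.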
patients with a high ffas level were significantly older, more likely to be female and diabetic, had higher hgba1c, creatinine, and non-hdl and ldl cholesterol levels, and had a lower pao2:fio2 ratio compared with patients with a normal ffas level (table 1). there were significant correlations between ffas level and total cholesterol (r = 0.45, p = 0.0001), non-hdl cholesterol (r = 0.38, p = 0.002), hdl cholesterol (r = 0.30, p = 0.01), ldl cholesterol (r = 0.43, p = 0.0003), and age (r = 0.40, p = 0.0006) (table s1). table 2 shows the nutritional data during the study period and the trial co-interventions. the total daily caloric intake was 1065 … ffas level (p = 0.37). the baseline blood glucose was similar in the two groups; however, during the study period, the glucose level was significantly higher in patients with a high ffas level compared to patients with a normal ffas level (9.5 mmol/l (q1, q3: 7.6, 11.9) compared to 7.7 mmol/l (q1, q3: 6.5, 10.3), p = 0.05), with higher use of insulin (18.9 units per day (q1, q3: 3.8, 37.5) compared to 0.0 units per day (q1, q3: 0.0, 14.9), p = 0.003). additionally, more patients in the high ffas level group received disease-specific formulae, renal replacement therapy, aspirin, and statins during the icu stay. on multivariable logistic regression analysis, the independent predictors of a high ffas level were diabetes (aor, 5.36; 95% ci, 1.56, 18.43; p = 0.008) and baseline cholesterol (aor, 4.29; 95% ci, 1.64, 11.19; p = 0.003). figure 1, panel a shows the serial levels of ffas for patients with high and normal ffas levels. figure 1, panel b shows ffas level in patients who received permissive underfeeding and standard feeding. the ffas level was not different between the two feeding strategies.
figure 1. serial ffas levels in patients with high and normal ffas levels (panel a) and in patients who received permissive underfeeding and standard feeding (panel b); p values for between-group differences and between-group differences over time are from a linear mixed model. there was no significant difference in crude mortality between patients with high and normal ffas levels (table 3). incident renal replacement therapy was more frequent in patients with a high ffas level (6/23 (27.3%) compared to 2/47 (4.7%), p = 0.009). multivariable analyses adjusting for age, gender, bmi, apache ii, diabetes, triglycerides, ldl cholesterol, hdl cholesterol, non-hdl cholesterol, and medical admission (vs. non-medical admission) showed no significant association between a high ffas level and 90-day mortality (aor 0.49, 95% ci 0.09, 2.60, p = 0.40) or any other study outcome (table 3). in this study, we evaluated serum ffas level in critically ill patients.
we found that ffas level was elevated at baseline in 32% of patients and that it was associated with features of the metabolic syndrome. ffas level was not affected by permissive underfeeding versus standard feeding. a high ffas level appears to be largely a reflection of the underlying metabolic condition of the patient rather than of the critical illness itself. our study provides a characterization of critically ill patients with a high ffas level. we found that a high ffas level correlated highly with other lipid profile parameters (total, hdl, and ldl cholesterol, but not triglycerides) and with age. compared to those with a normal ffas level, patients with a high ffas level were at baseline significantly older, more likely to be diabetic, had higher hgba1c, blood glucose, creatinine, and non-hdl and ldl cholesterol concentrations, and had more hypoxemia (as reflected by a lower pao2:fio2 ratio) despite a lack of differences in apache ii scores, sofa scores, and vasopressor requirements. during the icu stay, patients with a high ffas level had an increased need for insulin, disease-specific nutrition therapy, rrt, aspirin, and statins. these differences suggest an association of a high ffas level with the metabolic syndrome. there was no difference in bmi between the two groups; however, bmi is known to have limitations in predicting obesity [22]. because of these differences, we carried out multivariable analyses to account for the confounding effect of some of these variables on clinical outcomes. these analyses show that a high ffas level is not independently associated with clinical outcomes. interestingly, patients with a high ffas level had less 24-h urinary nitrogen excretion and a less negative nitrogen balance. this may be related to lower muscle mass in this older population and more frequent insulin therapy. normally, ffas are elevated during fasting and exercise, and their level drops postprandially after carbohydrate-rich meals. ffas levels are elevated in obesity and diabetes [1, 3].
in acute critical illness, where lipolysis increases, serum ffas levels increase [6]. our study demonstrated that one-third of critically ill patients had high ffas levels; most (63.6%) of these patients were diabetic. we found that baseline cholesterol level and diabetes were independent risk factors for high ffas levels on multivariable logistic regression analysis. the effect of propofol on ffas levels is uncertain. in an experiment on dogs undergoing general anesthesia, high concentrations of propofol (200 and 400 mcg/kg/min) were associated with increased ffas levels, although a study in humans undergoing general anesthesia for cardiopulmonary bypass showed that propofol (50 mcg/kg/min), compared with midazolam, did not alter serum ffas levels [23, 24]. in our study, categorization of patients into high and normal ffas levels was based on baseline serum specimens, and doses of propofol preceding enrolment were not collected. our study was not designed to specifically address the effect of propofol on ffas levels. nevertheless, the doses of propofol used during the icu stay were on average much lower than those used in these studies, and, therefore, the effect of propofol on ffas levels in our cohort is likely to be small. the slightly lower dose of propofol given to patients with high ffas levels was likely related to their being older and more susceptible to sedation; such patients would normally receive smaller doses of propofol. in addition to being a fuel source, ffas have multiple other physiologic effects. ffas are associated with insulin resistance and impaired glucose metabolism, inhibiting glucose oxidation and stimulating protein kinase c [1, 10, 25]. they may also stimulate autophagy of pancreatic beta cells [26].
in our study, patients with high ffas levels had higher blood glucose and required more insulin therapy during the icu stay, even though baseline blood glucose levels were similar in patients with high and normal ffas, suggesting an association of ffas with insulin resistance in icu patients. ffas may also affect the course of acute critical illness. ffas were found to exacerbate hyperglycemia-induced toll-like receptor expression and activity in monocytic cells, increase superoxide release, enhance nuclear factor-κb activity, and induce the release of proinflammatory factors in diabetics [27]. whether ffas affect inflammation in critically ill patients is less clear. in a porcine endotoxemia model, infusing lipids at two different concentrations was associated with no differences in plasma tumor necrosis factor-α, interleukin-6, or leucocytes between animals with low and high ffas, suggesting that ffas do not exert a significant pro-inflammatory mediator effect [28]. however, ffas have been implicated in the pathogenesis of acute respiratory distress syndrome (ards) and have been identified as a prognostic factor for this syndrome. in a lipopolysaccharide-induced acute lung injury model, a 15-fold increase in free oleic acid was observed in bronchoalveolar lavage fluid from mice 8 h after lipopolysaccharide application [20]. the ffa oleic acid has been demonstrated to be elevated in patients with ards and in patients at risk for ards [11, 29]. patients with sepsis demonstrated a six-fold increase in plasma oleic acid levels compared with healthy volunteers [19]. in addition, ffas are elevated in the blood of patients with sepsis who are at increased risk for ards [30].
the exact mechanism of ffas-associated lung injury is unclear; however, ffas have been shown to increase permeability and to impair transepithelial active sodium transport mechanisms in the lung, and could thus promote alveolar edema formation and prevent edema resolution [31]. in our study, where almost all patients were on mechanical ventilation at baseline, hypoxemia was more severe in patients with high ffas than in patients with normal ffas (median pao2:fio2 ratio 115 vs. 200, p = 0.02), a finding that may be in line with the association of ffas with lung injury. whether ffas are toxic to the kidneys is unclear. in an animal study, ffas led to severe tubulointerstitial damage [32]. ffas and their metabolites have been implicated in renal cell injury and in the development of chronic kidney disease in patients with the metabolic syndrome [33]. in our study, patients with high ffas had a higher rate of new renal replacement therapy, although this association was no longer significant in multivariable analysis. this finding suggests that the observed crude association may be related to other confounders that put these patients at higher risk for acute kidney injury; for example, patients with high ffas levels were more likely to be diabetic and had higher baseline creatinine than patients with normal ffas levels. weight loss lowers ffas levels in the long run and can attenuate ffas-induced hepatic insulin resistance in obese healthy patients [14-16]. however, the effects of short-term caloric restriction are different. in one study, 11 subjects were fed for two 6-day periods with hypo- and eucaloric diets of the same macronutrient composition in random order [34]. at 6 days, fasting ffas were significantly higher with the hypocaloric diet than with the eucaloric diet [34].
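for context on the pao2:fio2 ratios quoted above (medians of 115 vs. 200), the berlin definition bands hypoxemia severity by this ratio; the sketch below is a simplified illustration (the full definition also involves timing, imaging, and peep criteria, which are omitted here):

```python
def ards_severity(pao2_mmHg, fio2):
    """Band hypoxemia severity by the PaO2:FiO2 ratio, following the Berlin
    definition cut-points (simplified: timing, imaging, and PEEP criteria
    are ignored). fio2 is a fraction, e.g. 0.5 for 50% inspired oxygen."""
    ratio = pao2_mmHg / fio2
    if ratio > 300:
        return ratio, "no ARDS-range hypoxemia"
    if ratio > 200:
        return ratio, "mild"
    if ratio > 100:
        return ratio, "moderate"
    return ratio, "severe"

# the two median ratios reported in the text (shown here with FiO2 = 1.0
# purely so the ratio equals the quoted number):
print(ards_severity(115, 1.0))  # → (115.0, 'moderate')
print(ards_severity(200, 1.0))  # → (200.0, 'moderate')
```

both medians fall in the moderate band, but the high-ffas group sits near the severe boundary while the normal-ffas group sits at the mild boundary.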
whether macronutrient composition affects ffas was investigated in an animal model, which found that an energy-restricted high-fat versus low-fat diet did not result in different ffa levels [35]. in the current study, serial levels of ffas did not differ over time between patients receiving permissive underfeeding and standard feeding. the study results should be interpreted taking into consideration its strengths and limitations. strengths include that the data came from a randomized controlled trial and that serial measurements of ffas were obtained. the limitations include the sample size, which leaves the study underpowered to detect a mortality difference. we measured total ffas but not the levels of individual ffas. in addition, the study included patients who had an expected duration of icu stay ≥14 days, and thus the results may not be generalizable to patients with a shorter stay. in conclusion, we found that serum ffas levels were elevated in almost one-third of critically ill patients. high ffas levels were associated with features of the metabolic syndrome and were not affected by short-term moderate caloric restriction. supplementary materials: the following are available online at http://www.mdpi.com/2072-6643/11/2/384/s1, table s1: pearson correlations among baseline free fatty acids (ffas) level and other related measures of lipid metabolism, figure s1: flow diagram for patients enrolled in the sub-study of free fatty acids (ffas) level.
references (titles as extracted):
- obesity, insulin resistance and free fatty acids
- elevation of free fatty acids induces inflammation and impairs vascular reactivity in healthy subjects
- lipid metabolism, metabolic diseases, and peroxisome proliferator-activated receptors
- circulating nonesterified fatty acid level as a predictive risk factor for sudden death in the population
- metabolic response to the stress of critical illness
- the acute splanchnic and peripheral tissue metabolic response to endotoxin in humans
- insulin stimulates lipoprotein lipase activity and synthesis in adipocytes from septic rats
- elevated free fatty acid level is a risk factor for early postoperative hypoxemia after on-pump coronary artery bypass grafting: association with endothelial activation
- fatty acids trigger mitochondrion-dependent necrosis
- short-chain fatty acids produced by anaerobic bacteria alter the physiological responses of human neutrophils to chemotactic peptide
- an increase in serum c18 unsaturated free fatty acids as a predictor of the development of acute respiratory distress syndrome
- serum free fatty acid concentration in patients with acute pancreatitis
- fatty acids, obesity, and insulin resistance: time for a reevaluation
- effect of weight loss and ketosis on postprandial cholecystokinin and free fatty acid concentrations
- effects of exercise training and diet on lipid kinetics during free fatty acid-induced insulin resistance in older obese humans with impaired glucose tolerance
- free fatty acid-induced hepatic insulin resistance is attenuated following lifestyle intervention in obese individuals with impaired glucose tolerance
- permissive underfeeding or standard enteral feeding in critically ill adults
- a severity of disease classification system
- use of the sofa score to assess the incidence of organ dysfunction/failure in intensive care units: results of a multicenter, prospective study. working group on "sepsis-related problems" of the european society of intensive care medicine
- middle east respiratory syndrome coronavirus infection: a short note on cases with renal failure problem
- is bmi the best measure of obesity?
- effect of propofol continuous-rate infusion on intravenous glucose tolerance test in dogs
- propofol as a continuous infusion during cardiopulmonary bypass does not affect changes in serum free fatty acids
- elevated plasma nonesterified fatty acids are associated with deterioration of acute insulin response in igt but not ngt
- free fatty acids stimulate autophagy in pancreatic β-cells via jnk pathway
- free fatty acids in the presence of high glucose amplify monocyte inflammation via toll-like receptors
- circulating free fatty acids do not contribute to the acute systemic inflammatory response. an experimental study in porcine endotoxaemia
- plasma fatty acid changes and increased lipid peroxidation in patients with adult respiratory distress syndrome
- parenteral nutrition with fish oil modulates cytokine response in patients with sepsis
- oleic acid inhibits alveolar fluid reabsorption: a role in acute respiratory distress syndrome?
- urinary free fatty acids bound to albumin aggravate tubulointerstitial damage
- mechanisms of tubulointerstitial injury in the kidney: final common pathways to end-stage renal failure
- the effects of underfeeding on whole-body carbohydrate partitioning, thermogenesis and uncoupling protein 3 expression in human skeletal muscle
- energy-restricted high-fat diets only partially improve markers of systemic and adipose tissue inflammation

acknowledgments: we wish to thank the following, who made valuable suggestions or otherwise contributed to the preparation of the manuscript: maram sakhija, turki almoammar, muhammad rafique sohail, shihab mundekkadan and aeron toledo.

key: cord-254646-psolkrom authors: matsui, mary s.
title: vitamin d update date: 2020-10-14 journal: curr dermatol rep doi: 10.1007/s13671-020-00315-0 sha: doc_id: 254646 cord_uid: psolkrom purpose: the goal of this review is to provide an update in the field of vitamin d, in particular, the role of vitamin d in non-skeletal health, the complexity of providing patient guidance regarding obtaining sufficient vitamin d, and the possible involvement of vitamin d in morbidity and mortality due to sars-cov-2 (covid-19). recent findings: in addition to bone health, vitamin d may play a role in innate immunity, cardiovascular disease, and asthma. although rickets is often regarded as an historical disease of the early twentieth century, it appears to be making a comeback worldwide, including “first-world” countries. broad-spectrum sunscreens (with high uva filters) that prevent erythema are unlikely to compromise vitamin d status in healthy populations. summary: new attention is now focused on the role of vitamin d in a variety of diseases, and more individualized patient recommendation schemes are being considered that take into account more realistic and achievable goals for achieving sufficient vitamin d through diet, supplements, and sun behavior. in the last 10 years, over 41,000 peer-reviewed research studies have been published on vitamin d, and since its original discovery as a sunlight-generated factor important to bone health, a more complex story of vitamin d has continued to evolve. vitamin d, technically not a vitamin, has been linked not just to rickets in children and osteomalacia in adults, but also has been suggested to play a role in diabetes, celiac disease, asthma, atopic dermatitis, tuberculosis, and, most recently, covid-19. 
this review will briefly summarize fundamental, well-established aspects of vitamin d and human health and will then discuss (a) some of the most recent work related to vitamin d and non-skeletal health issues; (b) the complexity of establishing meaningful vitamin d measurement metrics and assessing vitamin d status; (c) decision-making for obtaining vitamin d through diet, supplements, or sun exposure; (d) the impact of skin type, pigmentation, and sunscreen on vitamin d levels; and (e) evidence for a potential influence of vitamin d on the mortality and morbidity of covid-19 through modulation of the pro-inflammatory cytokine response and the respiratory response to the virus. some would urge us to move away from the nomenclature of vitamin d as a vitamin and, instead, acknowledge that vitamin d3 is a prohormone produced in skin through ultraviolet irradiation of 7-dehydrocholesterol (7dhc or provitamin d3) [1•]. this previtamin d3 is biologically inert and must undergo thermal isomerization and two hydroxylations. the first hydroxylation occurs in the liver and converts vitamin d to 25-hydroxyvitamin d [25(oh)d], also known as calcidiol. physiologically active 1,25-dihydroxyvitamin d [1,25(oh)2d], or calcitriol, is then synthesized primarily in the kidney. the importance of vitamin d for calcium absorption and bone health is undisputed [2, 3]. the classic function of vitamin d is to promote calcium absorption in the gut and maintain adequate serum calcium and phosphate concentrations necessary for normal mineralization of bone. rickets in children and osteomalacia in adults result from insufficient vitamin d [1•]. together with calcium, vitamin d helps protect older adults from osteoporosis.
there is strong, consistent evidence supporting the role of vitamin d in childhood bone health, although most studies have been done in a limited range of skin types, as exemplified in a very recent report showing that high-dose vitamin d supplementation during pregnancy improved bone health in children [4] . in this case, all participants were white and danish. globally, rickets and vitamin d deficiency remain a significant public health problem even in highly developed countries [5••] , despite much research and many government and nongovernmentally funded programs. in europe, little vitamin d is made endogenously in the skin of individuals during the winter months [1•] , and yet vitamin d fortification of foods is largely absent. in the view of many scientists in the vitamin d field, the recommended dietary allowance is too low. recommendations for vitamin d3 at 2000 iu/day are being considered, an intake which should be safe and remain below toxic levels [1•] . factors contributing to a modern "third wave" of rickets have been reviewed recently [5••] from prospective surveillance studies of vitamin d deficiency rickets in australia, canada, and new zealand, as well as multiple retrospective studies from across the globe. in part, this third wave is believed to be caused by reduced uvb exposure due to sun avoidance (sunscreen, clothing, cultural behavior) and a shift toward indoor work. malabsorption syndromes such as celiac disease, short bowel syndrome, gastric bypass, and cystic fibrosis can trigger vitamin d deficiency [6] . medications such as phenobarbital, carbamazepine, dexamethasone, nifedipine, spironolactone, clotrimazole, and rifampin induce hepatic p450 enzymes which can accelerate the degradation of vitamin d and thereby lead to vitamin d deficiency. chronic liver disease and chronic kidney disease increase the risk for vitamin d deficiency due to the dependence on these organs for vitamin d activation. 
the prevalence of vitamin d deficiency is highest in the elderly, the obese, nursing home residents, and hospitalized patients [6]. the issue of vitamin d deficiency in populations who have higher melanin content and/or use extensive skin coverage will be discussed separately. according to the nih, serum concentration of 25(oh)d is the best indicator of vitamin d status. it reflects total vitamin d produced cutaneously as well as obtained through food and supplements, and it has a fairly long circulating half-life of 15 days. 25(oh)d functions as a biomarker of exposure, but it is not clear to what extent 25(oh)d levels also serve as a biomarker of physiological disease or more subtle conditions, or indicate the amount of vitamin d stored in body tissues (primarily adipose tissue). in contrast to 25(oh)d, circulating 1,25(oh)2d is not a good indicator of vitamin d status: levels of 1,25(oh)2d do not typically decrease until vitamin d deficiency is severe. table 1 shows the current serum concentrations and interpretations from the institute of medicine [7•]. in addition, because the two test protocols used to measure 25(oh)d show some variability among the laboratories that conduct the analyses, a standard reference material for 25(oh)d was developed to improve confidence in the result [8]. although from table 1 it would appear that one set of ideal serum vitamin d values exists, the situation is somewhat more complex. a slightly different rule of thumb is the following [6]: 25(oh)d levels below 20 ng/ml indicate vitamin d deficiency, while levels below 30 ng/ml indicate vitamin d insufficiency. levels of 25(oh)d between 30 and 50 ng/ml are generally regarded as optimal; however, a number of variables, including race and age, add complexity [6]. a more thorough discussion of vitamin d assay standardization and recommendation guidelines is provided by sempos and binkley [9].
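the cut-points in the rule of thumb above can be expressed as a small categorization helper. the thresholds are the ng/ml values quoted in the text; the unit conversion (1 ng/ml ≈ 2.496 nmol/l for 25(oh)d) is a standard laboratory factor, not stated in the article:

```python
def vitd_status(serum_25ohd_ng_ml):
    """Categorize serum 25(OH)D using the bone-health cut-points quoted in
    the text (ng/mL): <20 deficient, 20 to <30 insufficient, 30-50 generally
    optimal. Many laboratories report nmol/L instead; the standard conversion
    is nmol/L ≈ ng/mL x 2.496 (an assumption here, not from the article)."""
    nmol_l = serum_25ohd_ng_ml * 2.496
    if serum_25ohd_ng_ml < 20:
        status = "deficient"
    elif serum_25ohd_ng_ml < 30:
        status = "insufficient"
    elif serum_25ohd_ng_ml <= 50:
        status = "optimal"
    else:
        status = "above optimal range"
    return status, round(nmol_l, 1)

print(vitd_status(25))  # → ('insufficient', 62.4)
```

this makes the point in the text concrete: a reading of 25 ng/ml is not deficient by the classic bone-health definition, yet still falls short of the 30-50 ng/ml band usually labeled optimal.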
in any case, the categories of vitamin d sufficiency or deficiency are based on the classic function of vitamin d, bone health. again based on bone health parameters, vitamin d levels can be described as deficient, insufficient, sufficient, or optimal, and the severity of vitamin d deficiency is divided into mild, moderate, and severe. but what about other conditions for which vitamin d may have some modifying actions? there are numerous and conflicting studies suggesting an association between vitamin d deficiency and cancer, cardiovascular disease, diabetes, autoimmune diseases, asthma, atopic dermatitis, and depression. if indeed vitamin d status is a modifying factor for these health issues, it cannot be assumed that the 25(oh)d serum values that are ideal for prevention of rickets and osteomalacia would also prevent or modify these other diseases. it is now recognized that many cells in the body express vitamin d receptors (vdr), which modulate cell proliferation, differentiation, and apoptosis; regulate gene expression associated with cell growth, neuromuscular and immune function, and inflammation; and may help regulate antimicrobial peptides [10••-14]. several cells involved in immune function express vdr and cyp27b1, which suggests that the active form of vitamin d, 1,25(oh)2d, may control immune function at different levels [10••]. vitamin d is known to regulate parathyroid growth and parathyroid hormone production; it plays a role in the islet cells of the pancreas and may help suppress certain autoimmune diseases and some cancers. there are many suggestions that deficiencies in vitamin d levels are linked to conditions such as rheumatoid arthritis, multiple sclerosis, alzheimer's disease, and schizophrenia (for reviews, see [10••, 13••]).
prospective studies have reported inverse correlations between 25(oh)d serum levels and cardiovascular disease, serum lipids, inflammation, disorders of glucose metabolism, weight gain, mood disorders, declining cognitive function, and alzheimer's disease. however, no effect of vitamin d supplementation has been demonstrated for these outcomes, so the observed associations could reflect genetic factors, cultural behavior, or other confounding. first suggested over 30 years ago, vitamin d is now often referred to as a "well-known" regulator of innate immunity [10••]. the apparent mechanism for this is vitamin d enhancement of defensin β2 and cathelicidin antimicrobial peptide (camp) production, increasing their antimicrobial activity [15]. a thorough review of vitamin d's regulatory function in both innate and acquired immunity can be found in wei and christakos [15]. the biologically active form of vitamin d, 1,25(oh)2d, modulates innate and adaptive immunity via regulation of at least 15 genes by the vdr [16]. 1,25(oh)2d upregulates camp not only in monocytes/macrophages but also in other cells participating in the innate immune system. that said, there are insufficient data at this time to recommend any specific vitamin d dosage or treatment regimen for asthma, autoimmune diseases, multiple sclerosis, or other conditions. there is confusing and sometimes contradictory evidence for the therapeutic use of vitamin d to treat non-skeletal conditions. for example, although treatment with 1,25(oh)2d was shown to be effective in reducing respiratory infections in asthma patients, and suboptimal serum 25(oh)d in childhood may have adverse effects on tuberculosis and asthma, vitamin d supplementation in pregnancy does not appear to prevent school-age asthma [17]. in a study of vitamin d levels and susceptibility to asthma and atopic dermatitis, no evidence was found that genetically determined low 25(oh)d levels are linked to an increased risk of either condition [18].
initial reports indicate that vitamin d and omega-3 supplements failed to reduce risk of cancer or heart events in a 5-year trial of nearly 26,000 healthy us adults [19] . this is, in part, why there is currently active discussion over both the recommended levels of vitamin d as well as the testing methods used to assess adequacy for health and prevention of diseases. recommendations for optimum vitamin d levels, for obtaining a healthy level of vitamin d and even for assessment methods, are currently not uniform across the globe. to some extent, this is because relevant factors vary: vitamin d food fortification regulations, the strength of ambient ultraviolet radiation (uvr), levels of smog, culture and ethnicity, skin phototype, chronological age, and ease and accuracy of specific clinical laboratory measurements. for example, some geographic regions with strong sun have a very high prevalence of rickets [20] . the american academy of dermatology (aad) is regarded as a primary resource for good practice in the field of skin health and related medical specialties. the aad position statement on vitamin d can be accessed online [21] and lists the recommended dietary allowance (rda) and the upper limit values for both calcium and vitamin d intake that should cover "97.5% of the normal healthy population." the optimum vitamin d intake varies by age: for children under 1 year, the rda is 400 iu/day; between 1 year and 70 years, the rda is 600 iu/day; and for people over 70, the rda is 800 iu/day. these rda guidelines are based on aad guidelines that promote minimal or no sun exposure primarily due to the risk of skin cancer associated with sun exposure. the american cancer society also does not support increasing vitamin d levels through sun exposure. the cancer council of australia has slightly eased its sun protection message concerning sunscreen, outdoor clothing, and hats. 
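the age-banded rda values above translate directly into a simple lookup; the values are those quoted in the text from the aad position statement:

```python
def vitamin_d_rda_iu(age_years):
    """Vitamin D RDA (IU/day) by age, using the bands quoted in the text
    from the AAD position statement: <1 y: 400; 1-70 y: 600; >70 y: 800."""
    if age_years < 1:
        return 400
    if age_years <= 70:
        return 600
    return 800

print([vitamin_d_rda_iu(a) for a in (0.5, 30, 71)])  # → [400, 600, 800]
```

note that these rdas assume minimal sun exposure, which is why some researchers argue (as the text discusses) that higher intakes, up to 2000 iu/day, may be warranted.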
protection from the sun is currently recommended by the world health organization when the uv index is ≥ 3 [22]. in general, the accepted paradigm is that an initial linear dose-response relationship exists between exposure to uv radiation and change in concentration of 25(oh)d, followed by a plateau with continuing exposures over a longer period of time. because uvr from the sun and tanning beds can lead to the development of skin cancer, sun exposure (natural) or indoor tanning (artificial) is strongly discouraged. instead, it is recommended that vitamin d be supplied from a "healthy diet," which includes naturally enriched vitamin d foods, fortified foods and beverages, and/or vitamin supplements, while practicing sun protection. unfortunately, it is almost impossible to achieve a satisfactory vitamin d intake from diet alone, since vitamin d is found at significant levels in only a few commonly consumed foods. this fat-soluble molecule is found almost exclusively in oily fish such as salmon, sardines, herring and mackerel, liver, egg yolks, and fortified foods. since dietary sources are unlikely to be sufficient, especially for vegetarians and vegans, supplements are necessary to obtain the rda. the us department of agriculture's (usda's) websites (https://fdc.nal.usda.gov/index.html) can be searched for recommended healthy eating guidelines, the nutrient content of many foods, and a list of foods containing vitamin d. in contrast, one comprehensive review [6] concluded that 30 min of daily sunshine with over 40% of skin surface area exposed is required to prevent vitamin d deficiency (although surface area numbers seem to vary greatly). the difficulty of promulgating simple blanket recommendations as pathways to optimal vitamin d status is well described by macdonald et al. [23]. this same reference describes a survey of multiple ethnic groups which demonstrated the difficulty (if not impossibility) of reaching optimal vitamin d levels if no sun exposure is considered safe.
most women would find it almost impossible to achieve satisfactory vitamin d status if they had to rely on their current diets; other measures, including fortification of additional foods, may have to be considered. several different vitamin d fortification programs have been initiated across the globe [24]. in contrast to the aad guidelines that advise against any sun exposure, guidance from the national health service of the uk [25] states: "from about late march/early april to the end of september, most people should be able to get all the vitamin d we need from sunlight." clearly, one issue that is immediately apparent is the seasonality of vitamin d levels at certain latitudes. this brings up other key questions in the world of vitamin d. is there such a thing as "sensible" sun exposure? are vitamin d guidelines "one size fits all," or should they be more nuanced to respect the range of human pigmentation and culture? for example, the recommendations and sufficiency numbers do not take into account an almost complete lack of data from africa and south america [13••]. one recent editorial goes so far as to say that "vitamin d guidelines development is in a state of paralysis" due to a lack of agreement on the parameters for the terms insufficiency, sufficiency, and toxicity based on 25(oh)d concentrations [9]. further, the commentary urges that a set of recommendations based on the work of the vitamin d standardization program (vdsp) would pave the way for the development of rational universal vitamin d status guidelines. although it is believed that most vitamin d (about 80%) is acquired from solar exposure, sunscreen use is an important tool in the prevention of skin cancer and is strongly recommended by the dermatology community. uvb wavelengths (280-320 nm) are absorbed by dna and result in direct damage in the form of cyclobutane pyrimidine dimers (cpds) and other mutagenic events that ultimately lead to nonmelanoma skin cancer and melanoma.
uva wavelengths (320-400 nm) generate oxidative stress which is associated primarily with photoaging but can also promote skin cancer. unfortunately, the same uvb wavelengths absorbed by dna and that cause dna mutations are also those that induce photoconversion of 7-dehydrocholesterol, to previtamin d3. one of the most important questions to ask then, with regard to the issue of sun avoidance and vitamin d, is directed toward the balance between (a) sunscreen use and uvr generation of vitamin d and (b) the efficacy of sunscreen to prevent uvr-induced dna damage. a consensus review published in 2019 [26] concluded that broad-spectrum sunscreens (with high uva filters) that prevent erythema are unlikely to compromise vitamin d status in healthy populations and that daily and recreational photoprotection does not compromise vitamin d synthesis. vitamin d screening should be used to monitor patients with photosensitivity disorders, who require the most rigorous photoprotection combined with vitamin d supplements [27••] . improved vitamin d status by uvr is always associated with the possibility of higher dna damage and skin cancers; however, production of vitamin d may be optimized and skin dna damage minimized, by increasing the body surface area exposed and decreasing the uvb dose per unit area [13••] . one group has examined the relationship between sunscreen use and vitamin d production in some detail [26, 28••, 29] . a study of sunscreen use in polish volunteers on vacation in tenerife had both interventional and observational groups and was able to monitor the effect of two spf 15 sunscreens over 1 week [26, 29] . one sunscreen had a high uva protection factor (uva-pf), while the second had low uva protection. the authors were able to make the important conclusion that sunscreens may be used to prevent sunburn yet allow vitamin d synthesis, although with the caveat that the absence of sunburn does not necessarily mean the absence of dna mutations [30] . 
however, concern about unrepaired mutagenic dna lesions should be tempered by evidence that efficient repair occurs within 12 to 48 h, although there is significant individual variation in this ability [30] . in the tenerife study, both high uva and low uva sunscreens prevented sunburn, but subjects using the high uva sunscreen had higher levels of serum 25(oh)d at the conclusion of the study. these findings were attributed to the fact that high uva sunscreen allows transmission of more uvb than low uva sunscreen. another very important study on sun exposure and vitamin d levels followed polish children over 12 days at a baltic sea summer camp [31] . relatively low daily uv radiation doses resulted in a modest but significant improvement in 25(oh)d (24%) but a very much greater increase in cpd (1162%). dna damage was worse in skin types i/ii. this conundrum is reflected in a recent global consensus on rickets prevention that was unable to recommend a safe uvr exposure level to enhance vitamin d status [32] . it has been argued, following in vitro assays, that vitamin d itself reduces the risk of skin cancer by several mechanisms [33] [34] [35] . under laboratory conditions, dna photolesions can be reduced in irradiated skin cells treated with 1,25(oh)2d. one mechanism for this may relate to the increases in p53 and nucleotide excision repair observed within hours after uvr exposure in keratinocytes and melanocytes treated with 1,25(oh)2d (1 or 10 nm) or vitamin d analogs. there was a corresponding reduction in cpds in uv-irradiated skin cells treated with vitamin d compounds. the indirect dna damage and the reduction in dna repair that is normally caused by nitric oxide products may also be reduced by vitamin d compounds. 
a group in cleveland reported data from an exploratory study in which high oral doses of vitamin d3 resulted in a sustained reduction in skin redness after experimental sunburn, as well as less epidermal structural damage, reduced expression of pro-inflammatory markers in the skin, and a gene expression profile characterized by upregulation of skin barrier repair genes [11]. one barrier to establishing recommendations for sensible sun exposure is the impact of human skin pigmentation and genetic polymorphisms on the response to uvr. these are still largely unknown variables. a careful literature review of vitamin d and the impact of pigmentation was published in 2015 [36•] and noted that a better understanding of how and how much melanin influences vitamin d photosynthesis is critical to meaningful public health messages. the research is difficult to integrate due to variations in study methodology, including the source, dose and frequency of uv irradiation, phototype classification, measurement methods for vitamin d, and the lack of information on relevant genetic polymorphisms. however, on balance, the review team concluded that more highly pigmented skin has less effective photoproduction of vitamin d and 25-hydroxyvitamin d. the ratio of sun exposure to pigmentation level to achieve vitamin d sufficiency remains uncertain. a recent 4-week study of 71 school children [37] controlled for time spent outdoors and assessed the contribution of clothing coverage, initial vitamin d levels, skin color, and other variables to the serum vitamin d levels at the conclusion of the study. the subjects were fitzpatrick skin phototypes iv and v with a range of melanin levels. for all subjects, there was a significant increase in serum 25(oh)d at the conclusion of the study compared with baseline, and a greater increase in serum vitamin d was seen in children with the lowest initial values.
melanin levels were inversely correlated with the increase in vitamin d levels. in a study of racially diverse children in boston, most of the variability in 25(oh)d was correlated with constitutive skin color [38]. the correlation between skin color and serum levels of 25(oh)d is usually attributed to the photoprotective properties of melanin. however, there may be additional variables in play which may influence the ultimate health consequences. in a recent study, black americans (n = 2085) had lower concentrations of 25(oh)d, but also lower concentrations of vitamin d binding protein, than white americans. the consequence of this was similar (calculated) levels of bioavailable 25(oh)d. this may explain why people of color with low total 25(oh)d had a higher average bone mineral density than the white group with similar 25(oh)d concentrations. furthermore, it may imply that bone health is more accurately reflected by the concentration of bioavailable, rather than total, 25(oh)d. taken together, the limited data available suggest that there should be some caution about attributing the observed lower levels of vitamin d in skin of color (soc) simply to a blocking effect of melanin. a recent, rigorously performed and analyzed study indicates that melanin has less impact on vitamin d levels than previously thought, possibly due to the spatial positions of melanin and 7-dehydrocholesterol in human skin [39••]. a brief review of confounding related to the question of pigment, uvr-related skin cancers, and serum vitamin d levels can also be found in a 2015 review [13••].

the role of vitamin d as a possible modulator of susceptibility, morbidity, and mortality in sars-cov-2 (covid-19)

that vitamin d should play a role in covid-19 is not unexpected, since a link between vitamin d deficiency and respiratory disease has been suspected for over 100 years.
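the "calculated" bioavailable 25(oh)d referred to above is typically derived from total 25(oh)d, vitamin d binding protein (dbp) and albumin using mass-action binding equations. a minimal sketch of that calculation is below; the affinity constants are illustrative values of the kind reported in the literature, not the exact constants used in the cited study, and the input concentrations are hypothetical:

```python
def bioavailable_25ohd(total_25ohd_nmol_l, dbp_umol_l, albumin_umol_l,
                       k_dbp=7e8, k_alb=6e5):
    """Estimate free and bioavailable 25(OH)D by mass action.

    k_dbp, k_alb: assumed binding affinities (L/mol) for vitamin D binding
    protein and albumin; published constants vary by assay and study.
    Returns (free, bioavailable) in mol/L.
    """
    total = total_25ohd_nmol_l * 1e-9   # nmol/L -> mol/L
    dbp = dbp_umol_l * 1e-6             # umol/L -> mol/L
    alb = albumin_umol_l * 1e-6
    # free fraction, assuming binding proteins are in large excess of ligand
    free = total / (1 + k_alb * alb + k_dbp * dbp)
    # "bioavailable" = free + albumin-bound (loosely bound) fraction
    bioavailable = free * (1 + k_alb * alb)
    return free, bioavailable
```

with these illustrative numbers, a subject with lower total 25(oh)d but proportionally lower dbp ends up with a nearly identical bioavailable fraction, which is the arithmetic behind the observation in the study above.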
observational studies have reported independent associations between low serum concentration of 25-hydroxyvitamin d and susceptibility to acute respiratory tract infections [40]. vitamin d deficiency is associated with an increased risk of the acute respiratory distress syndrome (ards), intensive care admission, and mortality in patients with pneumonia [41]. in a systematic review and meta-analysis of 25 randomized controlled studies, martineau et al. found that vitamin d protected against acute respiratory tract infection overall [42]. patients who were severely vitamin d deficient experienced the most benefit. subgroup analysis revealed that daily or weekly vitamin d supplementation without additional bolus doses protected against acute respiratory tract infection, whereas regimens containing large bolus doses did not. among those receiving daily or weekly vitamin d, protective effects were strongest in those with profound vitamin d deficiency at baseline, although those with higher baseline 25-hydroxyvitamin d concentrations also experienced benefit. the interaction of vitamin d status and susceptibility to sars-cov-2 is currently being investigated [43-45••]. most immune cells express the vdr and actively convert 25(oh)d into 1,25(oh)2d, its active form. vdr signaling has a suppressive role on autoimmunity and an anti-inflammatory effect, promoting dendritic cell and regulatory t cell differentiation and reducing th17 cell response and inflammatory cytokine secretion (with relevance to the covid-19-induced cytokine storm). as mentioned earlier, vitamin d is thought to have a regulatory effect on innate immunity. there is no general consensus on the desired level of 25(oh)d to achieve immunomodulatory effects; thus, there is no current indication for vitamin d supplementation in specific infections and/or autoimmune diseases. however, dr.
anthony fauci, director of the national institute of allergy and infectious diseases, in a recent interview by jama, agreed that "if you are vitamin d deficient, you might have a poor outcome or a greater chance of getting into trouble with an infection" (jama medical news & perspectives, june 8, 2020). the most vulnerable group with respect to covid-19, the aging population, also has a high proportion of individuals with deficient vitamin d levels. further studies have been both proposed (at least one clinical trial is registered with the nih) and initiated to clarify the contribution of vitamin d levels to susceptibility and the severity of response to covid-19 as well as the use of vitamin d supplementation as a possible therapeutic agent [43-45••]. vitamin d deficiency is a major global public health issue, with vitamin d deficiency rickets occurring at significant levels even in highly developed countries. about 1 billion people worldwide have vitamin d deficiency, while 50% of the population has vitamin d insufficiency. the fact that exposure of the skin to uvb radiation, wavelengths that cause mutations in the dna of epidermal cells, is also the source of vitamin d is problematic. can the average person finely calibrate 0.2 minimal erythema dose, the point at which vitamin d synthesis occurs without perceptible dna damage? for relevant nonskeletal conditions or diseases such as multiple sclerosis, alzheimer's disease, or even ards, it is unknown whether low vitamin d status causes the disease or the disease causes the low vitamin d status. communication with patients about vitamin d calls for judgment and individualization in patient care, including more nuanced advice about sun exposure.
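the difficulty of "finely calibrating" 0.2 minimal erythema dose can be made concrete. by the who definition, one uv index unit corresponds to an erythemally weighted irradiance of 0.025 w/m², so the exposure time needed to reach a fraction of the med can be estimated. the med value used below (~250 j/m², indicative of fair skin) is an assumption for illustration; real meds vary widely between individuals:

```python
def minutes_to_fractional_med(uv_index, med_j_per_m2=250.0, fraction=0.2):
    """Minutes of sun exposure to reach `fraction` of the minimal erythema dose.

    WHO definition: UV index 1 == 0.025 W/m^2 erythemally weighted irradiance.
    med_j_per_m2 is an assumed MED (~250 J/m^2 is only indicative for fair
    skin); individual MEDs differ substantially.
    """
    if uv_index <= 0:
        raise ValueError("uv_index must be positive")
    irradiance = uv_index * 0.025           # W/m^2, i.e. J/m^2 per second
    target_dose = fraction * med_j_per_m2   # J/m^2
    return target_dose / irradiance / 60.0  # seconds -> minutes
```

at a midday uv index of 6, this works out to only a few minutes for fair skin, which illustrates why expecting the average person to self-titrate such a narrow window is unrealistic.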
references:
- this is a generally helpful primer on the classic functions and understanding of vitamin d
- vitamin d and health: the need for more randomized controlled trials
- serum cholecalciferol may be a better marker of vitamin d status than 25-hydroxyvitamin d
- effect of high-dose vs standard-dose vitamin d supplementation in pregnancy on bone mineralization in offspring until age 6 years: a prespecified secondary analysis of a double-blinded, randomized clinical trial
- a brief history of nutritional rickets
- vitamin d deficiency
- medicine committee to review dietary reference intakes for vitamin d, calcium. the national academies collection: reports funded by national institutes of health
- nist releases vitamin d standard reference material. nist
- 25-hydroxyvitamin d assay standardisation and vitamin d guidelines paralysis
- vitamin d: nutrient, hormone, and immunomodulator
- oral vitamin d rapidly attenuates inflammation from sunburn: an interventional study
- control of cutaneous antimicrobial peptides by vitamin d3
- the consequences for human health of stratospheric ozone depletion in association with other environmental factors
- cyp11a1 in skin: an alternative route to photoprotection by vitamin d compounds
- mechanisms underlying the regulation of innate and adaptive immunity by vitamin d
- key vitamin d target genes with functions in the immune system
- vitamin d supplementation in pregnancy does not prevent school-age asthma
- vitamin d levels and susceptibility to asthma, elevated immunoglobulin e levels, and atopic dermatitis: a mendelian randomization study
- study questions the benefits of vitamin d and omega 3 supplements. cardiosmart news american college of cardiology
- the effect of high-dose postpartum maternal vitamin d supplementation alone compared with maternal plus infant vitamin d supplementation in breastfeeding infants in a high-risk population. a randomized controlled trial
- o / dfff5f47b0dddfb82676270505afd09f/ps-vitamin_d_postition_statement.pdf
- ultraviolet (uv) index. world health organization
- sunlight and dietary contributions to the seasonal vitamin d status of cohorts of healthy postmenopausal women living at northerly latitudes: a major cause for concern?
- vitamin d microencapsulation and fortification: trends and technologies
- how to get vitamin d from sunlight
- sunscreen photoprotection and vitamin d status
- an important discussion of study protocol variables and their influence on policy
- sub-optimal application of a high spf sunscreen prevents epidermal dna damage in vivo
- optimal sunscreen use, during a sun holiday with a very high ultraviolet index, allows vitamin d synthesis without sunburn
- kinetics of uv light-induced cyclobutane pyrimidine dimers in human skin in vivo: an immunohistochemical analysis of both epidermis and dermis
- children sustain high levels of skin dna photodamage, with a modest increase of serum 25-hydroxyvitamin d3, after a summer holiday in northern europe
- global consensus recommendations on prevention and management of nutritional rickets
- vitamin d and death by sunshine
- protective effects of 1,25 dihydroxyvitamin d3 and its analogs on ultraviolet radiation-induced oxidative stress: a review
- photoprotective properties of vitamin d and lumisterol hydroxyderivatives
- a systematic review of the influence of skin pigmentation on changes in the concentrations of vitamin d and 25-hydroxyvitamin d in plasma/serum following experimental uv irradiation
- impact of solar ultraviolet b radiation (290-320 nm) on vitamin d synthesis in children with type iv and v skin
- association of serum 25-hydroxyvitamin d with race/ethnicity and constitutive skin color in urban schoolchildren
- melanin has a small inhibitory effect on cutaneous vitamin d synthesis: a comparison of extreme phenotypes
- vitamin d(3) supplementation reduces the symptoms of upper respiratory tract infection during winter training in vitamin d-insufficient taekwondo athletes: a randomized controlled trial
- vitamin d deficiency contributes directly to the
acute respiratory distress syndrome (ards)
- vitamin d supplementation to prevent acute respiratory tract infections: systematic review and meta-analysis of individual participant data
- evidence that vitamin d supplementation could reduce risk of influenza and covid-19 infections and deaths
- covid-19 and vitamin d - is there a link and an opportunity for intervention?
- letter: covid-19, and vitamin d

key: cord-016300-vw11c2wt authors: jain, kewal k. title: biomarkers of pulmonary diseases date: 2017-09-18 journal: the handbook of biomarkers doi: 10.1007/978-1-4939-7431-3_16 sha: doc_id: 16300 cord_uid: vw11c2wt

lungs and airways are affected by several pathologies, the most important of which are inflammation, infection and cancer. some of the biomarkers of these pathologies are similar to those found in involvement of other organs. this chapter will briefly discuss general issues of biomarkers of pulmonary disorders listed in table 16.1. biomarkers of lung cancer are described in chapter 13.

low lung function is associated with increased morbidity and mortality. it is therefore of interest to identify biomarkers that are associated with impaired lung function. lung function (fev1 and fvc) and a panel of 15 inflammatory biomarkers (including cytokines, chemokines, adhesion molecules, crp and wbc count) from blood samples were analysed in subjects aged 70 years (kuhlmann et al. 2013). wbc count, crp and vcam-1 were found to relate to poorer lung function.
a dose-related association was found for the combination of wbc count and crp towards fev1, and of wbc and vcam-1 towards fvc. this indicates that a combination of two biomarkers yielded more information than assessing them one by one when analysing the association between systemic inflammation and lung function. oxidative stress is the hallmark of various chronic inflammatory lung diseases. increased concentrations of ros in the lungs of such patients are reflected by elevated concentrations of oxidative stress markers in the breath, airways, lung tissue and blood. traditionally, the measurement of these biomarkers has involved invasive procedures to procure the samples or to examine the affected compartments, to the patient's discomfort. non-invasive approaches to measure oxidative stress have been investigated. the collection of exhaled breath condensate (ebc) is a noninvasive sampling method for real-time analysis and evaluation of oxidative stress biomarkers in the lower respiratory tract airways. biomarkers of oxidative stress such as h2o2, f2-isoprostanes, malondialdehyde, 4-hydroxy-2-nonenal, antioxidants and glutathione, and of nitrosative stress such as nitrate/nitrite and nitrosated species, can be measured in ebc. oxidative stress biomarkers also have been measured for various antioxidants in disease prognosis. ebc is currently used as a research and diagnostic tool in free radical research, yielding information on redox disturbance and the degree and type of inflammation in the lung. it is expected that ebc can be exploited to detect specific levels of biomarkers and monitor disease severity in response to treatment. community-acquired pneumonia (cap) is one of the most common reasons for emergency department visits. despite its prevalence, there are many challenges to proper diagnosis and management of pneumonia.
there is no accurate and timely gold standard to differentiate bacterial from viral disease, and there are limitations in precise risk stratification of patients to ensure appropriate site-of-care decisions. clinical risk scores such as the pneumonia severity index (psi) and curb-65 (confusion, urea, respiratory rate, blood pressure, age > 65 years), and blood biomarkers of different physiopathological pathways, are used in predicting long-term survival in patients with cap. in a prospective study, patients admitted with cap were followed for 6 years, and cox regression models as well as area under the receiver operating characteristics curve (auc) were used to investigate associations between initial risk assessment and all-cause mortality (alan et al. 2015). initial psi and curb-65 scores both had excellent long-term prognostic accuracy, with a step-wise increase in mortality per risk class. the addition of inflammatory (pro-adrenomedullin) and cardiac (pro-atrial natriuretic peptide) blood biomarkers measured upon hospital admission further improved the prognostic capabilities of the psi. pathological changes in severe acute respiratory syndrome (sars) suggest that sars sequelae are associated with dysregulation of cytokine and chemokine production. a study from taiwan showed that cytokine or chemokine profiles in patients with sars differ markedly from those in patients with community-acquired pneumonia (cap) and control groups (chien et al. 2006). serum levels of three cytokines were significantly elevated in sars patients versus the cap group: interferon-γ-inducible protein-10 (ip-10), interleukin (il)-2, and il-6. cytokine levels began to rise before the development of chest involvement and peaked earlier than did lung injury assessed by chest x-ray. conversely, in cap patients but not sars patients or controls, levels of interferon-γ, il-10, and il-8 were elevated, and rose in tandem with radiographic changes.
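the curb-65 score discussed earlier in this section is a simple additive rule, one point per criterion met. a minimal sketch (the input values in the usage below are hypothetical patients, not study data):

```python
def curb65(confusion, urea_mmol_l, resp_rate, sbp, dbp, age):
    """CURB-65 pneumonia severity score: one point per criterion met.

    Criteria: Confusion; Urea > 7 mmol/L; Respiratory rate >= 30/min;
    low Blood pressure (systolic < 90 or diastolic <= 60 mmHg); age >= 65.
    Returns an integer score 0-5.
    """
    return sum([
        bool(confusion),
        urea_mmol_l > 7,
        resp_rate >= 30,
        sbp < 90 or dbp <= 60,
        age >= 65,
    ])
```

scores of 0-1 are commonly treated as low risk, 2 as intermediate, and 3 or more as severe pneumonia warranting consideration of intensive care, which is the step-wise mortality gradient the cited study observed.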
a further difference between groups was the ratio of il-6 to il-10, at 4.84 in sars patients versus 2.95 in cap patients. however, in both sets of patients, levels of il-6 correlated strongly with the severity of lung injury. the early induction of ip-10 and il-2, as well as the subsequent overproduction of il-6 and lack of il-10, probably contribute to the main immunopathological processes involved in sars lung injury and may be early biomarkers of lung injury. these findings differ from those observed in subjects with cap. plasma biomarkers related to inflammation − il-8 and enhanced neutrophil recruitment to the lung (icam-1) − are independently associated with increased mortality in patients with acute lung injury (ali). higher levels of il-8 and icam-1 independently predicted death (mcclintock et al. 2008). in addition, lower levels of the coagulation marker protein c were independently associated with an increased risk of death. the association of lower protein c levels with non-survivors continues to support the role for disordered coagulation in ali/ards. these associations exist despite consistent use of lung protective ventilation and persist even when controlling for clinical factors that also impact upon outcomes. the two biomarkers with an independent association with mortality, il-8 and icam-1, need to be studied further for their potential value in stratifying patients in clinical trials. acute respiratory distress syndrome (ards) is the rapid onset of respiratory failure − the inability to adequately oxygenate the blood − that often occurs in the critically ill. ali precedes ards as severe respiratory illnesses progress. both conditions can be life-threatening. in a large-scale, multicenter trial of patients with ards or ali, higher levels of nitric oxide (no) in urine were strongly associated with improved survival, more ventilator-free days, and decreased rates of organ failure (mcclintock et al. 2007).
the authors speculated that no has a beneficial effect on ali since it scavenges oxygen free radicals that are generated during oxidative stress. since no increases microcirculation, it helps to better perfuse tissue beds in the lungs. the investigators offered an alternative hypothesis to explain their findings: no created inside the body may have a beneficial effect on organs other than the lung during ali. it might help prevent further tissue damage by improving oxygen and nutrient delivery to the tissues, while helping to decrease the amount of toxic oxygen species. the authors also speculated that no might have antibacterial effects that could be important in infectious conditions that predispose patients to ali. pulmonary surfactant, a complex of lipids and proteins, functions to keep alveoli from collapsing at expiration. surfactant proteins a (sp-a) and d (sp-d) belong to the collectin family and play pivotal roles in the innate immunity of the lung. pulmonary collectins directly bind with broad specificities to a variety of microorganisms and possess antimicrobial effects. these proteins also exhibit both inflammatory and anti-inflammatory functions. the collectins enhance phagocytosis of microbes by macrophages through opsonic and/or non-opsonic activities. the proteins stimulate cell surface expression of phagocytic receptors including scavenger receptor a and mannose receptor. since the expression of sp-a and sp-d is abundant and restricted within the lung, the proteins are now clinically used as biomarkers for lung diseases. the levels of sp-a and sp-d in bronchoalveolar lavage fluids, amniotic fluids, tracheal aspirates and pleural effusions reflect alterations in alveolar compartments and epithelium, and lung maturity.
the determination of sp-a and sp-d in sera is a noninvasive and useful tool for understanding some pathological changes of the lung in diseases including pulmonary fibrosis, collagen vascular diseases complicated with interstitial lung disease, pulmonary alveolar proteinosis, acute respiratory distress syndrome and radiation pneumonitis (takahashi et al. 2006). interstitial lung disease (ild) is defined as restrictive lung function impairment with radiographic signs of ild. kl-6, a mucinous high-molecular weight glycoprotein, is expressed on type ii pneumonocytes and is a potential biomarker of ild. in a retrospective, cross-sectional analysis, caucasian patients with polymyositis (pm) or dermatomyositis (dm) and ild were shown to have elevated serum levels of kl-6 compared to patients without ild (fathi et al. 2012). at a cut-off level of 549 u/ml, the sensitivity and specificity for diagnosis of ild were 83% and 100%, respectively. the level of serum kl-6 may serve as a measure of ild in patients with pm/dm, and is a promising biomarker for use in clinical practice to assess response to treatment. chronic obstructive pulmonary disease (copd) consists of two main forms − chronic bronchitis and emphysema − and sufferers usually have a combination of these conditions. there has been increasing interest in using pulmonary biomarkers to understand and monitor the inflammation in the respiratory tract of patients with copd. bronchial biopsies and bronchoalveolar lavage provide valuable information about inflammatory cells and mediators, but these procedures are invasive, so that repeated measurements are limited. sputum provides considerable information about the inflammatory process, including mediators and proteinases in copd, but samples usually represent proximal airways and may not reflect inflammatory processes in distal bronchi.
analysis of exhaled breath is a noninvasive procedure so that repeated measurements are possible, but the variability is high for some assays. there is relatively little information about how any of these biomarkers relate to other clinical outcomes, such as progression of the disease, severity of disease, clinical subtypes or response to therapy. more information is also needed about the variability in these measurements. in the future pulmonary biomarkers may be useful in predicting disease progression, indicating disease instability and in predicting response to current therapies and novel therapies, many of which are now in development. the copd foundation biomarker qualification consortium (cbqc) is a unique public-private partnership established in 2010 between the copd foundation, the pharmaceutical industry, and academic copd experts with advisors from the us national heart lung & blood institute and fda (miller et al. 2016) . the initial intent of the cbqc was to integrate data collected in 2009 and submit a dossier for the qualification. this led to the fda qualification of plasma fibrinogen as a prognostic or enrichment biomarker for all-cause mortality and copd exacerbations in 2015. it is the first biomarker drug development tool qualified for use in copd under the fda's drug development tool qualification program. alpha1-antitrypsin (aat) is a plasma glycoprotein that inhibits neutrophil elastase, and individuals who inherit altered aat genes resulting in deficiency of the protein are at high risk for copd and liver cirrhosis. this deficiency can be detected by serum protein pattern studies. 
in the past, testing for the deficiency has been done retrospectively in patients with copd or liver disease, but the introduction of a home-administered finger-stick blood spot test for aat genotype enables affected families to construct pedigrees and identify children who are at risk for developing copd in later life and should avoid exposure to dust and smoke. extracellular matrix (ecm) remodeling of the lung tissue releases protein fragments into the blood, where they may be detected as serologic surrogate biomarkers of disease activity in copd. association of ecm turnover with severity and outcome of copd has been assessed in a prospective, observational, multicenter study of patients with global initiative for chronic obstructive lung disease (gold) grades ii to iv; serum samples were analyzed at stable state, during exacerbation, and 4 weeks after exacerbation (stolz et al. 2017). results showed that patients with the lowest levels of pro-forms of collagen type iii (pro-c3) and type vi (pro-c6) had more severe airflow limitation, hyperinflation, air trapping, and emphysema. degradation fragments of collagen type iii (c3m) and collagen type vi (c6m) were associated with dyspnea. in conclusion, serum biomarkers of ecm turnover were significantly associated with disease severity and clinically relevant outcomes in patients with copd. lung ecm remodeling in healthy controls and copd patients was investigated in the copdgene study. the data suggest that type vi collagen turnover and elastin degradation by neutrophil elastase are associated with copd-induced inflammation (eosinophilic bronchitis) and emphysema (bihlet et al. 2017). serological assessment of type vi collagen and elastin turnover may assist in identification of phenotypes likely to be associated with progression and amenable to precision medicine for clinical trials. lung failure, also termed "lung attack", is the most common organ failure seen in the intensive care unit.
lung attacks, which affect individuals with copd, are among the leading causes of emergency room visits among chronic disease sufferers. other causes are neuromuscular impairment, pulmonary edema, pneumonia, and vascular diseases such as acute or chronic pulmonary embolism. when a patient is admitted to the hospital with severe lung failure, it usually takes >3 months to get to 80% of his or her baseline health. if the patient's health is poor to start with, the new attack can be devastating or even fatal. a test that could more accurately characterize a patient's disease could make it easier to predict and treat copd progression to lung failure. there is need for a test that could be performed in any clinical lab and could be used far more widely than the current lung function tests, which are performed in certain centers by specially trained personnel. in 2012, canada's prevention of organ failure (proof) center of excellence in vancouver received funding from genome british columbia to develop a biomarker-based test for determining a copd patient's risk for having a lung attack. genes and protein biomarker sets that have been discovered at the proof center could have the ability to predict copd-caused lung attacks and need to be validated. circulating bnp levels were evaluated as a parameter for the presence and severity of pulmonary hypertension (ph) in patients with chronic lung disease (leuchte et al. 2006). during a follow-up time of approximately 1 year, significant pulmonary hypertension (mean pulmonary artery pressure > 35 mm hg) was diagnosed in more than one-fourth of patients and led to decreased exercise tolerance and life expectancy. elevated bnp concentrations identified significant pulmonary hypertension with a sensitivity of 0.85 and specificity of 0.88 and predicted mortality. moreover, bnp served as a risk factor of death independent of lung functional impairment or hypoxemia.
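the reported bnp sensitivity (0.85) and specificity (0.88) only translate into predictive values once the prevalence of ph in the tested population is taken into account ("more than one-fourth" in this cohort). a bayes-rule sketch; the 28% prevalence used in the test case is an assumption for illustration, not the study's exact figure:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value from sensitivity, specificity
    and disease prevalence, via Bayes' rule on the 2x2 table proportions."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv
```

with a prevalence around 28%, these operating characteristics give a ppv near 0.73 and an npv above 0.93, which is why the authors emphasize bnp as a screening (rule-out) test.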
it is concluded that plasma bnp facilitates noninvasive detection of significant ph with high accuracy and can be used as a screening test for the presence of ph. in addition, bnp enables an assessment of the relevance of ph and could serve as a useful prognostic parameter in chronic lung disease. a study has revealed that serum levels of the neuroendocrine activity biomarker chromogranin a (cga) are increased in male smokers with impaired lung function, and are associated with both respiratory symptoms and the degree of airway obstruction (sorhaug et al. 2006). the subgroup of airway epithelial cells belonging to the diffuse neuroendocrine system, termed pulmonary neuroendocrine cells, may represent a putative regulatory function of cga as a prohormone. they are considered to control growth and development of the fetal lung and regulation of ventilation and circulation, but may also have a role in the pathogenesis of smoking-induced airway disease. the findings indicate that neuroendocrine activation may be important in smoking-related airway inflammation and remodeling, and raise the possibility that cga could be of predictive value as a biomarker of prognosis in smoking-associated diseases. measurements of c-reactive protein (crp), a biomarker of inflammation, provide incremental prognostic information beyond that achieved by traditional biomarkers in patients with mild to moderate copd, and may enable more accurate detection of patients at a high risk of mortality. lung function decline is significantly related to crp levels, with an average predicted change in fev1 of −0.93% in the highest and 0.43% in the lowest quintile. however, respiratory causes of mortality are not significantly related to crp levels. genome-wide expression profiling of peripheral blood samples from subjects with significant airflow obstruction was performed to find non-invasive gene expression biomarkers for copd (bhattacharya et al. 2011).
correlation of gene expression with lung function measurements identified a set of 86 genes. a total of 16 biomarkers showed evidence of significant correlation with quantitative traits and differential expression between cases and controls. further comparison of these peripheral gene expression biomarkers with those previously identified from lung tissue of the same cohort revealed that two genes, rp9 and nape-pld, were decreased in copd cases compared to controls in both lung tissue and blood. these results contribute to our understanding of gene expression changes in the peripheral blood of patients with copd and may provide insight into potential mechanisms involved in the disease. patients with copd are often at high risk of early death, and identification of prognostic biomarkers may aid in improving their survival by providing early intensive therapy for high-risk patients. a study investigated the prognostic role of baseline hyperuricemia in patients with copd by retrospective evaluation of data. hyperuricemia was found not to be associated with other baseline characteristics in patients with copd. kaplan-meier survival curves showed that patients with copd and hyperuricemia had a higher risk of mortality compared with patients with normouricemia. thus, hyperuricemia is a promising biomarker of early mortality in patients with copd. decreased expression of vascular endothelial growth factor (vegf) and its receptor has been implicated in the pathogenesis of copd. levels of placenta growth factor (plgf), another angiogenic factor, are increased in the serum and bronchoalveolar lavage (bal) fluid of patients with copd and are inversely correlated with fev1 (cheng et al. 2008). serum levels of plgf in patients with copd were more than double those in smokers and nonsmokers without copd. these findings suggest that bronchial epithelial cells can express plgf, which may contribute to the pathogenesis of copd.
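the kaplan-meier comparison in the hyperuricemia study above rests on the product-limit estimator: survival is multiplied down by (1 − deaths/at-risk) at each observed event time, with censored subjects leaving the risk set without an event. a minimal pure-python sketch on toy data (not the study's data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times: follow-up time per subject; events: 1 = death observed,
    0 = censored. Returns [(t, S(t))] at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        # gather all subjects with follow-up time t (events and censorings)
        while i < len(data) and data[i][0] == t:
            leaving += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk  # product-limit step
            curve.append((t, surv))
        n_at_risk -= leaving                # shrink the risk set
    return curve
```

comparing two such curves (e.g. hyperuricemic vs normouricemic groups) is what the log-rank test formalizes; this sketch only computes the curves themselves.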
both plgf and vegf expression levels were increased in cultured bronchial epithelial cells exposed to pro-inflammatory cytokines such as tnf-α and il-8. although the mechanisms underlying the observed detrimental effects of plgf remain to be clarified, persistent plgf expression might have adverse effects on lung parenchyma by down-regulating angiogenesis. although the aim of management of patients with asthma is to control their symptoms and prevent exacerbations and morbidity of the disease, optimal management may require assessment and monitoring of biomarkers, i.e., objective measures of lung dysfunction and inflammation. clinical observations suggest that rhinovirus infection induces a specific inflammatory response in predisposed individuals that results in worsened asthmatic symptoms and increased airway inflammation. a study has shown that ifn-γ-induced protein (ip)-10 is specifically released in acute virus-induced asthma, and can be measured in the serum to predict a viral trigger of acute exacerbations (wark et al. 2007). primary bronchial epithelial cell models of rhinovirus infection were used to identify mediators of rhinovirus infection, and responded to infection with rhinovirus-16 by releasing high levels of ip-10, rantes, and il-16, as well as smaller amounts of il-8 and tnf-α. ip-10, perhaps in combination with tnf-α, might be a useful clinical marker to identify rhinovirus and other virus-induced acute asthma. additional findings suggest that ip-10 or cxcr3 (an ip-10 receptor that is highly expressed in activated t cells) might have a role in worsening of airflow obstruction and airway inflammation, and may therefore be potential therapeutic targets. international guidelines on the management of asthma support the early introduction of corticosteroids to control symptoms and to improve lung function by reducing airway inflammation.
however, not all individuals respond to corticosteroids to the same extent, and it would be desirable to be able to predict the response to corticosteroid treatment. several biomarkers have been assessed following treatment with corticosteroids, including measures of lung function, peripheral blood and sputum indices of inflammation, exhaled gases, and breath condensates. the most widely examined measures in predicting a response to corticosteroids are airway hyperresponsiveness, exhaled no (eno), and induced sputum. of these, sputum eosinophilia has been demonstrated to be the best predictor of a short-term response to corticosteroids. more importantly, directing treatment at normalizing the sputum eosinophil count can substantially reduce severe exacerbations. the widespread utilization of sputum induction is hampered because the procedure is relatively labor intensive. the measurement of eno is simpler, but incorporating the assessment of no in an asthma management strategy has not led to a reduction in exacerbation rates. the challenge now is either to simplify the measurement of sputum eosinophilia or to identify another inflammatory marker with efficacy similar to the sputum eosinophil count in predicting both the short- and long-term responses to corticosteroids. airway inflammation is associated with an increased expression and release of inflammatory reactants that regulate processes of cell migration, activation, and degranulation. one study quantified bronchoalveolar lavage (bal) fluid and serum levels of il-8, secretory leukocyte protease inhibitor (slpi), soluble intracellular adhesion molecule-1 (sicam-1), and scd14 as surrogate markers of inflammatory and immune response in asthma and copd patients with similar disease duration (hollander et al. 2007). biomarkers were measured using commercially available elisa kits. the findings show that of the four measured biomarkers, only bal il-8 was higher in copd patients when compared to asthma patients.
severe asthma is characterized by elevated levels of proinflammatory cytokines and neutrophilic inflammation in the airways. blood cytokines, biomarkers of systemic inflammation, may be a feature of increased inflammation in severe asthma. one study found that il-8 and tnf-α levels were higher in severe asthmatics than in mild-moderate asthmatics or in controls and, in conjunction with augmented circulating neutrophils, suggest the involvement of a neutrophil-derived cytokine pattern (silvestri et al. 2006). furthermore, in patients with severe asthma, tnf-α levels were positively correlated with both exhaled nitric oxide and circulating neutrophil counts. cytokine levels were elevated even though the patients were on high-dose inhaled steroids. this finding might reflect the inability of these drugs to significantly suppress production of this cytokine by airway cellular sources, including epithelial cells and inflammatory cells. in patients with severe asthma there may be an imbalance between il-8 production and the blocking capacity of il-8 autoantibodies. the findings of this study may be clinically relevant and suggest that drugs that block tnf-α release or activity might represent a new treatment option in severe asthma. airway hyperresponsiveness is the main feature of asthma and is defined as an increase in the ease and degree of airway narrowing in response to bronchoconstrictor stimuli. inflammation plays a central role in the pathogenesis of asthma, and much of it can be attributed to helper t cell type 2 cytokine activation, the degree of which strongly correlates with disease severity. one of the inflammatory mediators in asthma is nitric oxide (no). the exhaled no level is elevated in asthma, particularly allergic asthma during the pollen season, and can predict asthma exacerbation. it may be clinically more useful to compare exhaled no values with a subject's previous values than to compare them with a population-based normal range.
cough variant asthma (cva) and atopic cough both present with bronchodilator-resistant non-productive cough but may be differentiated from other causes of chronic non-productive cough by measuring exhaled no. exhaled no levels in patients with atopic cough are significantly lower than those in patients with cva and bronchial asthma (fujimura et al. 2008). there is no significant difference in exhaled no levels between patients with cva and bronchial asthma. findings from a uk study show that it is feasible to measure bronchial no flux (jno) and alveolar no concentration (calv) in 70% of children, with calv levels potentially reflecting alveolar inflammation in asthma (paraskakis et al. 2006). calv and jno were calculated from the fractional exhaled no (feno50) measured at multiple exhalation flow rates in asthmatic children. although feno50 and jno give essentially the same information, calv is higher in asthmatic children than in normal children. this study also highlights the relationship between poor control of asthma and calv (a biomarker of alveolar inflammation), but further work is needed to confirm the relevance of this. a novel nanosensor can detect a possible asthma attack before it begins. the minute sensor can be fitted into a hand-held device, and when a person blows into the device, it measures the no content of their breath. use of this device would provide asthma sufferers with a simple and cost-effective way to monitor their asthma inflammation. an explanation for increased levels of exhaled no is nonenzymatic generation of no from nitrite due to airway acidification in asthmatics. reduced arginine availability may also contribute to lung injury by promoting formation of cytotoxic radicals such as peroxynitrite. as arginine levels decline, nitric oxide synthase (nos) itself can begin to generate superoxide in lieu of no, thereby favoring no consumption via the generation of peroxynitrite that could induce lung injury.
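the calculation of calv and jno from feno measured at multiple flow rates can be illustrated with a small sketch. under the commonly used linear two-compartment approximation (valid at higher exhalation flows), no output (flow × feno) is approximately linear in flow, so calv is the fitted slope and jno the intercept. this is a simplified illustration with invented measurement values, not the exact method of paraskakis et al.:

```python
def fit_no_compartments(flows_mls, feno_ppb):
    """Fit the linear two-compartment model of exhaled NO.

    At moderate-to-high exhalation flows, NO output (flow x FeNO) is
    approximately linear in flow:
        V'NO = JNO + Calv * flow
    so an ordinary least-squares line through (flow, V'NO) gives Calv
    as the slope (ppb) and JNO as the intercept (pl/s here, since flow
    is in ml/s and FeNO in ppb ~ pl/ml).
    """
    outputs = [f * c for f, c in zip(flows_mls, feno_ppb)]
    n = len(flows_mls)
    mean_x = sum(flows_mls) / n
    mean_y = sum(outputs) / n
    sxx = sum((x - mean_x) ** 2 for x in flows_mls)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(flows_mls, outputs))
    calv = sxy / sxx                # slope: alveolar NO concentration
    j_no = mean_y - calv * mean_x   # intercept: bronchial NO flux
    return j_no, calv

# hypothetical FeNO readings at three exhalation flows (values invented)
j_no, calv = fit_no_compartments([50, 100, 200], [20, 12, 8])
```

feno falls as flow rises because the fixed bronchial flux is diluted by more air, while the alveolar contribution stays roughly constant; the regression separates the two.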
this reduction in bioavailability of no via formation of species such as peroxynitrite could be further amplified by the rapid loss of sod activity during the asthmatic response. plasma arginase activity declines significantly with treatment and improvement of symptoms. additional studies are needed to determine whether measurements of plasma arginase activity will provide a useful biomarker for the underlying metabolic disorder and the efficacy of treatment for this disease. the arginase activity present in serum probably does not accurately reflect whole-body arginase activity or that compartmentalized in the lungs, since the arginases are intracellular enzymes. because arginase is induced in monocytes in response to helper t cell type 2 cytokines, it is speculated that these cells are one likely source of the elevated arginase in serum, consistent with the localization of arginase expression within macrophages in the lungs. although exhaled no is a clinically useful biomarker of eosinophilic airway inflammation in asthma, significant validation and investigation are required before exhaled breath condensate could be used for making decisions in clinical practice (simpson and wark 2008). endothelins are proinflammatory, profibrotic, broncho- and vasoconstrictive peptides, which play an important role in the development of airway inflammation and remodeling in asthma. a study has evaluated endothelin-1 (et-1) levels in the exhaled breath condensate (ebc) of asthmatics with different degrees of asthma severity (zietkowski et al. 2008). et-1 concentrations in the ebc of all asthmatic patients were significantly higher than in healthy volunteers. et-1 levels were significantly higher in patients with unstable asthma than in the two groups with stable disease. thus, measurements of et-1 in ebc may provide another useful diagnostic tool for detecting and monitoring inflammation in patients with asthma.
the release of et-1 from the bronchial epithelium, under the influence of the many inflammatory cells essential in asthma and through interactions with other cytokines, may play an important role in the increase in airway inflammation observed after post-exercise bronchoconstriction in asthmatic patients. ige plays a central role in the pathophysiology of asthma. the two essential phases in this pathophysiology are sensitization to allergen and clinical expression of symptoms on reexposure to the sensitizing allergen. omalizumab (xolair, genentech) is a recombinant humanized igg1 monoclonal anti-ige antibody that binds to circulating ige, regardless of allergen specificity, forming small, biologically inert ige-anti-ige complexes without activating the complement cascade. an 89-99% reduction in free serum ige (i.e., ige not bound to omalizumab) occurs soon after the administration of omalizumab, and low levels persist throughout treatment with appropriate doses. a total serum ige level should be measured in all patients who are being considered for treatment with omalizumab, because the dose of omalizumab is determined on the basis of the ige level and body weight. the dose is based on the estimated amount of the drug that is required to reduce circulating free ige levels to less than 10 iu per milliliter. lebrikizumab (roche) is an injectable humanized mab designed to block il-13, which contributes to key features of asthma. lebrikizumab improves lung function in adult asthma patients who are unable to control their disease on inhaled corticosteroids. il-13 induces bronchial epithelial cells to secrete periostin, a matricellular protein. increased levels of periostin, a biomarker of asthma, can be measured in the blood. in the milly phase ii trial, patients with high pretreatment periostin levels had greater improvement in lung function when treated with lebrikizumab, compared to patients with low periostin levels (corren et al. 2011). the primary endpoint of the trial showed that at week 12, lebrikizumab-treated patients had a 5.5% greater increase in lung function from baseline compared to placebo. lebrikizumab-treated patients in the high-periostin subgroup experienced an 8.2% relative increase from baseline in forced expiratory volume in 1 second (fev1), compared with placebo. in the low-periostin subgroup, patients on the drug experienced a 1.6% relative increase in fev1, compared with placebo. these results support further investigation of lebrikizumab as a personalized medicine for patients who suffer from moderate to severe uncontrolled asthma; periostin enables selection of patients who will benefit most from the drug. cystic fibrosis (cf) is the most common serious genetic disease among caucasians in the us. the disease results from a defective gene that affects multiple aspects of cellular function. its most serious symptom is a build-up of thick, sticky mucus in the airways, which can lead to fatal lung infections. the usual method for screening and diagnosis is genotyping of cystic fibrosis transmembrane conductance regulator (cftr) gene mutations. antibody microarrays have been developed as a platform for identifying a cf-specific serum proteomic signature. serum samples from cf patients have been pooled and compared with equivalent pools of control sera in order to identify patterns of protein expression unique to cf. the set of significantly differentially expressed proteins is enriched in protein mediators of inflammation from the nf-kappab signaling pathway, and in proteins that may be selectively expressed in cf-affected tissues such as lung and intestine. in several instances, the data from the antibody microarrays can be validated by quantitative analysis with reverse capture protein microarrays.
in conclusion, antibody microarray technology is sensitive, quantitative, and robust, and can be useful as a proteomic platform to discriminate between sera from cf patients and controls. saliva, because of its noninvasive collection process, shows great potential as a biological fluid for cf monitoring. extensive protein degradation and differentially expressed proteins have been identified in sputum as biomarkers of inflammation relating to pulmonary exacerbations of cf. the use of fiber microarrays to measure significant variations in the levels of six proteins in saliva supernatants - vegf, mmp-9, ip-10, il-8, il-1β and egf - as well as the correlations of these levels with clinical assessments, has demonstrated the value of saliva for cf research and monitoring (nie et al. 2015).
references (titles as extracted):
the prohosp study group. clinical risk scores and blood biomarkers as predictors of long-term outcome in patients with community-acquired pneumonia: a 6-year prospective follow-up study
peripheral blood gene expression profiles in copd subjects
biomarkers of extracellular matrix turnover are associated with emphysema and eosinophilic-bronchitis in copd
increased expression of placenta growth factor in chronic obstructive pulmonary disease
temporal changes in cytokine/chemokine profiles and pulmonary involvement in severe acute respiratory syndrome
lebrikizumab treatment in adults with asthma
kl-6: a serological biomarker for interstitial lung disease in patients with polymyositis and dermatomyositis
exhaled nitric oxide levels in patients with atopic cough and cough variant asthma
serum and bronchial lavage fluid concentrations of il-8, slpi, scd14 and sicam-1 in patients with copd and asthma
association of biomarkers of inflammation and cell adhesion with lung function in the elderly: a population-based study
brain natriuretic peptide is a prognostic parameter in chronic lung disease
higher urine nitric oxide is associated with improved outcomes in patients with acute lung injury
biomarkers of inflammation, coagulation and fibrinolysis predict mortality in acute lung injury
plasma fibrinogen qualification as a drug development tool in chronic obstructive pulmonary disease. perspective of the chronic obstructive pulmonary disease biomarker qualification consortium
correlations of salivary biomarkers with clinical assessments in patients with cystic fibrosis
measurement of bronchial and alveolar nitric oxide production in normal children and children with asthma
parathyroid hormone as a novel biomarker for chronic obstructive pulmonary disease: korean national health and nutrition examination survey
high serum levels of tumour necrosis factor-α and interleukin-8 in severe asthma: markers of systemic inflammation?
the role of exhaled nitric oxide and exhaled breath condensates in evaluating airway inflammation in asthma
increased serum levels of chromogranin a in male smokers with airway obstruction
systemic biomarkers of collagen and elastin turnover are associated with clinically relevant outcomes in copd
pulmonary surfactant proteins a and d: innate immune functions and biomarkers for lung diseases
ifn-gamma-induced protein 10 is a novel biomarker of rhinovirus-induced asthma exacerbations
hyperuricemia is a biomarker of early mortality in patients with chronic obstructive pulmonary disease
endothelin-1 in exhaled breath condensate of stable and unstable asthma patients
key: cord-270184-bq5p2gs6 authors: alrubaiee, gamil ghaleb; al-qalah, talal ali hussein; al-aawar, mohammed sadeg a.
title: knowledge, attitudes, anxiety, and preventive behaviours towards covid-19 among health care providers in yemen: an online cross-sectional survey date: 2020-10-13 journal: bmc public health doi: 10.1186/s12889-020-09644-y sha: doc_id: 270184 cord_uid: bq5p2gs6 background: the growing incidence of coronavirus disease (covid-19) continues to cause fear, anxiety, and panic in the community, especially among healthcare providers (hcps), the group most vulnerable to contracting this new sars-cov-2 infection. to protect hcps and enhance their ability to perform their role in responding to covid-19, healthcare authorities must help to alleviate the level of stress and anxiety among hcps and the community. this will improve knowledge, attitude and practice towards covid-19, especially for hcps. in addition, authorities need to respond to this virus by implementing control measures and other precautions. this study explores the knowledge, attitude, anxiety, and preventive behaviours of yemeni hcps towards covid-19. methods: a descriptive, web-based cross-sectional study was conducted among 1231 yemeni hcps. the covid-19 related questionnaire was designed using google forms, and the responses were coded and analysed using the statistical package for the social sciences software (ibm spss), version 22.0. descriptive statistics and pearson's correlation coefficient test were also employed in this study. a p-value of < 0.05 with a 95% confidence interval was considered statistically significant. the data collection phase commenced on 22nd april 2020 at 6 pm and finished on 26th april 2020 at 11 am. results: the results indicated that of the 1231 hcps participating in this study, 61.6% were male, and 67% were aged between 20 and 30 years with a mean age of 29.29 ± 6.75. most (86%) held a bachelor's degree or above, and 88.1% had 10 years of work experience or less.
however, while 57.1% of the respondents obtained their information via social networks and news media, a further 60.0% had never attended lectures/discussions about covid-19. the results further revealed that the majority of respondents had adequate knowledge, an optimistic attitude, a moderate level of anxiety, and high performance in preventive behaviours towards covid-19 (69.8, 85.1, 51.0, and 87.7%, respectively). conclusion: although the yemeni hcps exhibited an adequate level of knowledge, an optimistic attitude, a moderate level of anxiety, and high performance in preventive behaviours toward covid-19, the results highlighted gaps, particularly in their knowledge and attitude towards covid-19. a cluster of pneumonia cases of unknown origin was reported in wuhan, china, on 12th december 2019 [1]. among the initial 41 cases reported, most originated from vendors and dealers working in the huanan seafood market in wuhan [2]. the world health organisation (who) and the chinese authorities identified the causative agent as a new strain of coronavirus (sars-cov-2), and the disease it causes was named coronavirus disease 2019, commonly referred to thereafter as covid-19 [3]. initially, sars-cov-2 quickly spread within china before spreading dramatically to other countries on a global scale [4]. on 11th march 2020, who declared the outbreak of covid-19 a global pandemic [5]. as of 12th september 2020, the virus had infected over 28,329,790 people, causing 911,877 deaths in 216 countries worldwide [6]. in yemen, the fight against covid-19 began on 10th april 2020 with the initial case confirmed in ash shihr, the hadramout province, southern yemen. on 29th april 2020, five more cases of covid-19 were confirmed and registered in aden city, the temporary capital of yemen. after that, cases started to increase in other cities daily.
as of 12th september 2020, 2011 cases of covid-19 had been reported in the republic of yemen, of which 1211 cases had since recovered, with 583 deaths. however, the number of covid-19 cases is anticipated to be much higher than these figures, particularly given the lack of transparency and the inability to effectively track and control the spread and number of cases reported in north yemen [6]. at present, the exact dynamics and transmission of the virus have not been determined. however, according to who, the virus can be transmitted via air droplets and fomites during close and unprotected contact between an infected person and a healthy person [7]. according to the centers for disease control and prevention (cdc), sars-cov-2 is transmitted from person to person through close contact (within 6 ft) with an infected person via respiratory droplets produced during coughing or sneezing, or by touching a surface or an object that is contaminated with the virus and then touching one's eyes, nose or mouth [8]. in most cases, those infected with covid-19 experience no symptoms or mild to moderate symptoms that resolve within several weeks of isolation. in contrast, however, it can cause severe respiratory syndrome or death, particularly in older people or patients with chronic diseases [9]. similarly, healthcare providers (hcps), as the front line of defence in treating patients with covid-19, are more susceptible to this spreading infection [10]. on 27th july 2020, who estimated that close to 10% of all covid-19 cases globally, accounting for nearly 1.5 million cases, were related to hcps. however, this figure is possibly underestimated, as, at that time, no systematic reporting or other measures were in place [11]. indeed, information released by the international council of nurses (icn) reported that, up to june 2020, nearly 230,000 hcps worldwide had acquired covid-19, with over 600 nurses dying [12].
in the context of yemen, the ongoing war and civil unrest over the past six years have severely damaged or destroyed much of the country's infrastructure, with only 51% of the country's health facilities remaining in operation [13]. this consists of two testing centres and 500 ventilators for a population of nearly 30 million people. further, the country continues to suffer from limited testing capacity and a critical shortage of health care supplies, including basic personal protective equipment (ppe) and other measures, and is limited in its ability to track the spread of the virus, especially given the similarity of covid-19 symptoms with other diseases that already prevail in the country [14]. all these factors place the country in a uniquely dangerous and uncompromising position should covid-19 spread uncontrollably within the community, further burdening hcps in the country. however, viewing this situation from a wider perspective, the rapid spread of covid-19 globally has caused a considerable level of anxiety, fear and panic among the population in countries worldwide, especially given the fact that hcps and the elderly are most vulnerable to the risk of infection [15]. according to who, the shortage of appropriate ppe and other preventive measures directly endangers hcps and represents a major cause of concern for countries [16]. likewise, the availability and correct use of ppe is critical in order to protect and safeguard frontline workers such as hcps in coping with covid-19. what is of prime importance at this stage, though, is for hcps to adhere to these preventive measures, which largely depends on their knowledge, attitude, and practice in addressing this highly contagious virus [2].
nevertheless, yemeni hcps had been facing a double battle even before this pandemic eventuated, given that yemen, according to who, is more than 50% below the global benchmark for coverage of basic health care services. furthermore, the limited number of skilled hcps in the country have not received salaries for nearly five years. the proportion of medics in yemen has been calculated at 10 medics to every 10,000 of the population, and the number of nurses and midwives available remains inadequate to fill this shortage. these issues are further compounded by the 'brain drain' of people seeking better opportunities offshore and a weakened medical education system [17]. therefore, to ensure the protection of hcps and safeguard yemen from covid-19, there is an urgent need to upskill and enhance the understanding and awareness of covid-19 among hcps. this study aims to assess the knowledge, attitude, fear, and anxiety, as well as the preventive behaviours, of hcps towards covid-19. study area, study design, and study period: a descriptive, web-based cross-sectional survey was conducted among yemeni hcps between 22nd april 2020, 6 pm and 26th april 2020, 11 am. all hcps who provided direct healthcare services to patients were invited to participate in the study. the questionnaire developed and used in this study was adapted from previously published studies with the authors' permission [2, 18]. the questionnaire consisted of 58 items that sought to collect information on the respondents' knowledge, attitude, anxiety, and preventive behaviours toward covid-19. the questionnaire comprised five parts. part (1) covered socio-demographic characteristics such as age, sex, occupation, education level, years of working experience, and sources of covid-19 related knowledge. part (2) covered the respondents' knowledge (21-items).
part (3) covered the respondents' attitude (10-items) and part (4) the respondents' anxiety (17-items). part (5) included questions on the respondents' preventive behaviours (10-items). scoring of knowledge, attitude, anxiety, and preventive behaviours: the scoring system used in this study was adapted from the work of taghrir et al. [18] and roy et al. [2]. the 21 knowledge items were assessed with either a "yes" or "no" response, in which each correct response was awarded a score of one (1), while a zero (0) was assigned to an incorrect response. the scores ranged between 0 (no correct answers) and 21 (all answers correct). a score of less than 11 was considered inadequate knowledge, scores between 11 and 16 moderate knowledge, and a score of 17 and above adequate knowledge. similarly, the 10 items signifying the respondents' attitudes were evaluated with a "correct" or "incorrect" response. scores between zero (0) and seven (7) were considered a negative attitude, while scores between eight (8) and ten (10) were considered a positive attitude. the 17 anxiety items were assessed via a 5-point likert scale with responses ranging from 1 = "never" to 5 = "always". the total cumulative score ranged between 17 and 85. here, scores between 17 and 50 were considered "low anxiety", scores between 51 and 67 "moderate anxiety", and scores between 68 and 85 "high anxiety". the 10 preventive-behaviour items were assessed with a "yes" or "no" response. a score between zero (0) and seven (7) was considered "low performance", while a score between eight (8) and ten (10) was considered "high performance".
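the banding rules above can be expressed as a small helper function. the cut-offs are taken directly from the text, while the function name and structure are only illustrative:

```python
def categorize(score, bands):
    """Return the label of the highest band whose inclusive lower
    bound does not exceed `score`. `bands` is an ascending list of
    (lowest_score, label) pairs."""
    label = bands[0][1]
    for lowest, name in bands:
        if score >= lowest:
            label = name
    return label

# cut-offs as stated in the study's scoring scheme
KNOWLEDGE = [(0, "inadequate"), (11, "moderate"), (17, "adequate")]
ATTITUDE = [(0, "negative"), (8, "positive")]
ANXIETY = [(17, "low"), (51, "moderate"), (68, "high")]
PREVENTIVE = [(0, "low performance"), (8, "high performance")]

print(categorize(18, KNOWLEDGE))  # adequate
```

keeping the cut-offs in data rather than hard-coded branches makes the four scales reuse one function and makes the band boundaries easy to audit against the paper.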
three experts with a background in infectious disease and epidemiology (one specialist in infectious disease and two epidemiologists) were invited to assess the content validity of the questionnaire items. the reliability of the questionnaire items was based on a pilot study that included 40 participants and was tested using cronbach's alpha, with the results showing 0.79 for the knowledge part, 0.77 for the attitude part, 0.80 for the anxiety part, and 0.75 for the preventive behaviours part. due to the outbreak of covid-19 and the specific preventive precautions and measures recommended by the ministry of health and population in yemen, an electronic web-based self-reported questionnaire was designed to comply with the recommendations. the internet link was distributed to the hcps via email, whatsapp, telegram, and other forms of social media. to participate in the study, hcps needed to be living in the republic of yemen, regardless of gender, be aged 20 years or older, be aware of the covid-19 outbreak, and have signed a consent form to participate in the study. participation in the study was voluntary, and personal details of the participants were not recorded on the questionnaire. respondents in receipt of the questionnaire were encouraged to forward the survey to other colleagues who might be interested in participating in the study as well. approval of the ethics committee of al-razi university was obtained before conducting the study. the respondents needed to confirm their willingness to participate on a voluntary basis by answering a "yes or no" question on a written informed consent form before being allowed to complete the online self-reporting questionnaire. the statistical package for social sciences (ibm spss), version 22.0, was used in the administration and analysis of the collected data.
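as a sketch of the reliability check described above, cronbach's alpha can be computed from the item-score matrix of a pilot sample. this is an illustrative pure-python version using the population-variance convention (one common choice); the data in the test below are invented, not the study's pilot data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    `items` is a list of k lists, one per questionnaire item, each
    holding the scores of the same n respondents:
        alpha = k/(k-1) * (1 - sum(item variances) / var(total scores))
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

when items move together (high inter-item covariance), the variance of the totals dominates the summed item variances and alpha approaches 1; values around 0.75-0.80, as reported for this questionnaire, indicate acceptable internal consistency.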
descriptive analyses using mean values and standard deviations for continuous variables, and counts and percentages for dichotomous or categorical variables, were used to describe the data. the relationship between the study variables was assessed using pearson's correlation coefficient test. a p-value of < 0.05 (two-tailed) with a 95% confidence interval was reported as significant for the correlation analysis. the respondents' socio-demographic data are presented in table 1 below. as shown in the table, over half (61.6%) of the hcps were male, and more than 67% of respondents were aged between 20 and 30 years with a mean of 29.29 ± 6.75. regarding the occupation of respondents, 22.5% were physicians, followed by pharmacists (17.8%), laboratory technicians/workers (16.5%), and nurses (16.0%). regarding their education and working experience, 4.5% of respondents held a phd and 1.8% held a board position, with 88.1% of all respondents having 10 years or less of working experience. concerning covid-19 related information sources, social media was highlighted as the main source (31.0%), followed by news media (26.1%). around 99.0% of respondents were aware of covid-19, while 60.0% had never attended lectures or discussions on covid-19. the level of knowledge among healthcare providers regarding the covid-19 pandemic is presented in fig. 1 below. twenty-one items within the questionnaire, each with a "true" or "false" response choice, were used to assess the extent of the respondents' knowledge regarding covid-19. as shown in fig. 1, the majority of hcps (69.80%) were found to have an adequate level of knowledge regarding covid-19, while 29.70% had moderate knowledge, and only 0.60% were considered to have inadequate knowledge.
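the correlation analysis described above reduces to pearson's product-moment coefficient. a minimal pure-python sketch (illustrative only, not the spss implementation) is shown below; the t statistic is included because the two-tailed p-value would be read from a t distribution with n-2 degrees of freedom:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length sequences of paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for testing r != 0 with n paired observations;
    the p-value comes from a t distribution with n-2 df (lookup
    omitted here to keep the sketch dependency-free)."""
    return r * math.sqrt((n - 2) / (1 - r * r))
```

with a sample of 1231 respondents, even small coefficients such as r = 0.078 yield large t statistics, which is why the study can report p < 0.05 for correlations that are weak in magnitude.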
the lower percentages were attributed to four (4) statements, which discussed the importance of wearing face masks, the need to wear n95 face masks only during intubation, suction, bronchoscopy, and cardiopulmonary resuscitation, treating the disease with the usual antiviral drugs, and antibiotics as the first-line (of defence) treatment; these scored 69.9, 68.8, 28.47, and 27.3%, respectively. the level of attitude of yemeni hcps towards the covid-19 pandemic is shown in fig. 2 below. the respondents' attitude towards the covid-19 pandemic was assessed using ten (10) items with a "yes" or "no" response choice. as shown in fig. 2, the findings indicate that the majority of respondents (85.10%) had a positive attitude, while 14.90% had a negative attitude towards the covid-19 pandemic. however, although the vast majority of respondents exhibited a high degree of optimism towards the pandemic, 75.1% still believed that they would not contract the disease, and almost 29.4% were willing to move to other locations within the country to be safe and secure during the pandemic. the level of anxiety among yemeni hcps toward the covid-19 pandemic is illustrated in fig. 3 below. the level of anxiety among hcps was assessed using 17 items, with the answers rated on a 5-point likert scale ranging from 1 = "never" to 5 = "always". as shown in fig. 3, the findings indicate that just over half of the respondents had a moderate level of anxiety towards the pandemic, 27.70% had a high level of anxiety, and 21.30% had a low level of anxiety towards the covid-19 pandemic. healthcare providers' self-reported preventive behaviours toward the covid-19 pandemic: ten items, each requiring a "yes" or "no" response, were used to assess the respondents' level of self-reported preventive behaviours towards covid-19. five (5) items concerned avoiding or reducing visits to public places in daily life.
one item was related to preventive behaviour such as coughing/sneezing etiquette, two items were related to hand washing and frequent disinfection of surface areas, and one item was related to talking with family and friends about preventive measures associated with covid-19. as can be seen in fig. 4 , the vast majority (87.70%) of respondents exhibited sufficient preventive behaviours, while only 12.30% demonstrated low preventive behaviours. the lowest score (84.8%) was related to cancelling or postponing activities and events such as eating out, sporting activities, and meeting with colleagues. association between the respondents' socio-demographic characteristics and their knowledge, attitude, anxiety, and preventive behaviours the association between the respondents' socio-demographic characteristics and their knowledge, attitude, anxiety, and preventive behaviours towards the covid-19 pandemic is reflected in table 2 below. the correlation between hcps' knowledge, attitude, anxiety, and preventive behaviour scores is shown in table 3 below. the correlations were divided into four (4) levels based on the following criteria: weak = 0-0.25, fair = 0.25-0.5, good = 0.5-0.75, and excellent = 0.75 or greater [19] . as shown in table 3 , there was a significant positive linear correlation between knowledge-attitude (r = 0.176, p < 0.001), knowledge-anxiety (r = 0.136, p < 0.001), knowledge-preventive behaviours (r = 0.320, p < 0.001), attitude-anxiety (r = 0.078, p < 0.006), attitude-preventive behaviours (r = 0.293, p < 0.001), and anxiety-preventive behaviours (r = 0.284, p < 0.001). accordingly, the results indicate a relationship among knowledge, attitude, anxiety, and preventive behaviours towards the covid-19 pandemic. 
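the four-band interpretation of r cited from [19] can be expressed as a small helper; note that the source lists shared endpoints (0.25, 0.5, 0.75), so assigning a boundary value to the higher band is an assumption made here, and the dictionary of coefficients simply restates the study's reported values.

```python
def correlation_strength(r):
    """classify |r| into the bands cited from [19]:
    weak 0-0.25, fair 0.25-0.5, good 0.5-0.75, excellent >= 0.75.
    boundary values go to the higher band (an assumption; the source
    lists overlapping endpoints)."""
    a = abs(r)
    if a >= 0.75:
        return "excellent"
    if a >= 0.5:
        return "good"
    if a >= 0.25:
        return "fair"
    return "weak"

# the study's reported pairwise coefficients (all significant)
reported = {
    "knowledge-attitude": 0.176,
    "knowledge-anxiety": 0.136,
    "knowledge-behaviour": 0.320,
    "attitude-anxiety": 0.078,
    "attitude-behaviour": 0.293,
    "anxiety-behaviour": 0.284,
}
strengths = {pair: correlation_strength(r) for pair, r in reported.items()}
```

applying the helper shows that, under these criteria, the significant correlations reported in table 3 all fall into the weak or fair bands despite their small p values, which is worth keeping in mind when interpreting the results.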
since the first confirmed case was announced in yemen on 10th april 2020, in ash shihr (a port city in the hadhramaut province of southern yemen), rising fear and anxiety about the possibility of contracting covid-19 and its outbreak extended to other provinces. hcps, as the front line of defence, and older people were more vulnerable to contracting covid-19 than most other people. during this time, there was also a critical shortage of ppe given the current conflict in the region and civil unrest in the country [14] . equally important during this period was the need to understand hcps' level of preparedness to cope with the outbreak of covid-19 in the country. this motivated the current study, which aimed to explore the level of knowledge, attitude, anxiety, and preventive behaviours among hcps towards the outbreak of covid-19 in the country. the findings have shown that while the majority of respondents (60.0%) had never attended training courses on covid-19, most (69.80%) had acquired an adequate level of knowledge about the outbreak of the virus. on the other hand, the four (4) statements reflecting the importance of wearing face masks in the community, having to wear n95 face masks only during intubation, suction, bronchoscopy, and cardiopulmonary resuscitation, and the possibility of treating the disease using antiviral drugs and antibiotics as first-line treatment scored the lowest at 69.9, 68.8, 28.47, and 27.3%, respectively. this result possibly highlights the need to direct more attention toward developing educational courses and programmes related to covid-19. likewise, the adequate level of knowledge among the respondents could be attributed to their educational level, since most (73.0%) of the respondents held a bachelor's degree or higher (i.e. a master's degree). 
accordingly, an educated professional group such as this could help to collect knowledge of covid-19 from a variety of sources. however, 57.1% of hcps seemed to use social media and news media as their main sources of information, which is a significant concern given the questionable reliability of this information. this is because utilising such media can mislead hcps by spreading fabricated and unverified information. it is also worth highlighting that the respondents' level of knowledge differed statistically significantly only according to their age, occupation, and educational level. furthermore, these results are consistent with the results of a previous study [20] which reported that the level of knowledge towards covid-19 differs significantly across different age groups, educational levels, and professions. the results are also in line with the results of giao et al. [9] and saqlain et al. [21] regarding the difference in the level of respondents' anxiety based on their profession. concerning the level of respondents' attitude, it was found to differ significantly based on the participants' occupations. this corroborates a study by giao et al. [9] , which reported a significant association between respondents' attitude and their occupation. in contrast, however, the result seems to differ from the results of saqlain et al. [21] and rahman and sathi [20] , who stated that a positive attitude toward covid-19 did not vary significantly across different occupations. equally, the results revealed that the respondents' level of anxiety was significantly different based on their gender and educational level. these results support the findings reported by al-hanawi et al. [22] that respondents' level of worry or concern attributed to covid-19 differs significantly across gender and educational level. this result is also in line with previous studies [23, 24] carried out in china, indicating that females have higher levels of anxiety compared to males. 
similarly, the respondents' level of self-reported preventive behaviour significantly differed according to their gender, occupation, years of working experience, and educational level. these results are in agreement with the results of rahman and sathi [20] on the variation of respondents' preventive behaviour across different age groups, al-hanawi et al. [25] regarding the gender of respondents, saqlain et al. [21] regarding the respondents' years of working experience, and khasawneh et al. [26] regarding the respondents' educational level. with respect to the respondents' attitude, the results showed that 85.10% of respondents had an optimistic attitude towards covid-19, though unfortunately the findings also revealed that 75.1% believed that they would avoid infection, and close to 29.4% of respondents were willing to relocate to protect themselves from covid-19. this result suggests that most of the respondents were either confident of protecting themselves from the virus or unaware of how contagious covid-19 is. similarly, nearly one-third of respondents would look to leave their work and relocate for fear of infection, which could contribute to a shortage of hcps if the situation were to become more serious, i.e. with rising infections. accordingly, based on the results and the information presented above, it is imperative, given the seriousness of the issue, that training courses and awareness programmes on covid-19 be created and that such information be disseminated via official websites. the high level of optimism and positive attitude of respondents in the current study could also be explained by the limited number of cases reported in yemen at that stage and by the adequate level of knowledge respondents had gained between the outbreak of the virus and the time this research study was conducted. according to roy et al. [2] , adequate awareness often leads to optimistic attitudes, which could positively affect the preparedness of hcps to address pandemic issues. 
furthermore, the results of the current study showed a positive correlation between the respondents' knowledge and their attitude, which could support this conjecture. moreover, the findings of the current study are consistent with a study by giao et al. [9] , which found that healthcare workers had a high level of knowledge and a positive attitude towards covid-19. these findings are also in line with the results of a cross-sectional study conducted among saudi health college students [27] , which revealed that more than half of the students had a positive attitude towards mers-cov. concerning the respondents' level of anxiety, the results indicated that nearly half (51%) of the respondents had a moderate level of anxiety and 27.70% had a high level of anxiety regarding the covid-19 outbreak. according to roy et al. [2] , fear and anxiety within a population are usually expected given the significant impact of a pandemic on the community, which could also affect the mental well-being of people and influence their behaviour in the wider community. in this study, only 27.7% of the respondents exhibited a high level of anxiety concerning covid-19, which could possibly be attributed to their level of knowledge, given that they were still experiencing the first wave of covid-19. interestingly, the current study indicated lower anxiety levels compared to other studies carried out during the outbreak, as reported by huang and zhao [28] on chinese healthcare workers and nemati et al. [29] on iranian nurses. in these studies, the results showed that the level of anxiety among healthcare workers was higher compared to other people. the high anxiety level among the hcps could be attributed to the uncontrolled nature of the pandemic and concerns about becoming infected, particularly given the shortage of healthcare institutions and ppe. 
concerning the self-reported preventive behaviours, it was found that the majority (87.70%) of respondents had a high performance level of preventive behaviours towards covid-19, which could be attributed to the adequate level of knowledge and awareness among the respondents towards covid-19. as reported in a previous study, those who had acquired adequate knowledge exhibited optimistic attitudes and appropriate, if not proactive, practices toward covid-19 [30] . another study revealed that the level of sound knowledge in a given population about covid-19 is significantly reflected in their behaviour and attitude [2] . however, the findings from the current study were somewhat lower than those of a study conducted during covid-19 by taghrir et al. [18] on medical students in iran, which found that 94.2% of the respondents showed relatively high performance in preventive behaviours toward covid-19. according to the results of this study, females were found to exhibit a higher level of performance in preventive behaviours compared to their male counterparts, possibly due to their better compliance with preventive measures towards covid-19. this result is consistent with the result of taghrir et al. [18] that females demonstrated more precautionary behaviours compared to males. notwithstanding, another key result in this study was the positive linear correlation between knowledge-attitude, knowledge-anxiety, knowledge-preventive behaviours, attitude-anxiety, attitude-preventive behaviours, and anxiety-preventive behaviours. this result confirms the relationship between the respondents' level of knowledge and their level of anxiety, attitude, and preventive measures towards covid-19. such a correlation could be explained by the theory of reasoned action (tra) [31] , which states that a person's intention to carry out a specific behaviour is determined by their attitude towards this behaviour. 
in the current study, the findings are in line with the results of other studies [20, 30, 32] showing that acquiring a good level of knowledge of covid-19 is correlated with optimistic attitudes and proper practices towards covid-19. in contrast, however, the results of this study disagree with those of nemati et al. [29] , who found that most iranian nurses reported anxiety for themselves and their families as a result of covid-19 even though the knowledge they had acquired about covid-19 was sufficient. lin et al. [24] found that the level of knowledge of covid-19 did not influence anxiety levels; however, they found that more positive attitudes were highly associated with high levels of anxiety. furthermore, in a study carried out in hong kong by leung et al. [33] , the results revealed that the level of anxiety during the sars outbreak was highly associated with behavioural responses such as wearing face masks. in a separate study, roy et al. [2] revealed that people's level of anxiety correlated with their behaviour: the results showed that under the effect of rumours, people tend to modify their behaviour positively rather than undesirably. reuben et al. [20] also reported a relationship between respondents' attitudes and their preventive behaviours. regarding the relationship between the respondents' attitudes and their preventive behaviour, rubin et al. [34] conducted a study during the swine flu outbreak, reporting a significant association between the respondents' attitude and their behavioural change (e.g. performing one or more avoidance behaviours). nevertheless, several limitations were inherent in this study, which should be addressed in future research. the first limitation concerns the nature of the data collection. 
the data in this study were collected via a web-based survey, since it was not possible to conduct a face-to-face survey among yemeni hcps given the uncertainty surrounding the outbreak of the virus and its level of contagiousness. therefore, the data may be seen as less reliable and less accountable than data from face-to-face interviews conducted by a trained interviewer. secondly, collecting the data was challenging, given the limited availability and cooperation of respondents. thirdly, the study was limited exclusively to hcps; therefore, future research should involve a more diverse community or population, employing a community-based study design. the results of this study have demonstrated that the majority of hcps in yemen had acquired an adequate level of knowledge of covid-19. however, their level of knowledge concerning situations that require wearing n95 masks and the possibility of using current antiviral drugs and antibiotics as the first-line treatment for covid-19 could be improved through training and other programmes. the moderate anxiety level revealed in this study would likely increase if the prevalence curve of the covid-19 outbreak rose and the situation became much worse. therefore, implementing preventive measures and regulation strategies to control the emotional status among hcps is recommended. in addition, organisations such as who and the ministry of public health and population in yemen must continue to provide updated information regarding covid-19 to ensure better control of covid-19. 
a novel coronavirus from patients with pneumonia in china study of knowledge, attitude, anxiety & perceived mental healthcare need in indian population during covid-19 pandemic naming the coronavirus disease (covid-19) and the virus that causes it accessed 26 who declares covid-19 a pandemic who: director-general's opening remarks at the media briefing on covid-19 who: coronavirus disease (covid-2019) situation reports geneva: world health organization cdc: interim infection prevention and control recommendations for patients with suspected or confirmed coronavirus disease 2019 (covid-19) in healthcare settings knowledge and attitude toward covid-19 among healthcare workers at district 2 hospital clinical characteristics of 138 hospitalised patients with 2019 novel coronavirus-infected pneumonia in wuhan, china who: coronavirus disease 2019 (covid-19): situation report, 82. 2020. 12. icn calls for data on healthcare worker infection rates and deaths accessed 26 covid-19 in humanitarian crisis: a double emergency covid-19 in yemen: preparedness measures in a fragile state the psychological impact of quarantine and how to reduce it: rapid review of the evidence shortage of personal protective equipment endangering health workers worldwide health care workers face a double battle -covid-19 in a conflict zone covid-19 and iranian medical students; a survey on their related-knowledge, preventive behaviours and risk perception statistical power analysis for the behavioral sciences: jacob cohen knowledge, attitude, and preventive practices toward covid-19 among bangladeshi internet users. 
electronic electron knowledge, attitude, practice, and perceived barriers among healthcare workers regarding covid-19: a cross-sectional survey from pakistan psychological distress amongst health workers and the general public during the covid-19 pandemic in saudi arabia psychological health, sleep quality, and coping styles to stress facing the covid-19 in wuhan knowledge, attitudes, impact, and anxiety regarding covid-19 infection among the public in china attitude and practice toward covid-19 among the public in the kingdom of saudi arabia: a cross-sectional study medical students and covid-19: knowledge, attitudes, and precautionary measures. a descriptive study from jordan. front public health knowledge and attitude toward middle east respiratory syndrome coronavirus among heath colleges' students in najran, saudi arabia generalised anxiety disorder, depressive symptoms, and sleep quality during covid-19 epidemic in china: a web-based crosssectional survey. medrxiv preprint assessment of iranian nurses' knowledge and anxiety toward covid-19 during the current outbreak in iran knowledge, attitudes, and practices towards covid-19 among chinese residents during the rapid rise period of the covid-19 outbreak: a quick online crosssectional survey understanding and promoting aids-preventive behavior: insights from the theory of reasoned action attitudes and practices towards covid-19: an epidemiological survey in north-central nigeria longitudinal assessment of community psychobehavioral responses during and after the 2003 outbreak of severe acute respiratory syndrome in hong kong public perceptions, anxiety, and behaviour change in relation to the swine flu outbreak: cross sectional telephone survey publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations we would like to thank all the healthcare providers who agreed to participate in this study and for their support in distributing the 
link to the questionnaire to other colleagues to participate. authors' contributions: gga, taha, and msaa were involved in the inception of the idea and study design. taha and msaa were responsible for data collection. gga supervised, and taha performed, the data analysis. gga drafted and finalised the manuscript. all the authors contributed to the interpretation of the data, reviewing and drafting the manuscript, and approving the final manuscript. this study did not receive any form of grant or financial support. data are available from the corresponding author on reasonable request. this study obtained ethical approval from the ethics committee for research of al-razi university. the participants provided their consent to participate voluntarily by answering a "yes or no" question in the online written informed consent form before they were allowed to complete the questionnaire. not applicable. the authors declare they have no competing interests. key: cord-285151-zynor0b2 authors: eisenhut, michael title: neopterin in diagnosis and monitoring of infectious diseases date: 2013-12-08 journal: j biomark doi: 10.1155/2013/196432 sha: doc_id: 285151 cord_uid: zynor0b2 neopterin is produced by activated monocytes, macrophages, and dendritic cells upon stimulation by interferon gamma produced by t-lymphocytes. quantification of neopterin in body fluids has been achieved by standard high-performance liquid chromatography, radioimmunoassays, and enzyme-linked immunosorbent assays. neopterin levels predict hiv-related mortality more efficiently than clinical manifestations. successful highly active antiretroviral therapy is associated with a decrease in neopterin levels. elevated neopterin levels were associated with hepatitis caused by hepatitis a, b, and c viruses. serum neopterin levels were found to be a predictor of response to treatment of chronic hcv infection with pegylated interferon combined with ribavirin. 
neopterin levels of patients with pulmonary tuberculosis were found to be higher in patients with more extensive radiological changes. elimination of blood donors with elevated neopterin levels, to reduce the risk of transmission of infections with known and unknown viral pathogens, has been undertaken. neopterin measurement is more cost-effective but less sensitive than screening using polymerase chain reaction based assays. in conclusion, neopterin is a nonspecific marker of an activated t-helper cell 1 dominated immune response. it may be a useful marker for monitoring of infectious disease activity during treatment and for more accurate estimation of extent of disease and prognosis. neopterin was first isolated from larvae of bees, in worker bees, and in royal jelly in 1963, and subsequently from human urine by sakurai and goto in 1967 [1] . neopterin, or 2-amino-4-hydroxy-6-(d-erythro-1′,2′,3′-trihydroxypropyl)-pteridine, is produced from guanosine triphosphate via guanosine triphosphate cyclohydrolase i (gtpch i) by activated monocytes, macrophages, dendritic cells, and endothelial cells, and to a lesser extent in renal epithelial cells, fibroblasts, and vascular smooth muscle cells, upon stimulation mainly by interferon gamma and to a lesser extent by interferon alpha and beta, with its release being enhanced by tumor necrosis factor [2, 3] . gtpch i mrna expression is synergistically and independently induced by interferon gamma through the jak2/stat pathway of nuclear transcription regulation and through tnf by the nf-kappab pathway (see figure 1 ) [4] . release in response to cytokines released by t-lymphocytes and natural killer cells makes neopterin an indicator of activation of cell-mediated immunity, including release in infections associated with activation of t-lymphocytes and natural killer cells, malignancies, autoimmune diseases, rejection of transplanted organs, and atherosclerosis. 
at its first isolation in the 1960s neopterin was detected in the pupae of bees by anion exchange chromatography followed by paper chromatography [1] . in the seventies, gas chromatographic-mass fragmentographic methods were described allowing measurement in urine. subsequently, detection and quantification of neopterin succeeded in serum, urine, and other body fluids using standard high-pressure and reverse-phase high-performance liquid chromatography with fluorescence detection. later, simpler radioimmunoassays and, more recently, enzyme-linked immunosorbent assays have been developed, which are suitable for large numbers of samples [1] . semiquantitative measurement with a dipstick system using polyclonal antineopterin antibodies has been validated and may be suitable for bedside testing and in the setting of developing countries [5] . in viral infections, elevated neopterin levels have been observed, which correlate with the activity of disease. this was first described in 1979 [6] and subsequently neopterin elevations were noted in infections with hepatitis viruses, epstein-barr virus, cytomegalovirus, and measles, mumps, varicella zoster, rubella, and influenza viruses [1, [7] [8] [9] [10] [11] . elevated neopterin levels in body fluids were found at the end of the incubation period before the onset of clinical symptoms. the highest neopterin levels occur just before specific antibodies against the virus become detectable, which is about two to four weeks after the onset of increased neopterin production. in acute varicella zoster virus infection, peak neopterin levels were observed at the end of the appearance of the rash, and in measles virus infection one to three days after appearance of the rash [12, 13] . immunisation with live viruses, for example with measles, mumps, and rubella virus vaccine, resulted in a significant increase of neopterin independent of the presence of any symptoms. 
in measles vaccination, neopterin levels were observed to rise at a median of 5 days after vaccination, about 7 days before the appearance of antibodies [13] . these investigations point to a future application of neopterin measurements as a correlate of successful vaccination. neopterin should be investigated as a marker to evaluate the protective efficacy of vaccines stimulating cell-mediated immunity against mycobacterial, parasitic, or viral diseases. the magnitude of the elicited neopterin levels could be related to the incidence of the disease immunised against in the population of immunised children. serum neopterin levels were also found to be significantly elevated in symptomatic dengue virus infections, with levels higher than in measles and influenza virus disease [14] . levels correlated with duration of fever and severity of disease [14, 15] . investigations into the physiological functions of neopterin in viral infections revealed that it is able to delay the development of the cytopathic effect of coxsackie b5 virus in hep-2 cells [16] . a proposed mechanism is the stimulation of inducible nitric oxide synthase expression leading to an increase in nitric oxide production. other mechanisms include the induction of the translocation of nuclear factor-kappa b to the nucleus. in hiv infection, testing of 328 samples of 29 hiv-infected individuals found that 44/68 (64.7%) of samples which were hiv-1 rna and p24 antigen positive had elevated neopterin levels (>10 nmol/l), while 6/216 (2.8%) samples which were both hiv-1 rna and p24 antigen negative had elevated neopterin levels [17] . neopterin levels were also found to be significantly elevated in hiv-2 infection compared to controls [18] . studies investigated markers of immune activation for their usefulness as prognostic markers in hiv infection and showed an increase of neopterin levels in people with hiv infection compared to patients without hiv infection [19] [20] [21] [22] . 
neopterin levels were hereby found to increase early in the course of hiv infection, preceding cd4+ t-cell decline and clinical manifestations of aids [23, 24] . plasma neopterin levels were found to correlate with plasma hiv viral load [25] . neopterin levels were found to predict hiv-related mortality [26, 27] . a retrospective study compared β2-microglobulin, immunoglobulin a, g, and m, adenosine deaminase, and neopterin levels above the normal range as predictors of clinical or immunological deterioration in 256 patients with hiv infection. changes in β2-microglobulin levels showed the greatest sensitivity to detect worsening (43%), with neopterin slightly less sensitive (41.9%), followed by immunoglobulin levels (26.8-35.2%) and adenosine deaminase levels, which with 21.8% had the lowest sensitivity [28] . as a surrogate marker for viral load to monitor response to antiretroviral treatment, a landmark study compared the effects of dual reverse transcriptase inhibitor (rt) therapy and highly active antiretroviral therapy (haart) on neopterin levels in patients with hiv infection with hiv-uninfected controls, hiv-infected patients not on treatment, and patients who had stopped treatment [22] . rt inhibitor treatment decreased circulating levels of neopterin (mean of 15.6 for treated versus a mean of 22.3 ng/ml for untreated hiv patients, p < 0.04). haart decreased neopterin levels significantly further. this confirmed results of a previous study on the effects of haart on neopterin levels [29] . neopterin levels in patients who discontinued haart became similar to those of untreated hiv patients. neopterin may be a particularly useful surrogate marker for monitoring of control of hiv replication in settings in developing countries where hiv rna viral load measurement is not available and may be a cheaper alternative, particularly if semiquantitative dipstick tests are used for urine samples [5] . 
longitudinal serial measurements in the same individual could overcome difficulties with interpretation in settings where chronic parasitic (malaria) or bacterial (tuberculosis) infections may elevate the baseline neopterin level, and could allow monitoring of response to antiretroviral treatment in the absence of resistance testing and provide a means to monitor compliance in the outpatient setting (see table 1 for a list of diseases in which neopterin levels have been used to monitor treatment response; for example, in schistosoma mansoni infection, blood serum levels normalized on praziquantel treatment [43, 44] ). hepatitis. the first study investigating the role of neopterin in specific forms of viral hepatitis tested urinary levels in patients with hepatitis a, hepatitis b, and non-a, non-b hepatitis virus infection [9] . the authors noted that of 51 patients with acute viral hepatitis, 49 patients had elevated urinary neopterin levels, with the highest levels found in patients with acute hepatitis a. while all patients with active hepatitis b had elevated neopterin levels, 49/62 hbsag carriers (77%) had normal urinary neopterin levels. the authors noted that neopterin levels were not a reflection of hepatocellular damage, as 3 patients with alcoholic hepatitis had normal urinary neopterin levels. in order to address the question whether neopterin is a useful marker for early detection of viral infection in donated blood products before seroconversion, one study investigated neopterin levels in anti-hcv-negative specimens which were hcv rna and hcv core antigen positive. the investigators found that 8/217 (3.7%) had elevated neopterin levels (>10 nmol/l), and 4/115 (3.5%) specimens positive for hbv dna had elevated neopterin levels [17] . 
in 106 patients with thalassemia major receiving multiple blood transfusions, significantly more patients with histologically proven chronic hepatitis (19/21 were anti-hcv antibody positive) had elevated blood neopterin levels compared to patients with siderosis of the liver [45] . alanine aminotransferase levels in hcv-infected persons correlated significantly with neopterin levels. serum neopterin levels were found to be a useful predictor of response to treatment of chronic hcv infection with pegylated interferon combined with ribavirin. neopterin concentrations were evaluated in 260 hcv patients treated with pegylated interferon combined with ribavirin. mean and median pretreatment neopterin concentrations were lower in patients with a sustained virological response than in nonresponders. the rate of response was twofold higher among patients with pretreatment neopterin levels <16 nmol/l than in patients with neopterin levels ≥16 nmol/l, even after controlling for hcv genotype status [38] . a recent study investigated specifically whether serum neopterin levels can discriminate between patients with replicative (n = 30) and nonreplicative (n = 25) hbv carriage [46] . replicative hbv carriage was defined as hbv dna >5 pg/ml by hybrid capture system. neopterin levels had a mean of 14.5 nmol/l in replicative versus 8.8 nmol/l in nonreplicative hbv carriers (p < 0.05). this result was not reproducible in another study, which found that in patients with replicative hbv infection (n = 30) the mean serum neopterin level was 24.73 nmol/l and in nonreplicative hbv infection (n = 30) 14.8 nmol/l, a difference which was not statistically significant [47] . this may have been due to large standard deviations and small numbers in the groups. a more recent investigation found that in chronic hepatitis the mean ± sd serum neopterin level was 14.2 ± 5.6 nmol/l, compared with 20.3 ± 7.9 nmol/l in patients with liver cirrhosis and 5.2 ± 1.4 nmol/l in the control group. 
serum neopterin levels were significantly higher in patients with chronic hepatitis (p = 0.005) and cirrhosis (p = 0.008) than in control subjects. cirrhotic patients had significantly higher serum neopterin levels than patients with chronic hepatitis (p = 0.004). there was a positive correlation between serum neopterin levels and alanine aminotransferase levels in patients with chronic hepatitis (r = 0.41, p = 0.004) and cirrhotic patients (r = 0.39, p = 0.005). positive correlations were detected between serum neopterin levels and inflammatory score in patients with chronic hepatitis (r = 0.51, p = 0.003) and cirrhotic patients (r = 0.49, p = 0.001) [48] . respiratory infections. neopterin has been investigated as a marker to distinguish viral from bacterial lower respiratory tract infections (lrti). the investigators found that serum neopterin levels were elevated (>10 nmol/l) in 96% of patients with viral lrti. the median serum neopterin concentration was almost 2-fold higher in the viral lrti group than in bacterial lrti patients (30.5 versus 18.7 nmol/l) and 5-fold higher than in healthy controls. the specificity for correct identification of viral lrti was 69.5% for a cut-off of >15 nmol/l [49] . serial monitoring of serum neopterin levels in patients with severe acute respiratory syndrome (sars) associated virus revealed that all (n = 129) investigated patients had elevated neopterin levels by day 9 [50] . duration of pyrexia in sars patients correlated positively with neopterin levels. patients on steroid therapy had significantly lower neopterin levels. measurement of neopterin in isolation and in relationship to other inflammatory markers like procalcitonin and c-reactive protein was investigated for discriminatory power between viral and bacterial lower respiratory tract infections. investigators used the crp/neopterin ratio (c/n ratio) to discriminate between viral and bacterial etiology of respiratory tract infections. 
in a study conducted in hong kong, sera obtained on the day of hospitalization for lrti from 139 patients with confirmed bacterial etiology and 128 patients with viral etiology were examined. a further 146 sera from healthy chinese subjects with no infection were included as controls. the area under the receiver operating characteristic (roc) curve (area under curve [auc]) for distinguishing bacterial from viral infections was 0.838 for crp and 0.770 for pct. the auc for distinguishing viral from bacterial infections was 0.832 for neopterin. when the markers were used in combination, the auc of the roc curve for the c/n ratio was 0.857, whereas for (crp × pct)/neopterin it was 0.856 [49]. in a subsequently reported study, the median of the c/n ratio was 10 times higher in patients with bacterial aetiology than with viral aetiology (12.5 versus 1.2 mg/nmol; p < 0.0001) and 42 times higher than in healthy subjects (12.5 versus 0.3 mg/nmol; p < 0.0001). the area under the receiver operator characteristic curve for the c/n ratio was 0.840. a cut-off value of "c/n ratio >3" for ruling in/out bacterial/viral infection yielded optimal sensitivity and specificity of 79.5% and 81.5%, respectively [51]. early studies showed elevated neopterin levels in cerebrospinal fluid (csf) of patients with aseptic meningitis and herpes simplex and measles virus encephalomyelitis [52-54]. csf levels of neopterin seem to reflect intrathecal production by microglia, as pterins have a low permeability across the blood-brain barrier with a serum-to-csf distribution at a quotient of 1/40 [55]. it has recently been established that normal csf neopterin is brain-derived. the interindividual variation of csf neopterin in healthy adults was found not to depend on serum neopterin concentration variation (coefficient of variation, cv-csf = 9.7% < cv-serum = 24.5%).
additionally, individual normal csf neopterin concentrations were found to be invariant to the variation of the albumin quotient, qalb; that is, csf neopterin does not derive from leptomeninges [56]. patients with viral meningitis had elevated csf neopterin levels compared to healthy controls but normal serum levels [54]. csf neopterin levels correlated with the csf monocytic cell count. patients with various forms of encephalitis, including those caused by herpes simplex virus, varicella zoster virus, and tick-borne encephalitis virus, had significantly elevated csf neopterin levels compared to controls and higher levels than in patients with viral meningitis, without overlap of levels in the two conditions. in hiv infection there was a clear relationship between the severity of aids-related dementia and csf neopterin levels [12, 30-32]. higher csf hiv viral loads were associated with higher csf neopterin levels [33]. after commencement of combination antiretroviral therapy (art), csf neopterin decreased markedly but remained slightly above normal levels in a substantial number of patients despite several years of receiving art [32, 34-36]. even patients with systemic virological failure exhibit a substantial reduction of csf neopterin concentrations, though above that of virologically suppressed patients [37]. in patients on combination art, the lowest csf neopterin levels have been found in patients with the lowest csf viral loads (<2.5 copies/ml) [32]. no significant difference in csf neopterin concentrations was found between those treated with protease inhibitor- and nonnucleoside reverse transcriptase-based regimens in combination with 2 nucleoside analogues [57]. this would support the idea that viral replication within or close to the csf is, at least to some extent, driving the inflammatory response.
it has also been suggested that an inflammatory response, once triggered, may lead to a self-sustaining state of cellular activation, as has been seen in patients with herpes simplex virus type-1 encephalitis [58]. findings in this study are consistent with these reports. hiv rna levels measured in csf or plasma were not significantly associated with csf neopterin trajectories. in addition, all study participants had experienced virologic control to the limit of standard detection as a result of their treatment, and csf neopterin levels were the only factor strongly associated with subsequent decay rates and the ultimate set-point levels [32]. patients with bacterial infections with species other than mycobacteria showed significantly lower urinary neopterin levels compared to patients with viral infections in one study [59] but no statistically significant difference in a more recent study [60]. within the group of bacterial infections it was shown that patients with symptoms for at least 5 days had significantly higher neopterin concentrations than patients with acute illness. this applied particularly to bacterial pneumonia. patients with urinary tract infections were found to have urinary neopterin levels similar to those of patients with viral infections; data were available for urinary but not serum concentrations. thus it remains unclear whether local production of neopterin takes place in urinary tract infections while serum neopterin stays low. among patients with febrile neutropenia and underlying haematological and oncological conditions, there was no significant difference in neopterin levels between gram-negative and gram-positive infections [61].
in patients on an intensive care unit with sepsis and septic shock, urinary neopterin/creatinine ratios were found to be significantly higher compared to patients with other forms of systemic inflammatory response syndromes [62], and serum neopterin levels were higher in nonsurvivors compared to survivors of sepsis, while multiorgan failure scores correlated with neopterin levels [63-66]. in this context it was, however, noted that neopterin levels correlated negatively with renal function, reflecting the reduced excretion of neopterin in renal failure. future studies could correct for reduced excretion due to reduced renal function by calculation of the serum neopterin/creatinine ratio. investigations on critically ill patients on intensive care units evaluated neopterin levels as a tool to discriminate patients with systemic inflammatory response syndrome with and without infectious etiology. neopterin levels were found to have a specificity of 78% for discriminating infectious and noninfectious etiology of critical illness [66]. bacterial meningitis was associated with both elevated serum and csf neopterin levels compared to controls [55]. in lyme neuroborreliosis, a late complication of infection by the tick-borne spirochete borrelia burgdorferi, high neopterin concentrations were found in the csf of patients, whereas serum neopterin levels were not markedly increased, confirming intrathecal neopterin production [67]. infection with treponema pallidum subsp. pallidum (syphilis) was not associated with elevated neopterin levels [18]. in melioidosis caused by pseudomonas pseudomallei, neopterin concentrations were found to be significantly higher than in controls [12]. in brucellosis, neopterin levels (mean 52.5 mmol/ml) were significantly higher than in healthy controls and patients with tuberculosis [68].
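the creatinine correction suggested above amounts to a one-line calculation: because neopterin and creatinine are both renally cleared, dividing one by the other partly cancels the effect of impaired excretion. a minimal sketch, with the function name and example values my own invention:

```python
# Sketch of a serum neopterin/creatinine ratio, the correction
# proposed above for reduced renal excretion. Values are illustrative.

def neopterin_creatinine_ratio(neopterin_nmol_l: float,
                               creatinine_umol_l: float) -> float:
    """nmol/L over umol/L gives mmol neopterin per mol creatinine."""
    return neopterin_nmol_l / creatinine_umol_l

# A raised neopterin of 30 nmol/L looks less alarming once a raised
# creatinine of 300 umol/L (renal failure) is taken into account: the
# ratio matches that of a hypothetical healthy baseline.
corrected = neopterin_creatinine_ratio(30.0, 300.0)   # 0.1 mmol/mol
baseline = neopterin_creatinine_ratio(8.0, 80.0)      # 0.1 mmol/mol
```

in this invented example the identical ratios suggest the raw neopterin elevation is attributable to retention rather than immune activation, which is exactly the confounder the correction is meant to address.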
in leprosy caused by mycobacterium leprae, 75% of patients with tuberculoid and lepromatous leprosy presented with elevated urinary neopterin excretion [69]. on the basis of in vitro and in vivo data showing that macrophages release neopterin in response to stimulation by t-lymphocytes [70, 71], fuchs et al. [39] investigated urinary neopterin levels by hplc in 55 patients with culture-confirmed pulmonary tuberculosis and compared them with 417 normal controls. 83% of patients had levels above the upper tolerance limit (containing, with 95% probability, 97.5% of healthy controls). neopterin levels were higher than in age- and gender-matched controls for every extent of pulmonary disease and correlated with its extent. the correlation with extent of pulmonary tuberculosis was also demonstrated for serum levels and levels found in bronchoalveolar lavage fluid [72]. subsequent studies showed higher levels of neopterin in serum and pleural effusions of patients with pulmonary tuberculosis compared to controls [73, 74]. peripheral blood mononuclear cells (pbmnc) from tuberculosis patients showed a significantly higher spontaneous production of neopterin. stimulation with phytohaemagglutinin or purified protein derivative did not yield higher neopterin production in pbmnc of patients with tuberculosis, showing that it is not the production capacity for neopterin which is different [74]. elevated serum neopterin levels were also found in hiv-infected patients with tuberculosis and decreased significantly on antituberculous treatment [40]. in 2 cases, a relapse of tuberculosis was characterized by an increase in neopterin levels. further studies compared neopterin levels in urine, serum, and bronchoalveolar lavage fluid and found that they correlate significantly in patients with tuberculosis [72, 75]. the elevation of neopterin in patients with tuberculosis was more pronounced in urine than in serum or bronchoalveolar lavage [75].
patients with pulmonary tuberculosis had significantly higher urinary neopterin levels compared to patients with lung cancer or pneumonia, with more than twice the concentration reported in adults [74]. pleural fluid neopterin levels were investigated for their ability to differentiate between tuberculous and malignant pleural effusion and were found to be significantly higher in patients with pleural tuberculosis, but performance characteristics, including receiver-operating characteristic curve analysis, were inferior to adenosine deaminase [76]. waiting for the results of susceptibility tests to select an effective antituberculosis drug regimen often causes a delay in effective treatment, which can be disastrous, especially in children. because the x-ray changes tend to resolve very slowly and may even worsen after starting therapy because of a paradoxical immune-reconstitution reaction, even in otherwise immune-competent patients, there is no reliable parameter other than clinical status to reflect success or failure of the drug regimen [77]. urinary neopterin levels declined on twice-weekly measurements in all monitored patients with pulmonary tuberculosis on treatment and fell to below tolerance limits within 10 weeks of treatment in 6/10 patients [39]. measurement of serum neopterin levels in patients on treatment for microbiologically confirmed pulmonary tuberculosis confirmed this observation and showed a significant decline to near normal levels within 6 months of treatment [41]. in the context of emerging multiple drug resistance and difficulties in monitoring compliance and drug absorption, neopterin needs to be explored as a tool for monitoring success in treatment of mycobacterium tuberculosis infection. it may also help to distinguish active from latent disease. people with hiv-m. tuberculosis coinfection and active tuberculosis responded to antituberculous treatment with a reduction of plasma neopterin, but neopterin levels remained above the baseline levels of hiv-negative tuberculosis patients, and levels were higher in patients with lower cd4 counts [78]. the first study of neopterin levels in parasitic infections included measurements of urinary neopterin by hplc in patients with plasmodium falciparum and vivax infections, including patients with low-grade parasitemia [7]. all patients had elevated urinary neopterin levels compared to uninfected controls, to a level of 664 to 5189 micromol neopterin/mol creatinine. levels in patients treated with quinine sulphate and levels in untreated patients were not significantly different. a subsequent detailed interventional study provided data on urinary neopterin levels in volunteers experimentally infected with plasmodium falciparum [79]. serial monitoring revealed that urinary neopterin levels were not elevated until peripheral blood parasite densities had increased through 3 to 4 cycles of intraerythrocytic schizogony. a sharp rise in urinary neopterin was detectable at the beginning of day 14 after infection. there was an increase one day after onset of fever. in one patient a urinary neopterin increase was noted without the occurrence of fever. neopterin production in falciparum malaria seems to be a direct effect of plasmodial antigens on monocytes/macrophages. in vitro studies showed that the monocytic cell line u937 could be stimulated to produce neopterin with lysates of plasmodium falciparum-parasitized human erythrocytes and recombinant p. falciparum proteins [80]. at a cut-off point of 10.0 ng/ml, neopterin had a positive and negative predictive value of 0.38 and 0.98 for detection of severe falciparum malaria [81]. chloroquine treatment was followed by a reduction of urinary neopterin levels.
when clinical disease resolved within 3-7 days of treatment, neopterin levels normalized rapidly [42]. neopterin levels in nonimmune patients and young children were higher than those of semiimmune individuals. csf neopterin levels were investigated for their ability to discriminate between different stages of cerebral trypanosoma (t.) brucei (b.) gambiense infection. in an investigation of 512 t. b. gambiense patients originating from angola, chad, and the democratic republic of the congo, csf igm and neopterin were the best markers for discriminating between the two stages (s1 and s2) of disease, with 86.4% and 84.1% specificity, respectively, at 100% sensitivity. when a validation cohort (412 patients) was tested, neopterin (14.3 nmol/l) correctly classified 88% of s1 and s2 patients, confirming its high staging power [82]. serum neopterin was also assessed as a disease marker in human schistosoma mansoni infection, and levels were found to reflect the extent of hepatic involvement, with higher levels found in patients with hepatomegaly. treatment with praziquantel led to a normalisation of serum neopterin levels as a result of a reduction of egg-induced immunopathology [43, 44, 83]. the detection of new blood-borne viruses, including hiv and non-a, non-b hepatitis viruses, led to investigations into new ways of excluding transmission of blood-borne viruses by transfusion of blood products. the government of tirol in austria introduced routine measurement of neopterin levels in all donated blood in 1986. a cut-off of 10 nmol/l was used and led to the exclusion of 1.6% of donors (total number of donors = 76587). the most common cause of elevated levels, accounting for 123 cases (67%), was a viral respiratory tract infection. 6 donors with elevated neopterin had an acute toxoplasmosis and 4 had hiv infection or non-a, non-b hepatitis [84]. in another study, 5.26% of 1767 donations with increased neopterin levels were positive for cmv igm, indicating acute infection.
0.3% of patients with low neopterin levels had cmv igm. seroconversion was detected on a second serum sample in 10 patients with initially elevated neopterin levels, indicating that neopterin may precede the appearance of cmv antibodies by 2-4 weeks [85]. a further study of austrian blood donors showed that neopterin levels were significantly higher in early infection compared to late infection or the carrier state. all early infections (seroconversions) had elevated neopterin levels, while only 17% of late infections and carrier states did [86]. a recent study using a neopterin elisa found that 61% of cmv dna-positive samples had elevated neopterin levels [87]. with regard to other viruses, 5.5% of donors with above-normal neopterin had epstein-barr virus igm, corresponding to an almost threefold greater chance of acute ebv infection in donors with increased neopterin (odds ratio, 2.85; 95% confidence interval, 1.5-5.6). with regard to parvovirus b19 infection, 73/1060 (6.9%) donors were found to be seropositive for parvovirus b19 igm [88]. a later study by the same group found no hpv dna-positive results amongst 1200 patients with normal neopterin levels [89]. an investigation of the association of neopterin levels with chronic hepatitis c virus infection revealed that significantly more patients with elevated neopterin levels and hcv antibodies were pcr positive for hcv rna (odds ratio, 3.76; p = 0.002) [90]. neopterin is a nonspecific marker of activated cell-mediated immunity involving release of interferon gamma. neopterin may be a useful marker for more accurate estimation of the extent of disease and hence prognosis. knowledge of all potential causes of its elevation can overcome problems with reduced specificity in a patient known to have a specific infectious disease.
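odds ratios with confidence intervals like those quoted above are computed from a 2x2 table; a minimal sketch of the standard calculation with a wald interval follows (the counts are invented for illustration, not the donor-study data):

```python
import math

# Odds ratio with a Wald 95% confidence interval from a 2x2 table:
#   a = marker-positive with infection, b = marker-positive without,
#   c = marker-negative with infection, d = marker-negative without.
# The example counts are invented for demonstration.

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

or_, lo, hi = odds_ratio_with_ci(20, 80, 10, 90)   # OR = (20*90)/(80*10) = 2.25
```

the interval is symmetric on the log scale, which is why published intervals like 1.5-5.6 around an odds ratio of 2.85 are asymmetric on the raw scale.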
longitudinal serial measurements in the same individual could overcome difficulties with interpretation in settings where chronic parasitic (malaria) or bacterial (tuberculosis) infections may elevate the baseline neopterin level, could allow monitoring of response to antiretroviral, antituberculous, and antiparasitic treatment in the absence of resistance testing, and could provide a means to monitor compliance in the outpatient setting (see table 1 ). this is particularly important in the current context of emerging multiple drug resistance of hiv and mycobacterium tuberculosis. neopterin, for which high-quality elisa systems to measure urine and blood levels are commercially available, is an underused marker in clinical practice and is suitable for introduction into routine clinical laboratory practice. the author declares that there is no conflict of interest regarding the publication of this paper. neopterin measurement in clinical diagnosis potential role of immune system activation-associated production of neopterin derivatives in humans cytokine-stimulated gtp cyclohydrolase i expression in endothelial cells requires coordinated activation of nuclear factor-κb and stat1/stat3 simple dipstick assay for semi-quantitative detection of neopterin in sera erhoehte ausscheidung von neopterin im harn von patienten mit malignen tumoren und mit viruserkrankungen urinary neopterin is elevated in patients with malaria clinical presentation of cmv infection in solid organ transplant recipients and its impact on graft rejection and neopterin excretion urinary neopterin levels in acute viral hepatitis immune activation during measles: beta 2-microglobulin in plasma and cerebrospinal fluid in complicated and uncomplicated disease neopterin levels during acute rubella in children the role of neopterin as a monitor of cellular immune activation in transplantation, inflammatory, infectious, and malignant diseases neopterin excretion during incubation period.
clinical manifestation and reconvalescence of viral infection detection of serum neopterin for early assessment of dengue virus infection a preliminary study of neopterin as a potential marker for severe dengue virus infection influence of neopterin and 7,8-dihydroneopterin on the replication of coxsackie type b5 and influenza a viruses neopterin levels during the early phase of human immunodeficiency virus, hepatitis c virus, or hepatitis b virus infection immune stimulation by syphilis and malaria in hiv-2-infected and uninfected villagers in west africa markers predicting progression of human immunodeficiency virus-related disease prognostic significance of plasma markers of immune activation, hiv viral load and cd4 t-cell measurements the prognostic significance in hiv infection of immune activation represented by cell surface antigen and plasma activation marker changes highly active antiretroviral therapy (haart) and circulating markers of immune activation: specific effect of haart on neopterin serum neopterin changes in hiv-infected subjects: indicator of significant pathology, cd4 t cell changes, and the development of aids increased immune activation precedes the inflection point of cd4 t cells and the increased serum virus load in human immunodeficiency virus infection predicting clinical progression or death in subjects with early-stage human immunodeficiency virus (hiv) infection: a comparative analysis of quantification of hiv rna, soluble tumor necrosis factor type ii receptors, neopterin, and 2 -microglobulin serum neopterin level predicts hiv-related mortality but not progression to aids or development of neurological disease in gay men and parenteral drug users are plasma biomarkers of immune activation predictive of hiv progression: a longitudinal comparison and analyses in hiv-1 and hiv-2 infections? 
2 -microglobulin and immunoglobulins are more useful markers of disease progression in hiv than neopterin and adenosine deaminase reduction of viral load and immune complex load on cd4+ lymphocytes as a consequence of highly active antiretroviral treatment (haart) in hivinfected hemophilia patients neopterin concentrations in cerebrospinal fluid and serum of individuals infected with hiv-1 levels of human immunodeficiency virus type 1 rna in cerebrospinal fluid correlate with aids dementia stage csf neopterin decay characteristics after initiation of antiretroviral therapy central nervous system immune activation characterizes primary human immunodeficiency virus 1 infection even in participants with minimal cerebrospinal fluid viral burden continuing intrathecal immunoactivation despite two years of effective antiretroviral therapy against hiv-1 infection cerebrospinal fluid neopterin: an informative biomarker of central nervous system immune activation in hiv-1 infection immune activation of the central nervous system is still present after >4 years of effective highly active antiretroviral therapy treatment benefit on cerebrospinal fluid hiv-1 levels in the setting of systemic virological suppression and failure neopterin as a marker of response to antiviral therapy in hepatitis c virus patients neopterin as an index of immune response in patients with tuberculosis neopterin, 2 -microglobulin, and acute phase proteins in hiv-1-seropositive and -seronegative zambian patients with tuberculosis serum interleukin-2 and neopterin levels as useful markers for treatment of active pulmonary tuberculosis neopterin as marker for activation of cellular immunity: immunologic basis and clinical application liver involvement in human schistosomiasis mansoni. 
regression of immunological and biochemical disease markers after specific treatment praziquantel in the treatment of hepatosplenic schistosomiasis: biochemical disease markers indicate deceleration of fibrogenesis and diminution of portal flow obstruction neopterin as a marker of c hepatitis in thalassaemia major serum neopterin levels in patients with replicative and nonreplicative hbv carriers serum neopterin levels in patients with hbv infection at various stages serum neopterin levels in children with hepatitis-b-related chronic liver disease and its relationship to disease severity value of serum procalcitonin, neopterin, and c-reactive protein in differentiating bacterial from viral etiologies in patients presenting with lower respiratory tract infections serum neopterin for early assessment of severity of severe acute respiratory syndrome diagnostic utility of crp to neopterin ratio in patients with acute respiratory tract infections intrathecal production of neopterin in aseptic meningo-encephalitis and multiple sclerosis immune activation during measles: interferon-and neopterin in plasma and cerebrospinal fluid in complicated and uncomplicated disease role of il-6 and neopterin in the pathogenesis of herpetic encephalitis cerebrospinal fluid neopterin concentrations in central nervous system infection cerebrospinal fluid neopterin is brain-derived and not associated with blood-csf barrier dysfunction in noninflammatory affective and schizophrenic spectrum disorders persistent intrathecal immune activation in hiv-1-infected individuals on antiretroviral therapy persistent intrathecal immune activation in patients with herpes simplex encephalitis value of urinary neopterin in the differential diagnosis of bacterial and viral infections evaluation of procalcitonin and neopterin level in serum of patients with acute bacterial infection evaluation of procalcitonin, neopterin, c-reactive protein, il-6 and il-8 as a diagnostic marker of infection in patients 
with febrile neutropenia neopterin as a prognostic biomarker in intensive care unit patients the value of neopterin and procalcitonin in patients with sepsis d-erythroneopterin plasma levels in intensive care patients with and without septic complications course of immune activation markers in patients after severe multiple trauma procalcitonin and neopterin as indicators of infection in critically ill patients neopterin production and tryptophan degradation in acute lyme neuroborreliosis versus late lyme encephalopathy assessment of diagnostic enzyme-linked immunosorbent assay kit and serological markers in human brucellosis is neopterin-a marker of cell mediated immune response, helpful in classifying leprosy immune responseassociated production of neopterin. release from macrophages primarily under control of interferon neopterin as a new biochemical marker for diagnosis of allograft rejection. experience based upon evaluation of 100 consecutive cases bal neopterin. a novel marker for cell-mediated immunity in patients with pulmonary tuberculosis and lung cancer neopterin in tuberculous and neoplastic pleural fluids neopterin as a marker for cell-mediated immunity in patients with pulmonary tuberculosis urinary neopterin measurement as a non-invasive diagnostic method in pulmonary tuberculosis pleural fluid neopterin levels in tuberculous pleurisy neopterin levels and pulmonary tuberculosis in infants incomplete immunological recovery following anti-tuberculosis treatment in hiv-infected individuals with active tuberculosis urinary neopterin in volunteers experimentally infected with plasmodium falciparum malaria antigene stimulate neopterin secretion by pbmc and u937 celts neopterin and procalcitonin are suitable biomarkers for exclusion of severe plasmodium falciparum disease at the initial clinical assessment of travellers with imported malaria csf neopterin as marker of the meningo-encephalitic stage of trypanosoma brucei gambiense sleeping sickness liver 
involvement in human schistosomiasis mansoni. assessment by immunological and biochemical markers serum-neopterinbestimmung zur zusaetzlichen sicherung der bluttransfusion neopterin screening and acute cytomegalovirus infections in blood donors acute cytomegalovirus infections in blood donors are indicated by increased serum neopterin concentrations high prevalence of cytomegalovirus dna in plasma samples of blood donors in connection with seroconversion increased prevalence of igm antibodies to epstein-barr virus and parvovirus b19 in blood donations with above-normal neopterin concentration human parvovirus b19 detection in asymptomatic blood donors: association with increased neopterin concentrations association between chronic hepatitis c virus infection and increased neopterin concentrations in blood donations key: cord-312418-e4g5u1nz authors: melillo, alessandro title: rabbit clinical pathology date: 2007-09-18 journal: j exot pet med doi: 10.1053/j.jepm.2007.06.002 sha: doc_id: 312418 cord_uid: e4g5u1nz with rabbit patients, as in other species, analyzing blood and urine samples can be useful and informative, although interpretation of the results is sometimes challenging. this article summarizes the interpretation of laboratory results from rabbits. hematological parameters can yield information about the red blood cell population and leukocyte response to stress and pathogens. biochemistry evaluation can be used to investigate liver, kidney, and other organ function, and urinalysis results may yield additional information about kidney function and electrolyte imbalances. serological tests are available for several pathogens of rabbits, including encephalitozoon cuniculi, although the significance of positive results and antibody titers is not clear. serum protein electrophoresis aids the understanding of protein disorders and the immune response to acute and chronic inflammation. rabbits can mask signs of illness or show few or confusing clinical signs.
additional information may be gained from laboratory tests, and in-house analyzers can provide a complete profile with a small volume of the patient's blood. unfortunately, most of the published data on rabbit hematology and biochemistry values are descriptions of the effects of toxins on hematological and biochemical parameters of laboratory rabbits. there is little information available that describes the effect of clinical disease on the blood parameters of companion rabbits, or on the use of blood tests as diagnostic and prognostic indicators. the lack of biochemical data for pet rabbits is changing as practitioners collect information and researchers are more cognizant of diagnostic and prognostic hematologic indicators. the blood volume of a healthy rabbit is approximately 55 to 65 ml/kg, and 6% to 10% of the blood volume may be safely collected. many sites are described for blood collection in rabbits. cardiac puncture is used in laboratory rabbits but is not recommended for pet animals. both the marginal ear vein and central ear artery are readily visible, but they may be difficult to access in some patients. collecting a sufficient volume of blood from these sites to perform all of the desired clinical tests may also be difficult, especially from dwarf breeds with small ears. sampling from ear vessels can occasionally result in thrombosis and subsequent avascular necrosis of parts of affected pinna tissue. blood may be collected from the cephalic vein, which is straight and easily accessible, but, because of the short antebrachium, occlusion of the vessel by encircling the limb at the elbow is difficult. the cephalic vein is also small and easily collapses. the jugular veins are large and allow for ample-sized blood volumes to be collected, but jugular phlebotomy can be stressful for rabbits and may require chemical restraint. the dewlap may interfere with jugular vein visualization, especially in obese does.
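the figures just quoted (blood volume roughly 55 to 65 ml/kg, of which 6% to 10% may be safely collected) translate into a quick bedside calculation; a sketch, with the function name my own:

```python
# Safe phlebotomy volume for a rabbit, from the figures quoted above:
# total blood volume ~55-65 mL/kg, of which 6%-10% may be collected.
# The function name and example weight are illustrative.

def safe_sample_volume_ml(weight_kg: float):
    """Return (conservative, upper) safe sample volumes in mL."""
    low = weight_kg * 55.0 * 0.06    # smallest blood volume, smallest fraction
    high = weight_kg * 65.0 * 0.10   # largest blood volume, largest fraction
    return low, high

low, high = safe_sample_volume_ml(2.0)   # a 2 kg rabbit: ~6.6 to ~13.0 mL
```

in practice the conservative end of the range is the sensible target, particularly for small or debilitated patients.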
an accessible and efficient site for blood sampling in rabbits is the lateral saphenous vein (fig 1) . most biochemical parameters of rabbits can be measured from serum or plasma. rabbit blood clots easily at room temperature and will coagulate quickly if not mixed with anticoagulant during collection. heparin is a suitable anticoagulant because it does not alter biochemical parameters, even though the anticoagulant/blood ratio is not always optimal. hemolysis can be prevented by letting the blood drop from the needle into the tube, but often, only a few drops can be collected before the blood clots. the technique can be improved by heparinizing syringes and needles through aspiration of a few drops of heparin into the syringe, then removing the excess with injection pressure. the small amount of heparin remaining in the needle prevents clotting without significant alterations in biochemical parameters. it is important to make several air-dried blood smears at the time of venipuncture before the anticoagulant in the tube and transport can modify red and white cell morphology. most of the standard blood stains work well. automated flow cytometry is reliable for analyzing most hematological parameters for rabbit patients. age, sex, breed, and circadian rhythms all affect hematological and biochemical parameters in rabbits. rabbits under 12 weeks of age have lower red blood cell (rbc) and white blood cell (wbc) counts. the total wbc and lymphocyte counts are lowest in the late afternoon and evening, when the heterophils and eosinophil counts rise. urea and cholesterol levels tend to increase at the end of the day. stress can alter many different hematological parameters (e.g., blood glucose). prolonged stress, such as transportation, unfamiliar noises, smell, chronic pain, and poor environment, can induce heterophilia, lymphopenia, and leukocytosis. 
simple handling does not induce this response, but several muscle enzymes, including lactate dehydrogenase (ldh), aspartate aminotransferase (ast), and creatine kinase, rise after physical restraint, especially if the rabbits are fractious or unfamiliar with handling. sedation or general anesthesia can be helpful. isoflurane anesthesia does not appear to affect blood parameters in rabbits. hemolysis can induce several artifacts, such as decreased rbc and amylase values and increased ldh, ast, creatine kinase, total protein, and potassium levels. in-house analyzers can be very sensitive to hemolysis, thereby altering the true blood parameters of the patient. hematology results of rabbits can be difficult to interpret. most reference ranges are from experimental laboratory studies run on homogeneous groups of rabbits of the same breed, strain, age, and environmental conditions. this can be very different from the clinician's situation of dealing with a heterogeneous population of pet rabbits. many texts amalgamate reference ranges from different sources to create ranges so wide that they include almost any result. another problem is that healthy pet rabbits are very hard to find, so samples from rabbits with an apparently acute condition may also show changes due to an underlying chronic problem, such as malnutrition, improper husbandry, or subclinical disease. for example, harcourt-brown and baker 1 showed that rabbits that were caged, fed on commercial mixes, and suffering from dental disease had consistently lower packed cell volumes (pcv), rbc counts, hemoglobin values, and lymphocyte counts in comparison with rabbits kept outside with a more natural diet and exercise. rabbit erythrocytes are typical mammalian anucleate biconcave discs with an average diameter of 6.8 μm, which is midway between that of the cat and the dog. 2 erythrocyte size varies between 5.0 and 7.8 μm, which is often reported as a marked anisocytosis (fig 2) . 
the short life span (57 days) and high turnover of erythrocytes are reflected as polychromasia, which is not clinically significant. the presence of a few nucleated rbcs (1-2 per 100 leukocytes) and the occasional howell-jolly body (fig 2) should be considered within the normal reference range for rabbits and not an indicator of cellular regeneration. the published reference range for pcv in the rabbit is 30% to 50%, but pet rabbits often have lower values of 30% to 40%. 2 values higher than 45% may indicate dehydration, which is usually linked to gastrointestinal (gi) stasis. combined pcv and total protein (tp) measurements are useful to differentiate acute conditions from subclinical chronic diseases that have suddenly deteriorated. a pcv of less than 30% indicates anemia, especially if the rbc and hemoglobin levels are low as well. nonregenerative anemia associated with chronic disease is common in pet rabbits. otitis media, dental disease with or without abscesses, pneumonia, pododermatitis, mastitis, endometritis and pyometra, renal disease, and osteomyelitis are all examples of chronic conditions that can be associated with nonregenerative anemia in pet rabbits. regenerative anemia is manifested by significant and rapid reticulocyte production and usually indicates blood loss. causes of external hemorrhage in rabbits include trauma and severe flea infestation. common internal causes include hematuria due to kidney or bladder stones or bleeding uterine adenocarcinomas or endometrial aneurysms in does. intravascular hemolysis is an unusual cause of regenerative anemia after ingestion of the leaves and stems of potato plants and possibly other solanaceae. alliums (onion, garlic, and chives) may also cause heinz body anemia. 3 autoimmune hemolytic anemia has been reported in laboratory rabbits in association with lymphosarcoma, and isolated cases have been treated in private practice (harcourt-brown, personal communication, november 2006). 
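the pcv cutoffs above can be expressed as a simple triage sketch (an illustration of the text's thresholds only, not a diagnostic tool; the function name and messages are ours):

```python
def interpret_pcv(pcv_percent):
    """rough triage of a rabbit pcv using the cutoffs given in the text:
    published reference range 30%-50%; values above 45% suggest
    dehydration (often linked to gi stasis); values below 30% indicate
    anemia, especially if rbc count and hemoglobin are also low.
    """
    if pcv_percent > 45:
        return "possible dehydration (check for gi stasis)"
    if pcv_percent < 30:
        return "anemia (check rbc count and hemoglobin)"
    return "within the published reference range"

print(interpret_pcv(48))  # possible dehydration (check for gi stasis)
print(interpret_pcv(26))  # anemia (check rbc count and hemoglobin)
```

as the text notes, pcv should be read together with total protein to separate acute conditions from decompensated chronic disease.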
lead toxicosis is a cause of regenerative anemia, characterized by many nucleated erythrocytes, hypochromasia, poikilocytosis, and basophilic cytoplasmic stippling. 2 nucleated red cells in excess of 1% to 2% of the rbc can be linked with the acute, septicemic phase of an infectious disease, although this is unusual because of the presence of a nonregenerative anemia from underlying chronic disease. a well-prepared air-dried smear is required for an accurate differential white cell count and evaluation of the cytological appearance of each cell type (figs 2-6). differential white cell counts and cell morphology can be used to develop a differential diagnosis list and to determine the general condition of the patient. in domestic carnivores, anisocytosis usually reflects the presence of reticulocytes and indicates regenerative anemia; this is not true in rabbits, in which 1% to 4% of the circulating erythrocytes may be reticulocytes, and the occasional howell-jolly body is not clinically significant either. interpretation of wbc data from rabbits is different from other domestic species, including dogs, cats, and birds, in which a leukocytosis is the response to inflammation. with rabbits, although leukocytosis can be identified in cases that have been diagnosed with lymphosarcoma, it is not the usual response to inflammation. laboratory investigations have shown no increase in the total number of circulating leukocytes in rabbits injected with bacteria or yeast, although fever, increased plasma cortisol concentrations, neutrophilia, and lymphopenia were observed. 5, 6 in clinical practice, rabbits with sepsis can show a variety of responses, including a neutropenia, a normal neutrophil count, or a mature neutrophilia accounting for more than 90% of wbcs. band neutrophils appear to be a rare finding in clinical infection, and the absence of a left shift does not rule out an infectious problem. 
an alteration of the neutrophil/lymphocyte ratio showing a relative neutrophilia coupled with lymphopenia may indicate a response to infection. this ratio is approximately 1:1 in an adult healthy rabbit, but stress can alter the neutrophil/lymphocyte ratio. transport, waiting in a room full of unfamiliar sounds and smells, or even restraint for clinical examination can change the ratio. gentle clinical examination and blood collection do not seem to affect the differential white cell count, whereas more prolonged stress, such as travel or exposure to barking dogs in the waiting room, can induce a lymphopenia and relative neutrophilia that may persist for up to 24 to 48 hours. 7 the total wbc can be used to further differentiate acute stress from chronic stress (e.g., malnutrition, improper husbandry, prolonged social stress, dental disease), as both a leukopenia and lymphopenia are more common with chronic stress. chronic stress is often reported as general leukopenia and lymphopenia in a rabbit's differential wbc evaluation. rabbit eosinophils measure 10 to 16 μm in diameter and have a purple bilobed or horseshoe-shaped nucleus. the cytoplasm is obscured by so many granules that the cell looks orange-pink and foamy. the abundance of granules is the main difference between the eosinophils and the neutrophils. removal of histamine and histamine-like toxins is the most important function of the eosinophils, suggesting that they play an important role in controlling allergic reactions. the primary role of the lymphocytes is to respond to those activities that stimulate the immune system. in rabbits, lymphocytes are primarily found in the blood, spleen, bone marrow, lymph nodes, and the lymphatic tissues in the gi tract. the number of circulating lymphocytes is a balance between the cells entering and leaving the bloodstream and does not necessarily reflect a change in lymphopoiesis. 
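the ratio described above is a single division over absolute counts; a minimal sketch (illustrative only; the ~1:1 healthy-adult figure comes from the text, and the counts in the example are hypothetical):

```python
def nl_ratio(neutrophils, lymphocytes):
    """neutrophil/lymphocyte ratio from absolute counts.

    the text gives ~1:1 as typical for a healthy adult rabbit; a relative
    neutrophilia with lymphopenia (ratio well above 1) may indicate a
    response to infection, but stress alone can shift the ratio, so the
    number must be read alongside the history and total wbc.
    """
    return neutrophils / lymphocytes

# hypothetical counts: 6000 neutrophils vs 2000 lymphocytes
print(nl_ratio(6000, 2000))  # 3.0
```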
in rabbits, it has been shown that an increase in adrenaline levels (acute stress) induces lymphocytosis, whereas raised cortisol levels (chronic stress) lead to lymphopenia. viral diseases may result in a normal or higher lymphocyte count. other causes of lymphocytosis are lymphoma and lead poisoning. 8 eosinophilia in rabbits can occur when tissues rich in mast cells, such as the skin, lungs, gi tract, or uterus, are involved in disease. eosinophilia can indicate the presence of an abscess and may be found during wound healing. in other species, eosinophilia is linked to parasitic diseases, especially when larvae are moving through tissues, but this is rare in domestic rabbits. encephalitozoon cuniculi does not stimulate an eosinophilic response. in clinically healthy rabbits, a very low eosinophil count, even 0, is a common finding. high levels of cortisol (chronic stress) can induce eosinopenia. monocytosis is linked with chronic inflammation (e.g., abscesses, mastitis, tympanic bulla empyema). however, the absence of a monocytosis does not rule out inflammation. monocyte counts within the normal range are a common finding in rabbits with osteomyelitis due to dental disease. the rabbit basophil measures 8 to 12 μm in diameter. its nucleus is less segmented than that of the eosinophil or heterophil and difficult to see because of the many deep purple granules that obscure the light gray cytoplasm. as in other species, the function of the rabbit basophil is not fully understood, but these cells are often present in large numbers on rabbit blood smears. basophilia with concurrent eosinophilia has been described in rabbits with chronic skin problems (e.g., atopy, pyoderma). 3 in rabbits, ast is widely distributed in many tissues. it is present in cardiac tissue and muscle, as well as the liver, and has a short half-life (5 hours). 
although higher ast levels may be found in patients diagnosed with liver damage, struggling during collection or hemolysis of the sample also raises ast levels. creatine kinase (ck) levels also increase after restraint and are purely muscular in origin. ldh is produced by muscle and liver cells in rabbits and therefore is not beneficial as a diagnostic tool for many disease evaluations. reference levels for ast, ck, and ldh can be found in table 1 . in many mammalian species, alanine aminotransferase (alt) is a useful indicator of hepatocyte damage because of its specificity for liver tissue and its long half-life (45-60 hours in dogs). in rabbits, alt is not as useful an indicator of liver damage as in other species because, as in other herbivores (e.g., horses, cattle, guinea pigs), alt is not liver specific and has a shorter half-life (around 5 hours). however, alt concentrations are not affected by restraint and therefore can be used as a diagnostic tool. slightly increased alt levels are a common finding in apparently healthy rabbits. mildly increased alt levels in healthy rabbits have been attributed to exposure to low concentrations of toxic substances, such as resins in wood-based litter or aflatoxins in food. 9 raised alt levels (with alkaline phosphatase [alp], bilirubin, and gamma-glutamyltransferase [ggt]) can be associated with hepatic lipidosis or may be found in patients with hepatic coccidiosis (eimeria stiedae) or torsion of a liver lobe. alp is a widely distributed enzyme; liver and bone contain the highest concentrations, but it is also found in bowel epithelium, kidney tubules, and placenta. a physiological cause of high serum alp concentrations is osteoblastic activity in growing animals. animals with bone lesions will show raised alp levels. as a liver enzyme, alp does not increase because of hepatocellular damage but is indicative of bile stasis (e.g., hepatic coccidiosis, liver abscesses, neoplasia, lipidosis). 
extrahepatic causes, such as abscesses or neoplasia, can cause bile stasis by occluding the bile ducts. rabbits produce 2 alp isoenzymes in the liver. an intestinal isoenzyme is quite abundant, so serum alp concentrations are actually the sum of these 3 isoenzymes, which may explain why many reference ranges are vague and wide and why raised alp levels in clinically healthy animals are a common finding. alp does have a diagnostic value because it is not altered by restraint and thus is considered a good indicator of real tissue damage. reference levels for alt and alp can be found in table 1 . ggt is a useful indicator of chronic liver disease with bile stasis in horses, cattle, and domestic carnivores; however, in rabbits, the activity of ggt is low. activity of this enzyme is high in the kidney, yet renal ggt does not reach the circulation because it is eliminated with the urine. therefore, elevated ggt levels in the rabbit are most often linked to obstructive lesions of the bile ducts, but with a lower sensitivity than that found in other species. reference levels for ggt can be found in table 1 . bile pigments are produced during the breakdown of the heme molecule by hepatocytes and are excreted into the bowel. bilirubin levels reflect either hepatocyte or bile tract function. rabbits produce a large amount of bile for their weight, and the main compound is biliverdin, for which there is no commercial laboratory test. about 30% of biliverdin is converted to bilirubin, which is found in the blood in measurable amounts. the main cause of hyperbilirubinemia is bile flow obstruction. in young rabbits, hepatic coccidiosis is the most common cause of biliary obstruction, while in adult rabbits it is biliary neoplasia. 
cellular causes of hyperbilirubinemia, and rarely icterus, are aflatoxicosis (e.g., eating moldy food), which induces hepatic fibrosis (alt is usually raised too), and viral hemorrhagic disease, which causes acute hepatic necrosis with concurrently high levels of all hepatocellular enzymes. if the rabbit survives long enough, icterus may be seen. bilirubin may also be increased in diseases that cause hemolysis (e.g., immune-mediated hemolytic anemia). in other species, a comparison of preprandial and postprandial serum bile acid concentrations is used as an indicator of liver function. in rabbits, cecotrophy makes it almost impossible to fast the animal for the preprandial sample, so bile acid measurement is not a routine procedure in clinical practice, although persistently raised bile acids have been reported in association with hepatic disease. 10 cholesterol is synthesized in the liver or obtained from the diet and is a precursor of steroids. it is metabolized by the liver and excreted in bile. in carnivores, hypercholesterolemia is linked with several metabolic diseases, such as hypothyroidism, hyperadrenocorticism, diabetes, and hepatopathies, whereas hypocholesterolemia indicates liver failure. cholesterol and triglyceride levels peak after a meal, and fasting is needed for accurate measurement, which limits their diagnostic value in rabbits because of cecotrophy. abnormal levels of cholesterol and triglycerides can be related to a diet rich in fats, obesity, or hepatic disease. in anorexic patients, hypercholesterolemia carries a poor prognosis because it indicates end-stage hepatic lipidosis. hypercholesterolemia has also been linked with pancreatitis, diabetes mellitus, nephrotic syndrome, and chronic renal failure. 3 decreased cholesterol levels in rabbits might be found in cases of liver failure, chronic malnutrition, and even pregnancy (up to 30% below the range). 
unlike in other species, amylase is an almost pure pancreatic enzyme in rabbits, with little or no content in salivary glands, intestinal tissue, or liver. therefore, raised amylase levels in rabbits reflect pancreatic damage from pancreatitis, pancreatic duct obstruction, peritonitis, or abdominal trauma. renal failure can also cause hyperamylasemia because this enzyme is cleared by renal filtration. corticosteroids (exogenous or endogenous) can raise amylase values in serum, whereas hemolysis lowers them. there is little information on the function and diagnostic value of lipase in rabbits. increased lipase values may indicate cellular damage to the pancreas, as in other species. as with amylase, lipase is artifactually elevated by corticosteroids. urea is a by-product of protein catabolism and is excreted by the kidneys into the urine. urea levels in rabbits depend on the circadian rhythm (peaking in late afternoon and early evening), the quantity and quality of proteins in the diet, nutritional status, liver function, intestinal absorption, urease activity of the cecal flora, and hydration status. often, small changes in urea levels are difficult to interpret. reference ranges have been determined from laboratory rabbits fed a standardized diet and bled at the same time of day, whereas clinicians see pet rabbits fed a variety of foods, with samples taken at random times. slight elevations in blood urea are a common finding. reference levels for urea can be found in table 1 . creatinine is a protein catabolite that is produced from muscle creatine and excreted by glomerular filtration at a constant rate. creatinine is a more reliable test of renal function than blood urea. in rabbits, prerenal azotemia can be caused by dehydration because rabbits have a limited ability to concentrate urine. 
only a few hours without drinking, or losing fluids as in cases of ileus or diarrhea, may increase urea and creatinine to levels compatible with renal failure. urea and creatinine levels rapidly return to normal once the dehydration deficit is corrected. stress, shock, and cardiac disease, all classified as prerenal causes, can also decrease renal perfusion and produce azotemia. another potential cause of prerenal azotemia is gi hemorrhage, which results in increased protein digestion. azotemia is also indicative of renal disease, usually in association with hyperkalemia or hypokalemia, hypercalcemia with coexisting hyperphosphatemia, nonregenerative anemia, and isosthenuric urine. the most common cause of renal failure in rabbit patients is e. cuniculi, which causes granulomatous and then fibrotic lesions in the renal parenchyma. other possible causes of renal failure are chronic interstitial nephritis, glomerulonephritis, pyelonephritis, nephrolithiasis, renal cysts, and lymphosarcoma. postrenal azotemia can occur because of obstruction of urine flow as a complication of bladder sludge or urolithiasis. an abdominal radiograph is mandatory in any azotemic rabbit. blood urea levels below the reference range indicate hepatic insufficiency or muscle mass loss (e.g., from dental disease). reference levels for creatinine can be found in table 1 . glucose metabolism in rabbits is different from that of dogs or cats. not only do rabbits eat continuously during the day, but they also use volatile fatty acids produced by the cecal flora as a primary energy source. a fasting blood sample is impossible to obtain because rabbits ingest fecal pellets. a rabbit that is not given food can continue to ingest cecotrophs. it has been shown that 4 days of starvation does not reduce blood glucose levels in rabbits. 11 diabetes mellitus is rare in rabbits, although hyperglycemia is a common finding and may be associated with glucosuria. 
reports of confirmed diabetes mellitus are from laboratory strains bred as a model for human diabetes. clinical signs commonly observed in pet rabbits with diabetes mellitus are polyphagia, polyuria, polydipsia, very high blood glucose levels (>500 mg/dl), and glucosuria with significantly elevated glycosylated hemoglobin and raised triglycerides; however, obesity and ketoacidosis are not observed. 12 in clinical practice, most cases of hyperglycemia are due to stress (e.g., transport, handling, venipuncture, underlying disease). a marked hyperglycemia (around 350 mg/dl) is reported in cases of acute intestinal blockage by a foreign body. 10 early mucoid enteropathy may be associated with hyperglycemia. in rabbits with gi stasis, hyperglycemia carries a bad prognosis because it may indicate hepatic lipidosis. other causes of raised serum glucose levels are traumatic or hypovolemic shock and hyperthermia. acute pancreatitis could cause blood glucose abnormalities, even though the role of the pancreas in glucose metabolism in rabbits is less important than in other species. glucocorticoids and other drugs can raise blood glucose. hyperadrenocorticism has not been described in rabbits. hypoglycemia is an important finding. in anorexic patients, it indicates that the rabbit is using adipose tissue and is at risk of developing hepatic lipidosis. hypoglycemia may occur in terminal mucoid enteropathy, liver failure, or other chronic diseases. rabbits with acute sepsis may be hypoglycemic too. 3 insulinoma has not been described in rabbits. reference levels for glucose can be found in table 1 . the complex gi physiology and the limited ability of the kidneys to correct acid-base alterations make rabbits susceptible to electrolyte imbalances. any disturbance of serum electrolytes should alert the clinician to possible pathology of the digestive or excretory system. anorexia can rapidly lead to metabolic acidosis. the diagnostic value of sodium for rabbit patients is low. 
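the glucose figures above lend themselves to a small sketch (again purely illustrative; the ~350 mg/dl obstruction figure and the >500 mg/dl diabetes figure come from the text, and the messages are ours):

```python
def flag_glucose(mg_dl):
    """rough flags for rabbit blood glucose based on figures in the text:
    confirmed diabetes mellitus cases show very high values (>500 mg/dl);
    marked hyperglycemia around 350 mg/dl is reported with acute
    intestinal blockage; most other hyperglycemia is stress-related.
    """
    if mg_dl > 500:
        return "very high: consistent with reported diabetes mellitus cases"
    if mg_dl >= 350:
        return "marked hyperglycemia: reported with acute intestinal blockage"
    return "interpret with stress (transport, handling, venipuncture) in mind"

print(flag_glucose(360))  # marked hyperglycemia: reported with acute intestinal blockage
```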
hypernatremia can be due to dehydration or loss of fluids (e.g., diarrhea, peritonitis, burns, myiasis). hyponatremia is usually associated with polyuric renal failure (acute or, more commonly, chronic), when urine flow in the renal tubules is too fast to allow sodium-potassium exchange. lipemia or hyperproteinemia can artifactually decrease sodium levels in serum. reference levels for sodium can be found in table 1 . potassium is homeostatically important because it is essential for maintenance of membrane potential. changes in membrane potential can be lethal (e.g., impaired contractility of myocardial cells due to hyperkalemia can cause arrhythmias and cardiac arrest). intracellular and extracellular potassium levels are maintained by a complex mechanism of exchange between cells and their microenvironment, regulated by several hormones such as aldosterone, insulin, and catecholamines. hypoadrenocorticism has not been described in rabbits. instead, raised potassium levels are more often due to acute renal failure or urine flow obstruction. severe tissue damage can also cause hyperkalemia by dispersing potassium into the extracellular space. for the same reason, hemolysis (e.g., intravascular, poor sampling technique, letting the sample sit too long before separating the serum) can artifactually raise potassium levels. another indirect cause of higher potassium levels in serum is metabolic acidosis, which increases the exchange of potassium ions across the cell membrane. reference levels for potassium can be found in table 1 . causes of hypokalemia in rabbits include dietary insufficiency and loss of fluids from the gi system (e.g., saliva, mucoid diarrhea) or the kidneys (e.g., renal failure, diuretic drugs). stress-induced increases in catecholamine levels can also cause hypokalemia. alkalosis, although rare in rabbits, decreases blood potassium concentrations by stimulating the cellular uptake of potassium ions. 
hyperproteinemia and lipemia can artifactually decrease measured blood potassium levels. true hypokalemia may present clinically as sensory depression and muscle weakness, and a correlation between low blood potassium levels and "floppy rabbit syndrome" has been reported. 10 in other herbivorous species (e.g., horses), blood potassium levels fluctuate widely, depending on physical activity or even on the quantity of saliva produced during meals. 13 these fluctuations in serum potassium are purely physiological and may occur in rabbits. calcium is found in blood either bound to serum proteins or ionized free in the serum. most laboratories report a total serum calcium value, which is the sum of bound and ionized calcium. ionized calcium is a more precise measurement but is more difficult and expensive to test. total serum calcium is influenced by dietary intake, serum protein levels, and other metabolic conditions. calcium metabolism in rabbits is different from that in other animals. blood calcium levels are influenced more by the calcium content of the diet than in the dog or the cat. rabbits absorb calcium in proportion to the concentration of the ion in the gut, and the kidney eliminates the excess. vitamin d is not important in calcium absorption if dietary levels are high, yet it does play an important role if dietary levels are low. vitamin d is also important in calcium distribution within the body. as in other species, parathyroid hormone regulates blood calcium levels, but the level at which calcium is moved from the blood to the bones is high in rabbits. consequently, blood calcium levels are higher, and the normal range is broader, than in other species. growing youngsters and pregnant does use more calcium, resulting in lower blood calcium concentrations. 
in these rabbits blood calcium concentrations rarely rise above 14 mg/dl, even when fed calcium-rich diets, whereas adult rabbits on a varied diet can show calcium levels up to 16 to 17 mg/dl ( table 1) . the urinary excretion rate of calcium is around 45% to 60% for rabbits, whereas most mammals excrete no more than 2% of their calcium through renal filtration. this predisposes rabbits to the formation of sludge and stones in the rabbit urinary system. although the constant excretion of calcium could be a cause of renal failure in rabbits fed unbalanced diets, hypercalcemia is also a consequence of renal disease in rabbits because of the inability of the kidney to eliminate excess calcium. measurement of blood calcium is essential to diagnose and treat renal disease in rabbits. hypocalcemia is rare but is reported in rabbits. the most frequent cause of low blood calcium levels is hypoalbuminemia due to poor nutrition. hypocalcemic seizures have been described in late-pregnant and lactating does. 13 phosphorus is involved in many enzymatic systems in rabbits, but its main function is contributing to the proper formation of bones and teeth. because phosphorus is present inside cells, blood phosphate concentrations are easily increased by hemolysis (e.g., spontaneous or sampling problems). blood phosphate values should always be evaluated with blood calcium levels to determine the mineral balance in patients diagnosed with urinary tract stones, dental disease, or other signs of nutritional secondary hyperparathyroidism. because the kidney is the main organ involved in phosphorus balance by regulation of glomerular filtration and tubular reabsorption, blood phosphate levels can be an indirect measurement of kidney function and, in general, parallel azotemia. serum phosphorus levels can be elevated as a result of prerenal, renal, and postrenal effects. 
hyperphosphatemia usually indicates chronic kidney failure (a loss of more than 80% of nephrons), given that serum phosphorus levels are normalized by compensatory mechanisms in early renal disease. hyperphosphatemia may also be an indicator of soft tissue trauma. reference levels for phosphorus can be found in table 1 . hypophosphatemia is not rare, but its clinical significance is unknown at this time; dietary deficiencies or reduced intestinal absorption may be involved. total protein, consisting of the sum of albumin and globulin, is an important parameter in any species of animal. many factors (e.g., age of the animal, reproductive status, pregnancy) can affect tp levels. total serum protein levels can be artifactually raised by prolonged venous stasis at the sample site, through fluid loss caused by digital pressure on the vessel from which the blood is collected. reference levels for tp, albumin, and globulins can be found in table 1 . the main cause of hyperproteinemia in rabbits is dehydration. raised tp levels may also indicate a chronic infectious or metabolic process. measuring the albumin and globulin fractions helps to differentiate the causes of hyperproteinemia. hypoproteinemia is usually due to chronic malnutrition or protein loss. if both albumin and globulin are low, hemorrhage or protein loss through exudative skin lesions such as burns or flystrike should be considered. other possible causes must be examined in cases of hypoalbuminemia with normal or raised globulins. because the liver is the only site of albumin synthesis, a lowered albumin level may indicate advanced hepatic disease, such as hepatic coccidiosis (e stiedae) or scarring and necrosis due to the migration of cysticercus pisiformis (taenia pisiformis) larvae. protein-losing nephropathies (glomerulonephropathy) and enteropathies are rarely diagnosed in rabbits. a common cause of hypoalbuminemia in pet rabbits is chronic malnutrition, either from a poor diet or advanced dental disease. 
all causes of reduced cecotrophy (e.g., dental disease, obesity, back pain) can be reflected as low protein, especially low albumin, levels. a method of evaluating serum proteins is electrophoresis (eph), which divides the globulins into distinct fractions. in acute disease, alpha-globulins are elevated; consequently, a rabbit with an alpha-globulin peak may have a bacterial infection or a developing abscess, or may be febrile. the beta portion of the globulins consists of several proteins classified as "acute-phase" proteins, including fibrinogen. fibrinogen correlates with inflammation in rabbits, although the correlation is not as evident as in other species. plasma is preferable to serum for eph because it includes fibrinogen. gamma-globulins are mainly antibodies, and a peak in this fraction indicates a subacute to chronic inflammation, especially when associated with a bacterial infection. coronavirus infections lead to an impressive increase of rabbit globulins, as feline coronavirus does in cats, but this disease is probably limited to laboratory settings. the correlation between eph curves and different rabbit pathologies is unknown and currently under investigation. there are serological tests for antibodies against e cuniculi, toxoplasma gondii, treponema cuniculi, myxomatosis, viral hemorrhagic disease, and pasteurella multocida, although their availability varies in different parts of the world. the most common serological assay used in private practice is for antibodies to e cuniculi. laboratory studies have shown that infected rabbits start to develop a measurable serological immune response 4 weeks after infection, 2 weeks before e cuniculi is found in the kidney or in the urine, and at least 8 weeks before any brain lesion. 14, 15 if a serologic test for e cuniculi is negative in a neurological rabbit patient, one can assume, with confidence, that this patient does not have the disease. 
if the test is positive, neurological signs may be due to e cuniculi or a different disease process. the seropositive result may be due to past exposure to the pathogen, which is common within most rabbit populations. further laboratory tests may be helpful to determine a definitive diagnosis. a correlation between antibody titers and neurological signs is hard to prove. a rising titer after 2 weeks is often considered diagnostic in suspect cases. the kidney is a target organ for e cuniculi, so evaluation of renal function and structure by biochemistry, urinalysis, and ultrasound may aid the diagnostic overview. inflammatory changes in the cerebrospinal fluid are suggestive of e cuniculi but are nonspecific. at the present time, the definitive diagnosis of rabbit encephalitozoonosis requires histopathology or identification of the spores in the urine by microscopy or polymerase chain reaction assay. although serology for t cuniculi can be used to screen a breeding facility, it has little value in clinical practice. at least 3 months are needed for antibody levels to become measurable in the blood, even with clinically evident dermatological lesions. this substantial lag in the development of antibody titers can lead to false-negative test results. positive titers without skin lesions can indicate subclinical disease or a previous infection in which the skin lesions were missed. antibody titers fall rapidly once the rabbit is treated for t cuniculi. 2 serological testing for p multocida is available in some countries, but its use in clinical practice is limited. a single positive result has no clinical significance because many rabbits harbor the bacteria in the nasal cavities without illness. a positive titer can also indicate previous exposure to the bacteria rather than clinical infection. repeat testing after 2 weeks, looking for a rising titer, can be helpful in obtaining a correct diagnostic evaluation. 
results from rabbits younger than 2 months can be difficult to interpret because maternal antibodies are still present. a high antibody titer is not protective and may indicate subclinical disease. urine samples can be collected from spontaneous micturition, bladder expression, catheterization (quite difficult because sedation/anesthesia is required), or cystocentesis, the last being the preferred method for obtaining a sample for bacterial culture. care should be taken when performing cystocentesis, because microtrauma to the bladder wall can stimulate local mineralization and it is possible to puncture the cecum, an enlarged uterus, or even an abscess. if bacteriology is not required, free catch from a clean litter tray is the easiest way to obtain a urine sample. manual expression of the bladder should be done carefully because the wall is thin and ruptures easily, especially if an obstruction is present or the rabbit struggles. in cases of chronic cystitis, the risk for bladder rupture is lower because the bladder wall is thickened. it is preferable to collect the first urine of the morning and to run the analysis as soon as possible. normal rabbit urine is dense and rich in minerals, so it should be centrifuged or filtered before biochemical analysis or microscopic examination. clear urine indicates low calcium excretion, which may be pathological (renal failure) or physiological (growing or lactating rabbits). the color may range from light yellow to reddish brown. in most cases, dark urine is caused by dietary pigments; however, the urine should be checked for hematuria, which may be caused by uroliths, urinary tract inflammation/infection, uterine problems, or anticoagulants. test dipsticks work well to evaluate blood, glucose, ketone, and ph levels in rabbit urine but are not accurate for other parameters. glucose may be found as a consequence of stress hyperglycemia; checking urine collected at home can help to rule out stress-related findings.
true glucosuria indicates altered energy metabolism, such as hepatic lipidosis or, very rarely, diabetes mellitus. ketones are always abnormal and indicate anorexia (even of short duration), hepatic lipidosis, pregnancy toxemia, or diabetes. the ph of rabbit urine tends to be high (7.5 to 9) in rabbits fed a correct diet. acidic urine indicates acidosis due to anorexia, fever, pregnancy toxemia, or hepatic lipidosis. because a normal kidney would eliminate acidic urine in cases of acidosis, the finding of alkaline urine in an anorexic rabbit could indicate compromised renal function. specific gravity (sg) indicates the ability to concentrate urine. a refractometer is a more reliable tool than dipsticks for measuring sg. most normal rabbit urine is quite dilute, with an average sg of 1.015 (range, 1.003-1.036). prerenal azotemia is associated with a raised sg (>1.030), whereas true azotemia linked to renal failure is associated with dilute urine (sg <1.013). urine specific gravity is most useful when examined alongside urine protein concentrations. protein traces in clinically normal rabbits, especially the young, are not significant, whereas dilute urine (<1.020) containing protein is very significant. proteinuria appears earlier than biochemical changes in renal disease, making this test useful in clinical practice. measuring the urine protein/urine creatinine ratio (<0.6 is suggested as normal) may further improve the sensitivity of urine specific gravity testing. sediment examination can differentiate normal urine that is rich in crystals from sludge: after centrifugation, normal crystals resuspend when shaken, whereas sludge remains as a solid mass. cytology is not very different from that of other mammals; a small number of leukocytes is considered normal in rabbits. gram or trichrome stains can reveal e cuniculi spores.
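the specific gravity and protein thresholds described above can be sketched as simple decision rules. this is a minimal illustration of the cut-offs stated in the text, not a diagnostic tool; the function names and the boolean interface are our own.

```python
def classify_azotemia(sg: float) -> str:
    """classify an azotemic rabbit by urine specific gravity (sg).

    thresholds from the text: prerenal azotemia is associated with
    concentrated urine (sg > 1.030), whereas true azotemia linked to
    renal failure is associated with dilute urine (sg < 1.013).
    """
    if sg > 1.030:
        return "prerenal azotemia likely"
    if sg < 1.013:
        return "renal azotemia likely"
    return "indeterminate"


def proteinuria_significant(sg: float, upc: float, young: bool = False) -> bool:
    """flag clinically significant proteinuria when protein is detected.

    from the text: a urine protein/creatinine ratio (upc) < 0.6 is
    suggested as normal; protein traces in clinically normal (especially
    young) rabbits are not significant; protein in dilute urine
    (sg < 1.020) is very significant.
    """
    if upc >= 0.6:
        return True
    return sg < 1.020 and not young
```

for example, a rabbit with azotemia and sg 1.035 would be flagged as prerenal, while protein detected in urine of sg 1.008 would be flagged as significant even with a low upc.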
3
parathyroid hormone, hematological and biochemical parameters in relation to dental disease and husbandry in pet rabbits
laboratory medicine: avian and exotic pets
notes on rabbit internal medicine
the biology of laboratory rabbit
alteration of sleep in rabbits by staphylococcus aureus infection
hematological effects of exposure to three infective agents in rabbits
physiological stabilization of rabbits after shipping
rabbit surgical pathology
effects of t-2 toxin on ovarian activity and some metabolic variables of rabbits
textbook of rabbit medicine
the anatomy, physiology and the biochemistry of the rabbit
spontaneous diabetes mellitus in the new zealand white rabbit
veterinary laboratory medicine
serological and histological studies on adult rabbits with recent naturally acquired encephalitozoonosis
ferrets, rabbits and rodents clinical medicine and surgery
key: cord-018623-of9vx7og authors: saghazadeh, amene; rezaei, nima title: the physical burden of immunoperception date: 2019-04-27 journal: biophysics and neurophysiology of the sixth sense doi: 10.1007/978-3-030-10620-1_10 sha: doc_id: 18623 cord_uid: of9vx7og
the previous chapter introduced the immunoemotional regulatory system (immers). also, there was a brief discussion of the psychological states/psychiatric disorders that have so far been linked to the immers. the present chapter considers another aspect of the immers, in which physiological states/physical diseases can be fit to the immers. such as pemphigus [10, 11]. further, human studies provided evidence pointing to the increased development of emotional problems and edr-related disorders in patients with various types of aids, such as sle and multiple sclerosis (ms), in a disease state/severity-dependent manner [12] [13] [14] [15] [16] [17]. for example, among patients with childhood-onset sle, 95% manifest neuropsychiatric sle (nsle).
mood and anxiety disorders were the most common psychiatric conditions, with prevalence rates of 60% and 20%, respectively [13]. even among patients with sle without cns manifestations, about 40% suffer from psychological distress, compared with 6% of controls. it is thus not surprising that both emotional coping and depressive symptoms were correlated with non-nsle [14, 15]. interestingly, there was increased activation of brain regions related to emotion regulation/processing (e.g., the amygdala and superior temporal cortex) in sle patients. however, further analyses identified this increased activity of the emotional circuit as a consequence of cns involvement by sle [18]. among patients with ms, emotional troubles were more than twofold more likely to occur in patients who had an exacerbation or progressive nonremitting ms compared with stable patients. this was reflected by an increased rate of using emotion-focused coping styles in patients with relapsing-remitting multiple sclerosis (rrms) compared with stable patients [16, 17]. mood disturbance was correlated negatively with sil-2r levels and positively with joint pain in patients with ra [19]. consistent with data from human studies, animal experiments have also supported the link between emotionality-related behaviors and aids. clearly, aids result from immdr. interestingly, aids-related immdr has been observed in specific brain regions associated with emotional behaviors, particularly anxiety- and depressive-like behaviors [20]. in this manner, the link between aids and the immers is strengthened. the high rate of increased emotionality and emotional-like behaviors in aids has led to the proposal of the term autoimmune-associated behavioral syndrome (aabs). studies emphasize the pivotal role of cytokines and neuroendocrine factors in the pathogenesis of aabs [21].
the b-cell-activating factor (baff) transgenic mouse model, used as an experimental model of systemic lupus erythematosus (sle), rheumatoid arthritis (ra), and sjögren syndrome, exhibited an anxious phenotype along with changes in immune brain signaling, such as increased igg titers in the hippocampus, hypothalamus, and cortex and increased cd68 (a marker of activated microglia/macrophages) and gfap (a marker of activated astrocytes) immunoreactivity in the hippocampus, in mice at 4.5-5 months of age but not in young (2 months of age) mice [22]. eae models of ms showed an increase in levels of il-1β and tnf-α in the hypothalamus, indicating an inflammatory central basis for anxiety- and depressive-like behaviors [20]. these emotional deficits were shown to appear before the onset of ms [20, 23]. consistently, behavioral problems usually manifest before symptoms of impaired cognitive and motor performance in dementia. moreover, this model showed an early (at day 4) and meaningful increase in circulating cytokine levels and cd3 + t cell counts. of note, these inflammatory markers began to decrease in the periphery (at day 8) almost when their infiltration into the cns (at day 10) started [23]. in the mrl-lpr model of aid, a well-documented model of emotional deficits [24, 25], a reduced preference for sucrose, an index of emotionality, was detected in 5- to 6-week-old mice [26]. this deficit could be diminished by immunosuppressive treatment with cyclophosphamide and was pronounced by chronic administration of il-6 [26].
along with psychosocial stressors, whether chronic or acute, and social network-related factors (i.e., social ties, social conflict, and social support), the experience of unpleasant emotions, including anger, depression, sadness, and stress, or, in general, extremely exciting emotions, often promptly pulls susceptible individuals down a steep road leading to cardiovascular events, particularly acs (for review, see references [27] [28] [29]). for example, emotional stress is ranked as the second most common neuropsychological cause of acute myocardial infarction (ami), being recorded in approximately 40-50% of these patients [30, 31]. at the molecular level, these patients have shown increased levels of proinflammatory cytokines (e.g., tnf-α, il-1, il-2, il-6, and il-18) and decreased levels of the anti-inflammatory cytokine il-10. it is thus not surprising that the inflammatory response and the respective cytokines are proposed as one of the possible mechanisms linking the experience of negative emotions or er-related disorders to the progression of cardiovascular diseases, along with the neuroendocrine system and apoptosis signaling pathways [27, 30, [32] [33] [34] [35]. notably, when patients with cardiovascular diseases are stratified according to their emotional background, some cytokines are more prominent than others: for example, tnf-α and il-10, but not il-6, with respect to depressive symptoms in chf patients [35]. cognitive reappraisal has been found to correlate positively with engagement of the lateral and prefrontal regions and inversely with engagement of the amygdala and medial orbitofrontal cortex [36]. the role of the inflammatory cytokine il-6 in the progression of cardiovascular diseases is widely appreciated [37].
studies suggest that il-6 mediates the relationship between reappraisal-related activation of the dorsal anterior cingulate cortex and preclinical atherosclerosis (evaluated by carotid artery intima-media thickness and inter-adventitial diameter) in healthy individuals [38]. the normative aging study followed older men (mean age 60.3 ± 7.9 years) for 3 years. there was a dose-response relationship between negative emotions, evaluated by the minnesota multiphasic personality inventory (mmpi), and the incidence of coronary heart disease (chd) within the duration of the study (p = 0.005) [39]. meanwhile, circulating levels of il-6 were positively associated with the reappraisal-related activation of the dorsal anterior cingulate cortex in healthy subjects [38]. however, higher reappraisal and suppression were positively and inversely associated with serum levels of crp, respectively [40]. on the other hand, the effect of watching an ice hockey match, a real-life emotional excitement, on serum levels of endothelin-1 (et-1) and il-6 was more pronounced in spectators with coronary artery disease than in healthy spectators [41]. patients with type d personality display two concurrent and opposite tendencies: a propensity to experience negative emotions, coupled with inhibition of their expression in front of other people. meta-analyses and comprehensive literature reviews have revealed that this personality type is positively associated with developing cardiovascular conditions and their consequent mortality and morbidity, as well as with a constellation of non-cardiovascular complaints (for details, see [42] [43] [44]).
also, individual studies, either cross-sectional or follow-up, have provided evidence of increased levels of the proinflammatory cytokine tnf-α and its receptors, stnfr1 and stnfr2 [45] [46] [47] [48], an enhanced il-6/il-10 ratio, and decreased levels of the anti-inflammatory cytokine il-10 in chf patients with type d personality compared with those without type d personality [45]. interestingly, in chf patients, the inflammatory effect of type d personality appears to closely resemble the effect of aging: there was a similarly increased pattern of stnfr1 and stnfr2 in younger chf patients with type d personality and in older patients without this personality trait [46]. plasminogen activator inhibitor-1 (pai-1) is a factor contributing to thrombosis-related cardiovascular diseases in elderly people. both cytokines and hormones take part in the regulation of the gene expression of pai-1 (for review, see reference [49]). in a model of premature immunosenescence, mice were assigned to either the fast or the slow group according to whether the time taken to explore the first arm of the maze was ≤20 s or >20 s, respectively [50]. compared with fast mice, slow mice expressed a higher emotional response to stress and had a shorter life span [50]. at the immunological level, slow mice showed a reduction in the proliferative response to concanavalin a (con a), in the related release of il-2 and il-1β, and in nk cell activity, while production of tnf-α increased [50]. an investigation of women who had to undergo breast biopsy indicated that this procedure should be considered an emotional stressor if the final diagnosis is benign. in parallel with this emotional stress, the immune system prepares itself before the procedure and seeks ways to prolong this preparation even 4 months after the procedure. this is a reflection of the joint regulation of our body by both the immune system and the emotional brain [51].
the immune system responds to this challenge by decreasing nk cell activity, decreasing production of ifn-γ, and increasing production of il-4, il-6, and il-10 [51]. further, there was a significantly positive relationship between the distress levels of mothers with breast cancer and those of their adult daughters. this persuaded scientists to investigate the immune profile and its association with distress in the daughters' group. daughters' distress levels were inversely associated with il-2, il-12, and ifn-γ production and also with il-2-induced natural cytotoxic activity (nca) [52, 53]. further, nca and the production of th1 cytokines were both negatively related to the degree of emotional distress [53]. antoni and his colleagues performed a genome-wide transcriptional analysis of leukocyte samples taken from women undergoing treatment of stage 0-iii breast cancer and demonstrated that negative affect, evaluated by the affects balance scale (abs), was significantly associated with a greater than 50% increased expression of leukocyte transcripts such as proinflammatory marker-related genes [54]. another multiplex analysis of the circulating concentrations of 27 cytokines identified the il-6 profile as a predictor of physical and cognitive functioning and the vascular endothelial growth factor (vegf) profile as a predictor of emotional functioning [55]. further, the experience of childhood emotional neglect/abuse was associated with lower levels of nca at the first evaluation after breast cancer surgery [56]. emotional processing and expression (evaluated by the emotional approach coping scale) tended to be inversely and positively correlated, respectively, with plasma levels of il-6, soluble tnf-receptor type 2 (stnf-rii), and crp in male patients with prostate cancer [57]. however, two of those correlations, those of emotional expression with il-6 and crp, did not reach significance (p < 0.10) [57].
an in vivo model of ultraviolet-b light-induced squamous cell carcinoma showed that high stress and anxiety levels can leave mice prone to considerably greater tumor progression through increased expression of immunosuppressive (ccl2 and t regulatory cells) and angiogenic (vegf, vascular endothelial growth factor) markers and decreased expression of antitumor immune markers (ctack/ccl27, il-12, and ifn-γ) [58]. on the other hand, a peripheral tumor could itself lead to a reduction in hippocampal function, reflected in increased depressive-like behaviors and memory impairment. this was, at least in part, underpinned by the triggering of an inflammatory process both in the hippocampus (↑il-1β, ↑il-6, ↑il-10, and ↑tnf-α) and in the circulation (↑il-6, ↑il-1β, and ↑il-10) [59, 60]. this process was significantly strengthened in infection models compared with peripheral tumor models, explaining the presence and absence of the sickness state in these models, respectively [60]. using the hospital anxiety and depression scale (hads), it was estimated that nearly half of patients on hemodialysis (hd) were in a depressive mood, a proportion significantly higher than that reported for the control group [61]. also, there was higher production of il-6 in hd patients with anxiety (hads≥8) than in those without anxiety (hads<8) [61]. at least one major psychiatric illness, particularly depression, affects more than 60% of hiv patients [62]. given that specific substances of abuse further aggravate the situation of hiv patients at the neuropathological level [62], the inverse correlation between affect regulation and regular substance use motivates the use of er as a therapeutic intervention in this population [63]. hiv patients commonly encounter situations in which the social self is threatened. this threat causes feelings of shame, which have been associated with increased proinflammatory cytokines [64].
both emotional and environmental factors affect the gut [65]. as with asthma, this effect is mediated by crf and similar neuropeptides and also by mucosal mast cells [65]. immunoregulatory factors, along with genetic and environmental factors, contribute to the pathogenesis of inflammatory bowel disease (ibd). patients with ibd confront various er-related problems in their social life in a disease severity-dependent manner, such as higher sensitivity to negative emotions, less frequent going out to bars/discos and delayed falling in love, and more depressive and anxious symptoms, not only compared with controls but also compared with patients with other chronic conditions [66] [67] [68] [69] [70]. neuroimaging studies have indicated a reduction in both the volume and the activation of brain regions related to emotional processing [71, 72]. the gray matter (gm) volume of both the frontal cortex and the anterior midcingulate cortex was reduced in patients with crohn's disease (a type of ibd) compared with controls. more interestingly, disease duration was found to correlate with the gm volumes of some brain regions, importantly limbic areas [71]. also, patients with ulcerative colitis (another type of ibd) showed reduced activity, evaluated by the bold signal, within the amygdala, thalamic regions, and cerebellar areas during an emotional visual task, compared with the control group [72]. for the first time, in 1991, cohen and his colleagues demonstrated that higher psychological stress is associated with lower resistance to respiratory viruses (rhinovirus type 2, 9, or 14, respiratory syncytial virus, or coronavirus type 229e) in a dose-response manner [73], while positive emotional style (pes), but not negative emotional style (nes), was found to correlate inversely with susceptibility to the common cold and upper respiratory infections following exposure to rhinoviruses and influenza a virus in a dose-response manner [74, 75].
various regression analyses showed that this correlation is independent of prechallenge virus-specific antibody, virus type, age, sex, education, race, body mass, season, and nes, as well as of optimism, extraversion, mastery, self-esteem, purpose, and self-reported health [74, 75]. by contrast, childhood socioeconomic status, assessed as "the number of childhood years during which their parents owned their home," was found to correspond negatively with both the risk of illness and infection and, in a word, with vulnerability to common colds [76]. this finding, along with the approximately similar increased risk of common colds in "those whose parents did not own their home during their early life but did during adolescence" and in "those whose parents never owned their home" [76], indicates that (a) the childhood period is shaped more by the family's socioeconomic status than other lifetime periods (e.g., adolescence), such that (b) it influences the mind-body background of later life. meanwhile, pes and nes were negatively and positively related, respectively, to the subjective report of unfounded symptoms of the common cold [74, 75]. the basal protein levels of all the investigated proinflammatory cytokines, e.g., il-1β, il-6, and il-8, were associated with illness symptoms/signs after exposure to rhinoviruses; however, il-6 was the cytokine that best predicted nasal symptoms/signs [77]. further, daily evaluation of emotional style and cytokine production in infected individuals on each of the 5 days after exposure to rhinoviruses and influenza virus showed that the production of inflammatory cytokines, including il-6, il-1β, and tnf-α, was negatively related to positive affect (pa) on that day or the next day [78]. neurological diseases including parkinson's disease (pd) and alzheimer's disease (ad) are accompanied by serious deficits in emotional processing in a severity-dependent manner.
for example, patients with frontotemporal dementia (ftd) show poor recognition of several basic emotions, e.g., anger, sadness, disgust, fear, and contempt. also, patients with probable ad are more likely to fail to recognize fear and contempt compared with controls [79]. in patients with mild ad, the recognition of more basic emotions is impaired, and they are less able to differentiate between some emotions, e.g., happiness and sadness [80]. apathy in patients with ad was found to correlate positively with dysfunction in the prefrontal and anterior temporal regions [81]. regarding memory recall, individuals with ad showed no preference for recalling emotional memories better than nonemotional ones, in stark contrast to healthy subjects, whether young or older [82]. experimental models provided evidence of deficits in emotional memory performance in ad, which can be diminished by treatment with cytotoxic necrotizing factor 1 (cnf1) [83]. this beneficial effect of cnf1 was accompanied by reduced il-1β expression in the hippocampus, along with other encouraging events, especially an enhanced energy supply as evaluated by atp (adenosine triphosphate) levels [83]. there is a spectrum of behavioral problems in patients with ad [84]. agitation is the second most common behavior in ad, after apathy, and has been associated with cognitive impairment [84]. inflammatory changes appear to pave the way for agitation: there were higher il-1β levels and decreased nk cell activity in both the morning and evening periods corresponding with the preagitation and agitation phases of ad [85]. esterling and his colleagues even demonstrated changes in the immune profile of ad patients' spousal caregivers, whether former or current. there was a reduced response of enriched nk cells to either ril-2 or rifn-γ in caregivers compared with controls [86].
interestingly, this response was positively related to levels of emotional and tangible social support [86]. women with severe or morbid obesity had significantly increased levels of the proinflammatory markers il-6 and hscrp in a bmi-dependent manner, and these were closely related to the anxiety and depression subscales of neuroticism, even after bmi adjustment [87]. since these patients had to undergo gastric surgery, these markers were measured again after surgery: interestingly, decreased levels of il-6 and hscrp were correlated with lower anxiety and depressive behaviors postoperation [87]. long-term maternal exposure (4 weeks before mating and during pregnancy and lactation) to a high-fat diet (hfd) led to a decrease in basal serum levels of cort in offspring [88]. in addition, stress-induced cort levels normalized more slowly at the end of a stress challenge in long-term hfd-exposed offspring than in standard chow diet (sd)-exposed offspring [88]. regarding inflammation-related markers, increased expression of il-6 and il-1ra in the amygdala of long-term hfd-exposed offspring relative to chow-exposed offspring was found in both females and males [88], whereas changes in the expression of nf-kb and i-kappa-b-alpha (ikba) were observed only in female, but not male, offspring [88]. mice subjected to a short-term (1-3 weeks) hfd also exhibited anxiety-like behaviors in addition to learning and memory impairments and had significantly higher levels of homovanillic acid, a metabolite of dopamine, in the hippocampus and cortex, but without any alteration in the gene expression of inflammatory markers [89].
further, chronic western diet (wd) intake led to increased responsiveness to lps, represented by higher and more prolonged protein/mrna measures of il-6 in both plasma and hypothalamus, whereas there was no significant difference in the plasma levels of other proinflammatory cytokines, such as tnf-α, il-1β, and ifn-γ, between the wd and sd groups [90]. in parallel with the increased expression of il-6, there was significantly increased mrna expression of socs-3, a member of the suppressors-of-cytokine-signaling (socs) family of proteins, in the hypothalamus of wd compared with sd mice [90]. however, the lps-induced mrna expression of tnf-α and ifn-γ in the hippocampus was significantly higher in wd than in sd mice. also, lps augmented the levels of adipokines, e.g., cst, leptin, and resistin, more significantly in wd mice than in mice exposed to sd [90]. altogether, both short-term and long-term obesity, whether in young adults or via maternal exposure, can lead to disturbed anxiety-like behaviors and impaired learning/memory, and brain inflammation might be one of the reasons behind these hfd-related events [88] [89] [90] [91]. chronologically, learning/memory is impaired first and then anxiety-like behaviors, whereas disturbance in depressive-like behaviors requires exposure to an immune challenge, such as lps [90]. a high age-adjusted prevalence rate of ~24% is estimated for metabolic syndrome (visceral obesity, dyslipidemia, hyperglycemia, and hypertension) in the united states [92]. both er- and edr-related subscales have been associated with the metabolic syndrome factor [93]. a disease pathway involving edr can even be proposed, triggered by low socioeconomic status (ses), followed by low reserve capacity for high negative emotions, and eventuating in the metabolic syndrome factor [94]. inflammation plays a major role in metabolic syndrome [95].
it has also been indicated that this role is not confined to the periphery: a mouse model of metabolic syndrome showed central inflammation (↑tnf-α, ↑il-1β, and ↑il-6) in the hippocampus, explaining the anxiety-like behavior in this model [96]. a possible pathogenic pathway for metabolic syndrome is initiated by emotional stress and the ensuing rise in proinflammatory cytokines, e.g., il-1, il-6, and tnf-α. these cytokines then lead to increased levels of ngf, which in turn stimulates a cascade toward insulin resistance, finally resulting in diabetes mellitus (for review, see [97]). skin diseases are frequently associated with troubled er, reflected in problematic emotional expression. for instance, patients with psoriasis, who are more likely to be alexithymic, exert more control over negative emotions and show more avoidance of emotional closeness and intimacy compared with controls [98]. this may explain the negative relationship between psoriasis symptoms and affective expression in a severity-dependent manner [99]. when patients with atopic dermatitis were compared with healthy controls, the effect of psychological stress on various immune parameters, such as ↑ eosinophil count, ↑ cd8 + /cd11 + and cla + t cells, and ↑ cytokines (il-5 and ifn-γ), was significantly strengthened [100]. the route from edr to immdr in acnegenesis has been explained elsewhere [101]. in summary, emotional distress makes sebocytes prone to increased expression of receptors for crh, melanocortins, b-endorphin, vasoactive intestinal polypeptide, neuropeptide y, and calcitonin gene-related peptide; the production of proinflammatory cytokines is then stimulated through these receptors on binding of their ligands.
the notion that sleep disturbances, irrespective of their cause, act as chronic stressors is supported by evidence at different levels, e.g., immunological, neuropathological, and neuroimaging studies (for review, see [102]). to assess the effects of sleep and its efficiency on susceptibility to respiratory infections, cohen and his colleagues studied healthy individuals: they first recorded sleep duration and efficiency over 14 consecutive days, then exposed the subjects to rhinoviruses, and at 5 days postchallenge calculated the rate of clinical cold development [103]. this investigation indicated that those with an average sleep duration (asd) ≤7 hours were three times more susceptible to clinical colds than those with asd ≥8 hours. further, there was a 5.5-fold increased risk of clinical colds in individuals with sleep efficiency <92% compared with those with an efficiency of ≥98%. it is therefore to be expected that circadian arrhythmia has been associated with edr-related parameters, e.g., decreased social motivation/functioning, decreased exploratory anxiety, and decreased emotional functioning [104, 105]. in a cancer population, the presence of circadian arrhythmia was associated with decreased levels of all the investigated cytokines, e.g., tnf-α, tgf-α, and il-6 [105]. alexithymic patients were more likely to suffer severe forms of stroke than non-alexithymic patients. the alexithymia trait appears to contain an inflammatory component, either as a cause or as an effect [106]. a study of patients with a first-ever symptomatic ischemic stroke revealed that (a) circulating levels of il-18 correlated positively with the severity of alexithymia, (b) stratification of patients made this correlation more statistically significant in those with right hemisphere lesions, and (c) these increased il-18 levels were more pronounced in alexithymic (tas-20 score 61) than in non-alexithymic patients [106].
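the sleep findings of cohen and colleagues above can be expressed as simple relative risks. this sketch only restates the two reported multipliers (≤7 h sleep, ~3x; efficiency <92%, ~5.5x); combining the two multiplicatively for a person meeting both criteria is our illustrative assumption, not a result reported in the study.

```python
def cold_risk_multiplier(avg_sleep_hours: float, efficiency_pct: float) -> float:
    """approximate relative risk of a clinical cold after rhinovirus
    challenge, versus long/efficient sleepers.

    from the text: average sleep duration <= 7 h carried ~3x the risk
    of >= 8 h sleepers, and sleep efficiency < 92% carried ~5.5x the
    risk of >= 98% efficiency. multiplying the two factors when both
    apply is an assumption for illustration only.
    """
    risk = 1.0
    if avg_sleep_hours <= 7:
        risk *= 3.0
    if efficiency_pct < 92:
        risk *= 5.5
    return risk
```

for example, a subject averaging 8 hours at 98% efficiency is the reference group (multiplier 1.0), while a short, inefficient sleeper accumulates both multipliers.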
in patients hospitalized for orthopedic injuries, the use of emotion-focused coping was found to correlate positively with levels of the proinflammatory cytokines il-6 and il-8, whereas it was negatively correlated with tgf-β levels [107]. compared with controls who received placebo, circulating levels of cytokines, in particular il-6, were increased 3 hours after a first-ever typhoid vaccination, coinciding with significant mood impairment [108, 109]. it has also been shown that when subjects perform psychological tasks (i.e., a stress condition) after injection, the il-6 response is inversely related to optimism in both the typhim vi typhoid vaccine and the saline placebo groups [110]. in response to an implicit emotional face perception task, there was increased activity in the subgenual anterior cingulate cortex (sacc), along with reduced activity within, and reduced functional connectivity between the sacc and, the anterior rostral medial prefrontal cortex (armpfc), nucleus accumbens, right amygdala, sts, and ffas. these changes were observed in inflammation-associated mood change compared with the placebo group [108]. chapter 10 presented evidence supporting the notion that there are a variety of psychological states/psychiatric diseases in which the immune responses, as well as emotion regulation, are impaired. this chapter provided evidence linking physiological states/physical diseases to impairment of both the immune system and emotion regulation. altogether, the immunoemotional regulatory system (immers) covers both psychological states/psychiatric diseases and physiological states/physical diseases. inevitably, such a system must comprise both immune mediators and neuroendocrine messengers, which will be discussed in the next chapter.
- asthma and emotion: a review
- the critical role of mast cells in allergy and inflammation
- contribution of stress to asthma worsening through mast cell activation
- emotion and pulmonary function in asthma: reactivity in the field and relationship with laboratory induction of emotion
- facial expressions of emotion and physiologic reactions in children with bronchial asthma
- cognitive and emotional influences in anterior cingulate cortex
- neural circuitry underlying the interaction between emotion and asthma symptom exacerbation
- acute stress affects cytokines and nitric oxide production by alveolar macrophages differently
- quality of life in allergic rhinitis and asthma: a population-based study of young adults
- pemphigus: etiology, pathogenesis, and inducing or triggering factors: facts and controversies
- stress as a trigger of autoimmune disease
- affective disorders in multiple sclerosis: review and recommendations for clinical research
- the incidence and prevalence of neuropsychiatric syndromes in pediatric onset systemic lupus erythematosus
- analysis of cognitive and psychological deficits in systemic lupus erythematosus patients without overt central nervous system disease
- major life stress, coping styles, and social support in relation to psychological distress in patients with systemic lupus erythematosus
- disease activity and emotional state in multiple sclerosis
- emotional stress and coping in multiple sclerosis (ms) exacerbations
- differences in regional brain activation patterns assessed by functional magnetic resonance imaging in patients with systemic lupus erythematosus stratified by disease duration
- temporal covariation of soluble interleukin-2 receptor levels, daily stress, and disease activity in rheumatoid arthritis
- altered cognitive-emotional behavior in early experimental autoimmune encephalitis--cytokine and hormonal correlates
- neurobehavioral alterations in autoimmune mice
- reduced adult neurogenesis and altered emotional behaviors in autoimmune-prone b-cell activating factor transgenic mice
- emotional change-associated t cell mobilization at the early stage of a mouse model of multiple sclerosis
- disturbed emotionality in autoimmune mrl-lpr mice
- behaviour of mrl mice: an animal model of disturbed behaviour in systemic autoimmune disease
- reduced preference for sucrose in autoimmune mice: a possible role of interleukin-6
- emotional triggering of cardiac events
- behavioral and emotional triggers of acute coronary syndromes: a systematic review and critique
- psychosocial factors and cardiovascular diseases
- mechanisms of acute myocardial infarction study (mamis)
- brain-heart connection and the risk of heart attack
- psychiatric and behavioral aspects of cardiovascular disease: epidemiology, mechanisms, and treatment
- type d personality and vulnerability to adverse outcomes in heart disease
- emotional triggers of acute coronary syndromes: strength of evidence, biological processes, and clinical implications
- comparison of circulating proinflammatory cytokines and soluble apoptosis mediators in patients with chronic heart failure with versus without symptoms of depression
- rethinking feelings: an fmri study of the cognitive regulation of emotion
- proinflammatory cytokines in heart failure: double-edged swords
- an inflammatory pathway links atherosclerotic cardiovascular disease risk to neural activity evoked by the cognitive regulation of emotion
- effect of negative emotions on frequency of coronary heart disease (the normative aging study)
- divergent associations of adaptive and maladaptive emotion regulation strategies with inflammation
- physiological responses to emotional excitement in healthy subjects and patients with coronary artery disease
- does type-d personality predict outcomes among patients with cardiovascular disease? a meta-analytic review
- type d personality among noncardiovascular patient populations: a systematic review
- type d personality, cardiac events, and impaired quality of life: a review
- usefulness of type d personality and kidney dysfunction as predictors of interpatient variability in inflammatory activation in chronic heart failure
- comparing type d personality and older age as correlates of tumor necrosis factor-alpha dysregulation in chronic heart failure
- type d personality is associated with increased levels of tumour necrosis factor (tnf)-alpha and tnf-alpha receptors in chronic heart failure
- cytokines and immune activation in systolic heart failure: the role of type d personality
- aging and plasminogen activator inhibitor-1 (pai-1) regulation: implication in the pathogenesis of thrombotic disorders in the elderly
- leukocyte function and life span in a murine model of premature immunosenescence
- psychologic stress, reduced nk cell activity, and cytokine dysregulation in women experiencing diagnostic breast biopsy
- mothers with breast cancer and their adult daughters: the relationship between mothers' reaction to breast cancer and their daughters' emotional and neuroimmune status
- increased emotional distress in daughters of breast cancer patients is associated with decreased natural cytotoxic activity, elevated levels of stress hormones and decreased secretion of th1 cytokines
- cognitive-behavioral stress management reverses anxiety-related leukocyte transcriptional dynamics
- relationship between circulating cytokine levels and physical or psychological functioning in patients with advanced cancer
- childhood adversity increases vulnerability for behavioral symptoms and immune dysregulation in women with breast cancer
- inflammatory biomarkers and emotional approach coping in men with prostate cancer
- high-anxious individuals show increased chronic stress burden, decreased protective immunity, and increased cancer progression in a mouse model of squamous cell carcinoma
- hippocampal dysfunctions in tumor-bearing mice
- peripheral tumors induce depressive-like behaviors and cytokine production and alter hypothalamic-pituitary-adrenal axis regulation
- emotional symptoms, quality of life and cytokine profile in hemodialysis patients
- neurobiology of hiv, psychiatric and substance abuse comorbidity research: workshop report
- affect regulation, stimulant use, and viral load among hiv-positive persons on antiretroviral therapy
- when the social self is threatened: shame, physiology, and health
- the effects of physical and psychological stress on the gastrointestinal tract: lessons from animal models
- psychosocial developmental trajectory of adolescents with inflammatory bowel disease
- a meta-analytic review of the psychosocial adjustment of youth with inflammatory bowel disease
- psychosocial symptoms and competence among adolescents with inflammatory bowel disease and their peers
- beyond standard quality of life measures: the subjective experiences of living with inflammatory bowel disease
- impact of inflammatory bowel disease and high-dose steroid exposure on pupillary responses to negative information in pediatric depression
- new insights into the brain involvement in patients with crohn's disease: a voxel-based morphometry study
- brain functional changes in patients with ulcerative colitis: a functional magnetic resonance imaging study on emotional processing
- psychological stress and susceptibility to the common cold
- emotional style and susceptibility to the common cold
- positive emotional style predicts resistance to illness after experimental exposure to rhinovirus or influenza a virus
- childhood socioeconomic status and host resistance to infectious illness in adulthood
- emotional style, nasal cytokines, and illness expression after experimental rhinovirus exposure
- infection-induced proinflammatory cytokines are associated with decreases in positive affect, but not increases in negative affect
- perception of emotion in frontotemporal dementia and alzheimer disease
- emotion-discrimination deficits in mild alzheimer disease
- cerebral blood flow correlates of apathy in alzheimer disease
- effects of normal aging and alzheimer's disease on emotional memory
- cnf1 increases brain energy level, counteracts neuroinflammatory markers and rescues cognitive deficits in a murine model of alzheimer's disease
- the spectrum of behavioral changes in alzheimer's disease
- association between altered systemic inflammatory interleukin-1beta and natural killer cell activity and subsequently agitation in patients with alzheimer disease
- psychosocial modulation of cytokine-induced natural killer cell activity in older adults
- relationship between adiposity, emotional status and eating behaviour in obese women: role of inflammation
- perinatal high fat diet alters glucocorticoid signaling and anxiety behavior in adulthood
- methylphenidate prevents high-fat diet (hfd)-induced learning/memory impairment in juvenile mice
- diet-induced obesity progressively alters cognition, anxiety-like behavior and lipopolysaccharide-induced depressive-like behavior: focus on brain indoleamine 2,3-dioxygenase activation
- maternal high-fat diet in mice programs emotional behavior in adulthood
- prevalence of the metabolic syndrome among us adults: findings from the third national health and nutrition examination survey
- the associations of emotion regulation and dysregulation with the metabolic syndrome factor
- association between socioeconomic status and metabolic syndrome in women: testing the reserve capacity model
- inflammation and metabolic disorders
- cognitive and emotional alterations are related to hippocampal inflammation in a mouse model of metabolic syndrome
- metabolic syndrome--neurotrophic hypothesis
- control of negative emotions and its implication for illness perception among psoriasis and vitiligo patients
- the depression, anxiety, life satisfaction and affective expression levels in psoriasis patients (der hautarzt; zeitschrift fur dermatologie, venerologie, und verwandte gebiete)
- neuroendocrine regulation of sebocytes - a pathogenetic link between stress and acne
- sleep deprivation as a neurobiologic and physiologic stressor: allostasis and allostatic load
- sleep habits and susceptibility to the common cold
- circadian arrhythmia dysregulates emotional behaviors in aged siberian hamsters
- elevated serum cytokines correlated with altered behavior, serum cortisol rhythm, and dampened 24-hour rest-activity patterns in patients with metastatic colorectal cancer
- disease outcome, alexithymia and depression are differently associated with serum il-18 levels in acute stroke
- cytokine levels as potential biomarkers for predicting the development of posttraumatic stress symptoms in casualties of accidents
- inflammation causes mood changes through alterations in subgenual cingulate activity and mesolimbic connectivity
- neural origins of human sickness in interoceptive responses to inflammation
- dispositional optimism and stress-induced changes in immunity and negative mood

key: cord-255575-640v00jg authors: binny, r. n.; baker, m. g.; hendy, s. c.; james, a.; lustig, a.; plank, m. j.; ridings, k. m.; steyn, n. title: early intervention is the key to success in covid-19 control date: 2020-10-23 journal: nan doi: 10.1101/2020.10.20.20216457 sha: doc_id: 255575 cord_uid: 640v00jf

new zealand responded to the covid-19 pandemic with a combination of border restrictions and an alert level system that included strict stay-at-home orders. these interventions were successful in containing the outbreak and ultimately eliminating community transmission of covid-19. the timing of interventions is crucial to their success. delaying interventions for too long may both reduce their effectiveness and mean that they need to be maintained for a longer period of time.
here, we use a stochastic branching process model of covid-19 transmission and control to simulate the epidemic trajectory in new zealand and the effect of its interventions during the march-april 2020 outbreak. we use the model to calculate key outcomes, including the peak load on the contact tracing system, the total number of reported covid-19 cases and deaths, and the probability of elimination within a specified time frame. we investigate the sensitivity of these outcomes to variations in the timing of the interventions. we find that a delay to the introduction of alert level 4 controls results in considerably worse outcomes. changes in the timing of border measures have a smaller effect. we conclude that the rapid response in introducing stay-at-home orders was crucial in reducing the number of cases and deaths and in increasing the probability of elimination. an outbreak of covid-19, a novel zoonotic disease caused by the sars-cov-2 virus, was first detected in wuhan, china in november 2019. the virus spread rapidly to other countries, resulting in a pandemic being declared by the world health organisation in march 2020. governmental policy responses to covid-19 outbreaks have varied widely among countries in terms of the nature and stringency of policy interventions, how quickly these interventions were implemented (table 1) (desvars-larrive et al., 2020), and their effectiveness at reducing spread of the virus (flaxman et al., 2020; hsiang et al., 2020; binny et al., 2020a). while it is tempting to judge the success of interventions by comparison across jurisdictions, this assessment may be confounded by local context, as well as by the fact that policy choices can be driven by the severity of initial outbreaks. models of disease spread played an important role in the design and timing of interventions, but they can also be used post hoc to evaluate the effectiveness of those interventions. for example, flaxman et al.
(2020) and brauner et al. (2020) fitted models of disease dynamics to case count and death data in different countries to estimate the effect of specific non-pharmaceutical interventions on the transmission rate of covid-19. in response to the escalating covid-19 pandemic and the outbreak that was establishing in new zealand in march 2020, a number of policy interventions were implemented to mitigate risk at the border and the risk of community transmission. from 15 march 2020 (11.59pm), border restrictions were put in place requiring all international arrivals to 'self-isolate' (home quarantine) for 14 days. on 19 march 2020, the border was closed to everyone except returning citizens and residents. a system of four alert levels was introduced on 21 march, with the alert level initially set at level 2. on 23 march, it was announced that the alert level was increasing to level 3, and that the country would move to alert level 4 as of 11.59pm on 25 march, signalling that new zealand was adopting a decisive covid-19 response that would become known as an elimination strategy (baker et al., 2020a). at the time alert level 4 came into effect, there had been 315 reported (confirmed and probable) cases. alert level 4 stayed in place until 27 april, when restrictions were eased to alert level 3. on 13 may, after 16 days at alert level 3, daily new cases had dropped to 3 and there was a phased easing into alert level 2 (table 2). the seven weeks spent under stringent alert level 3 or 4 restrictions, which included stay-at-home orders (see appendix table s2 for the full list of measures) alongside systems for widespread testing, contact tracing and case isolation, were effective at reducing transmission (reff = 1.8 prior to alert level 4; reff = 0.35 during alert level 4; binny et al., 2020b). daily numbers of new cases declined to between zero and one by mid-may, and the last case of covid-19 associated with the march outbreak was reported on 22 may.
on 8 june, it was estimated that new zealand had very probably eliminated community transmission of covid-19 after 17 consecutive days with no new reported cases. between 22 may and 11 august, the only new cases detected were associated with international arrivals, and during this period these arrivals were required to spend 14 days in government-managed isolation or quarantine facilities (baker et al., 2020b). on 9 august 2020, new zealand reached a milestone of 100 days with no community transmission. for each hypothetical scenario, we simulate a model of covid-19 spread and compare key outcomes, including the peak load on the contact tracing system, the cumulative numbers of cases and deaths, and the probability of elimination predicted by the model. in particular, we assess how important new zealand's decision to move 'hard and early' was for the successful elimination of community transmission during the march-april outbreak. to this end, we compare scenarios with different timings until the start of alert level 4 to see how these choices could have affected the size of the outbreak. while alert level 4 was successful in achieving elimination, the benefits of elimination had to be weighed against the negative impacts of stringent stay-at-home measures, for example job losses, increased rates of domestic violence, disruption to education, and impacts on mental health. if careful border management could have avoided the need for a lockdown or reduced its intensity, this approach may have been preferable. for instance, taiwan's early border closure, travel restrictions and 14-day quarantine for those entering the country have meant that, to date, taiwan has avoided a mass lockdown (summers et al., 2020).
we explore whether introducing border restrictions earlier in new zealand might have been sufficient to eliminate, or to reduce, transmission from international arrivals to the extent that stringent alert level 4 restrictions could have been avoided or less restrictive measures would have been sufficient. compared to other countries, new zealand was very quick to close its border to all except returning citizens and residents (table 1). we explore a scenario where border closure is delayed by 5 days to assess how much larger the outbreak might have been had new zealand been slower to act. finally, we consider a scenario with no al4/3 restrictions to gauge the size of the outbreak that new zealand could have experienced if border restrictions and closure had been the only control measures. in this study, we focus on the timing of interventions; we do not explicitly consider the duration of interventions, although this will be investigated in future work. indeed, the likelihood of elimination was one of the factors taken into account in new zealand government decision-making concerning the duration of alert levels (dpmc, 2020). we simulated a stochastic model of covid-19 spread in new zealand under alternative scenarios in which implementation of border restrictions, border closure and alert level 4 was either delayed or started earlier. in each scenario, we kept the duration of each alert level the same as actually occurred, i.e. 33 days at alert level 4 followed by 16 days at alert level 3. we explored the scenarios listed in table 2. a full description of the model is provided in james et al. (2020a). we obtained case data from esr, containing arrival dates, symptom onset dates, isolation dates and reporting dates for all international cases arriving in new zealand between february and june. border restrictions, border closure and the start of al4 were all implemented at 11.59pm, so we start simulating their effects on the day after their implementation date.
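the core mechanism of a stochastic branching process epidemic model can be sketched in a few lines. this is an illustrative, generation-based simplification with assumed parameters, not the individual-based model of james et al. (2020a): each active case produces a poisson-distributed number of secondary cases with mean reff, and reff drops when control measures come into force.

```python
import numpy as np

rng = np.random.default_rng(42)

def branching_outbreak(n_seed, reff_schedule, max_gens=40):
    """Generation-based branching process: each case produces
    Poisson(reff) secondary cases, with reff read from a per-generation
    schedule (a stand-in for the alert-level changes in the paper)."""
    sizes = [n_seed]
    active = n_seed
    for g in range(max_gens):
        reff = reff_schedule(g)
        # total offspring of `active` cases: sum of Poisson(reff) draws
        active = rng.poisson(reff * active) if active > 0 else 0
        sizes.append(active)
        if active == 0:  # outbreak extinct
            break
    return sizes

# toy schedule: reff = 1.8 for 4 generations (pre-lockdown), then 0.35
# (alert level 4), the values estimated in the source
schedule = lambda g: 1.8 if g < 4 else 0.35
traj = branching_outbreak(10, schedule)
```

with reff well below one after the switch, almost every realisation goes extinct within a few generations, which is what makes the proportion of extinct runs a usable estimate of the probability of elimination.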
for scenarios 0, 1, 2, 3 and 6, the model was seeded with the same number of international cases as were actually reported. in scenarios where border restrictions were implemented on the actual start date (15 march; scenarios 0, 1, 2, 4 and 6), the self-isolation dates of international cases were set to the same isolation dates as were actually reported. in all scenarios, prior to 9 april the modelled effect of self-isolation is to reduce an individual's infectiousness to 65% of their infectiousness when not isolated. this reflects some risk of onward transmission for cases self-isolating at home. after 9 april, the model assumes that all international cases are placed in government-managed isolation and quarantine (miq) facilities and do not contribute to local transmission. we also simulated a poisson-distributed random number of international subclinical cases in proportion to the number of international clinical cases (assuming 1/3 of all cases are subclinical), with arrival and symptom onset dates randomly sampled with replacement from the international case data. we assume that these international subclinical cases are not detected and therefore do not self-isolate, but those arriving after 9 april are placed in miq. to simulate border restrictions starting 5 days early (scenarios 3 and 5a), international cases arriving between the earlier start date and the actual start date (11-15 march, inclusive) were assumed to be self-isolated on their date of arrival. [medrxiv preprint, not certified by peer review; the copyright holder is the author/funder, who has granted medrxiv a license to display the preprint in perpetuity; made available under a cc-by-nc-nd 4.0 international license; this version posted october 23, 2020.]
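the subclinical seeding step described above can be sketched as follows. if 1/3 of all cases are subclinical, there is on average one subclinical case per two clinical cases, so the poisson mean is half the number of clinical seeds; the (arrival_day, onset_day) tuples below are hypothetical stand-ins for the esr case data.

```python
import numpy as np

rng = np.random.default_rng(0)

def seed_subclinical(clinical_cases, rng):
    """Seed a Poisson number of undetected subclinical arrivals.
    With a 2:1 clinical-to-subclinical split, the Poisson mean is
    len(clinical_cases)/2; arrival and onset dates are resampled with
    replacement from the clinical case data, as in the paper."""
    n = rng.poisson(len(clinical_cases) / 2)
    if n == 0:
        return []
    idx = rng.integers(0, len(clinical_cases), size=n)
    return [clinical_cases[i] for i in idx]

# hypothetical (arrival_day, onset_day) pairs standing in for the esr data
clinical = [(0, 3), (1, 5), (2, 4), (2, 6), (3, 7), (4, 9)]
subclinical = seed_subclinical(clinical, rng)
```

because subclinical seeds are resampled from the clinical records, they inherit realistic arrival-to-onset delays without any extra distributional assumptions.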
to simulate a 5-day delay to border closure (scenarios 4 and 5b), we delayed the arrival dates (and associated symptom onset, reporting and isolation dates) of international seed cases arriving after 19 march by 5 days. we then allowed for new international cases arriving over these 5 days (e.g. additional non-residents that may have chosen to travel had the border remained open for longer) by seeding an additional poisson-distributed random number of international cases from 20 march to 24 march, with an average daily number of seeded cases equal to the actual average daily number of international cases arriving during the week prior to 19 march (33 international cases per day). these additional seeded cases were assumed to self-isolate on arrival, and their delays from arrival to symptom onset and from arrival to reporting were randomly sampled with replacement from the corresponding delays in the actual international case data. we did not attempt to simulate scenarios with delayed border restrictions or earlier border closure because these would have required additional modelling assumptions about isolation dates of international arrivals and about the reduction in the volume of international arrivals resulting from border closure. model predictions would have been highly sensitive to these assumptions and, without data available to validate them, this would introduce additional model uncertainty. for each scenario, we assessed the following key measures describing the dynamics of a covid-19 outbreak:
1. the maximum contact tracing load, calculated as the maximum number of daily new reported cases and the date on which this occurred.
2. the number of daily new reported cases at the end of alert level 4.
3. the cumulative number of reported cases and the cumulative number of deaths at the end of the seven-week period of alert level 3-4 restrictions.
4. the probability of elimination, p(elim), 5 weeks after the end of al3.
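the delayed-closure seeding described at the start of this passage can be sketched like this. the day numbers are simple march day-of-month indices and the case tuples are hypothetical, but the mechanics follow the description: shift post-closure arrivals by 5 days, then add poisson(33) new self-isolating arrivals on each extra open day, resampling arrival-to-onset delays with replacement.

```python
import numpy as np

rng = np.random.default_rng(1)

CLOSURE_DAY = 19          # border closed 19 march (day-of-month index)
DELAY = 5                 # scenario: closure delayed 5 days
MEAN_DAILY_ARRIVALS = 33  # avg daily international cases in week before closure

def delayed_closure_seeds(cases, rng):
    """Shift arrivals after the original closure date by DELAY days, then
    seed extra Poisson(33)/day arrivals over the extra open days (20-24
    march), with arrival-to-onset delays resampled from the case data."""
    shifted = [(a + DELAY, o + DELAY) if a > CLOSURE_DAY else (a, o)
               for a, o in cases]
    extra = []
    for day in range(CLOSURE_DAY + 1, CLOSURE_DAY + 1 + DELAY):
        for _ in range(rng.poisson(MEAN_DAILY_ARRIVALS)):
            a0, o0 = cases[rng.integers(len(cases))]   # resample a delay
            extra.append((day, day + (o0 - a0)))
    return shifted + extra

# hypothetical (arrival_day, onset_day) pairs
cases = [(14, 17), (18, 22), (20, 24), (22, 27)]
seeds = delayed_closure_seeds(cases, rng)
```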
we performed 5000 realisations of the model and report the average value of each key measure, as well as the interval within which 90% of simulation results were contained (in square brackets throughout). here, we define elimination as there being no active cases (we assume a case remains 'active' for 30 days after date of exposure) that could contribute to future community transmission. this definition excludes cases in miq; that is, it excludes international arrivals after 9 april 2020. in the model, p(elim) was calculated as the proportion of all model realisations that resulted in elimination. simulations were run using estimates of the reproduction number reff that provided the best fit to actual data: for the period prior to lockdown, reff = 1.8; during alert level 4, reff = 0.35. under alert levels 3, 2 and 1, which followed the lockdown, the daily numbers of new cases were too low to obtain reliable estimates of the effective reproduction number reff. instead, we simulated the model with assumed values of reff = 0.95, 1.7 and 2.4, respectively. reff = 2.4 is in line with estimates reported in plank et al. (2020b) for the pre-lockdown period of new zealand's august-september outbreak, when al1 restrictions were in place. al2's reff would likely be lower than al1's but greater than one, due to relatively high activity levels and contact rates as stay-at-home orders are lifted and public venues, businesses and schools re-open; reff = 1.7 is in the range of estimated values for the pre-lockdown period of the march-april outbreak given in plank et al. (2020b). the reff for al3 was chosen to be less than 1; however, we tested the sensitivity of our results to using a value greater than one. for the scenario with no stringent alert level restrictions (scenario 6), we simulated the model using reff = 1.8 for the entire period (i.e. the same value as was used in all scenarios for the period prior to al4).
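the reporting convention used above (mean over 5000 realisations with the central 90% range in square brackets, and p(elim) as the proportion of runs with no active cases) can be reproduced directly. the outcome vectors below are synthetic stand-ins, not output from the actual simulator.

```python
import numpy as np

rng = np.random.default_rng(7)

n_real = 5000
# synthetic per-realisation outcomes standing in for the simulator's output
final_cases = rng.poisson(1500, size=n_real)   # cumulative reported cases
eliminated = rng.random(n_real) < 0.66         # no active cases at check date?

p_elim = eliminated.mean()                     # proportion of runs eliminated
mean_cases = final_cases.mean()
lo, hi = np.percentile(final_cases, [5, 95])   # central 90% of realisations

summary = f"{mean_cases:.0f} [{lo:.0f}, {hi:.0f}], p(elim) = {p_elim:.2f}"
```

the 5th and 95th percentiles bracket the middle 90% of realisations, matching the square-bracket intervals quoted throughout the results.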
we investigated the sensitivity of our results to varying the length of delay for the start of al4, to introducing border restrictions 10 days early (cf. 5 days early in scenario 3), and to other choices of reff under al3. to check that the model could accurately replicate the outbreak, we first simulated our model with border restrictions, border closure and al4 implemented on the dates they actually occurred. the predicted dynamics of daily new reported cases were a very good visual match to observed daily case data (fig. 1), and predicted key measures showed good agreement with the values that were actually observed (table 3, bold text). after moving into al4, the model prediction and the actual number of daily new reported cases both levelled off at 70-80 for around one week before case numbers started to decline (fig. 1). in the actual case data, the maximum of 84 new cases per day was observed at the start of this flat-topped peak, while our model predicted a similar maximum (80 [67, 99] new cases per day) occurring 6 days later. by the end of al3, the model predicted cumulative totals similar to the 1502 cases and 21 deaths actually reported. five weeks after al3 restrictions were relaxed, elimination of community transmission of covid-19 was achieved in 66% of model simulations, giving p(elim) = 0.66 (table 3). in the following scenarios with alternative timings of interventions, we use scenario 0 as a baseline for comparing key measures.
under a scenario where al4 was implemented 5 days earlier (only one day after border closure), the model predicts slightly lower values for most key measures than were actually observed: daily new cases peaked at a lower level of 69 [61, 79] cases around 26 march, and at the end of al4 had dropped to a similar level of 4 new cases per day as was actually observed (fig. 1). by the end of the 7 weeks of al4/3, it predicts approximately 500 fewer cases in total and 10 fewer deaths (table 3; scenario 1 cf. scenario 0). however, this estimate should be taken with caution because of the small numbers of daily cases and fine-scale variations involved: for instance, whether an outbreak occurred in an aged care facility or not. five weeks after al3, the probability of elimination was 63%, slightly lower than in scenario 0. this counter-intuitive result is due to the presence of an international case in the data that had an arrival date prior to the start of al4 (25 march) but a much later symptom onset date near the end of al4. in scenario 0, when international cases are seeded in the model, this individual's peak infectiousness occurs during al4. however, in scenario 1, the earlier start to al4 means that the individual is instead most infectious during al3. with the higher reff in al3, this individual infects more people, on average, in this scenario than in scenario 0. similarly, any simulated subclinical cases with the same arrival and symptom onset dates (drawn from the international case data) will also be most infectious during al3. these subclinical cases do not appear in the numbers of reported cases but will reduce the probability of elimination. if this international case outlier is excluded from the data, the model predicts a very similar probability of elimination in both scenarios. delaying the move into alert level 4 would have led to a higher peak in daily new cases, and greater cumulative totals of cases and deaths.
for a delay of 20 days (scenario 2c), the outbreak would have reached a considerably higher maximum of close to 500 daily new cases (cf. 80 cases in scenario 0; table 3, fig. 1 and fig. 2). this number would certainly have overwhelmed the contact tracing system, which was already pushed close to capacity in places by the 70-80 daily new cases in late march (verrall, 2020). after a week in al4, case numbers would start to decline, and by the end of the 4 weeks in al4 daily new cases would still have been as high as 34 [22, 49] (close to the actual number of domestic daily reported cases when new zealand went into al4 on 25 march). by the end of the 7 week period of stringent restrictions (i.e. the end of al3), the incidence would have dropped to approximately 4 new cases per day (fig. 1), but the cumulative total could have climbed to 11,534 [8854, 15048] reported cases and 200 [147, 266] deaths, substantially more than scenario 0 and the 1,502 cases and 21 deaths actually reported on 13 may. additionally, the probability of elimination 5 weeks after the end of al3 was only 7%, much lower than in scenario 0. we next investigated a scenario where border restrictions were put in place 5 days earlier, but border closure and al4 were started on their actual dates. border restrictions would therefore have been in place for 9 days (cf. the actual 4 days) before the border was closed. our model predicted this would have had very little impact on the initial trajectory (fig. 1) or the eventual size of the outbreak, with values for all key measures very similar to those in scenario 0 (table 3).
this finding suggests that key measures are more sensitive to varying the timing of al4 than to the timing of border restrictions. in reality, out of the 563 international cases who arrived prior to the start of miq and could have contributed to local transmission, only 78 (14%) arrived before border restrictions were implemented on 15 march and were not required to self-isolate. furthermore, out of these 78 cases, 52 arrived between 10 and 15 march, and 19 of these were reported to have voluntarily self-isolated immediately on arrival (the model simulates these 19 cases as being self-isolated on arrival in all scenarios). therefore, under this scenario, only an additional 33 international cases have their infectiousness reduced by early self-isolation requirements. this reduction is not sufficient to prevent an outbreak, nor does it reduce transmission to an extent where al4/3 restrictions would not have been necessary to control the outbreak. under a scenario where closure of the border (to all except returning residents and citizens) was delayed by 5 days (24 march; 9 days after border restrictions and 1 day before al4), our model predicted slightly worse outcomes, on average, for key measures than were predicted in scenario 0. however, due to the stochasticity of individual simulations, the range of key measures always overlapped the scenario 0 values and actual values, suggesting that a 5-day delay to border closure alone would not have made a significant difference. a delayed border closure did, however, have a greater impact on the probability of elimination 5 weeks after al3 restrictions were relaxed, which was only 55%, compared to the 66% chance in scenario 0. this reduced probability of elimination is partly due to the additional international clinical cases (captured in the key measures of reported cases) and international subclinical cases (not captured in reported cases) arriving prior to the delayed border closure.
it is also likely affected by the international case outlier with the pre-miq arrival date and late onset date, discussed above. if border restrictions were implemented 5 days early and al4 came into effect 5 days early (scenario 5a), this would have led to outcomes very similar to those predicted in scenario 1 (where only al4 started early) (table 3). this again suggests that results are more sensitive to changes in the timing of the start of al4 than to an earlier start to border restrictions. in contrast, if border closure and the start of al4 had both been delayed by 5 days (scenario 5b), outcomes would have been worse than with a delay to only one of these interventions (table 3). by the end of the 7 week period in al4/3, there would have been close to 1,050 more cases in total and nearly 20 more deaths than in scenario 0 (table 3). the probability of elimination 5 weeks after al3 would also have been reduced, to 53%. finally, we explored the impact of only having border restrictions and border closure in place, without implementing al4/3. under this scenario, the international cases who arrived prior to 9 april and were either in self-isolation or not isolated have a chance of seeding an outbreak which, without al4/3 measures to reduce reff below one, leads to community transmission and a large uncontrolled outbreak. new zealand would have seen close to 1127 [841, 1492] new cases per day by 27 april, the date on which new zealand moved from al4 to al3 in reality. by 13 may (the date on which new zealand moved from al3 to al2), there could have been over 60,000 cumulative reported cases and over 1100 deaths.
new cases would have continued to increase, reaching a peak of 47,592 [47,240, 47,962] daily new cases on 14 june (table 3). by the end of the outbreak, around october 2020 on average, there could have been over 1.81 million reported cases in total and 31,905 [31,606, 32,204] deaths. no simulations resulted in elimination by 18 june (5 weeks after the end of the actual al3), indicating a 0% chance of covid-19 having been eliminated by this time, compared to the 66% chance on this date in scenario 0. we assessed the effect that different lengths of delay (in days) until the start of al4 (fig. 2) had on key measures: maximum load on the contact tracing system; cumulative total reported cases; total infected cases (including both clinical and subclinical); total deaths at the end of al3; and probability of elimination 5 weeks after the end of al3. measures of numbers of cases and deaths increased exponentially with increasing delay to al4, emphasising the importance of acting quickly to reduce the risk of large outbreaks arising. the probability of elimination decreased linearly with increasing delays to al4. counterintuitively, earlier starts to al4 slightly reduced the probability of elimination; again, this is caused by the international case outlier discussed previously. if the outlier is excluded from the international case data, the predicted probability of elimination is insensitive to al4 starting 1 to 5 days early. introducing border restrictions ten days earlier still results in an outbreak and gives very similar results to scenario 3 (5 days early), with a maximum of 77 [65, 94] new daily cases, 1385 [1166, 1706] cumulative reported cases at the end of al3, p(elim) = 0.68, and other measures the same as in scenario 3.
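the roughly exponential dependence of outbreak size on the delay to an intervention can be illustrated with a toy calculation; this is a deterministic sketch with illustrative seed and growth-rate values, not the authors' stochastic model:

```python
import math

def cases_at_intervention(i0, r_growth, delay_days):
    """Daily new cases when the intervention starts, assuming
    unchecked exponential growth at rate r_growth per day."""
    return i0 * math.exp(r_growth * delay_days)

# illustrative numbers only: ~5 daily cases seeding the outbreak,
# growth rate ~0.2/day (doubling time ~3.5 days)
i0, r = 5.0, 0.2
for delay in (0, 5, 10, 20):
    peak = cases_at_intervention(i0, r, delay)
    print(f"delay {delay:2d} d -> ~{peak:7.0f} daily cases at intervention")
```

each extra day of delay multiplies the outbreak by a constant factor exp(r), which is why case and death totals grow exponentially in the delay while the chance of later elimination falls.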
with border restrictions ten days earlier, the only difference compared to scenario 3 is that an additional 16 cases who arrived between 5 and 10 march have their infectiousness reduced (a further 2 cases arriving in this 5-day period were voluntarily self-isolated on arrival in reality, so are simulated with self-isolation on arrival in all scenarios). this restriction has little impact on the overall contribution to local transmission by all 563 international cases who arrived prior to the start of miq. we also tested the sensitivity of all key measures to using different values of reff under al3 (table s1; reff = 1.1, 0.95 and 0.7). different choices of al3 reff had very little effect on predicted cumulative totals of cases at the end of al3 and no effect on total deaths at the end of al3. however, the predicted probability of elimination was sensitive to varying al3 reff: for all scenarios, assuming a lower reff = 0.7 (more effective al3) gave a p(elim) approximately 0.14 higher than with reff = 0.95, while a higher reff = 1.1 (less effective al3) reduced p(elim) by approximately 0.07.
new zealand's decision to act quickly and to implement stringent restrictions to reduce sars-cov-2 transmission meant that, to date, new zealand has experienced amongst the lowest mortality rates reported worldwide (kontis et al., 2020). on 8 june 2020, nearly 11 weeks after al4 was initiated, new zealand declared elimination of covid-19. over the course of the march-april outbreak, a total of 1504 cases and 22 deaths were reported before elimination was achieved. our results suggest that the timing of alert level 4 is a much stronger driver of reductions in daily new cases than the timings of border restrictions and closure. this finding makes sense because the effect of al4 in the model is to greatly reduce reff, to 0.35, for all cases (domestic and international arrivals), while border restrictions only reduce the delay until case isolation of international cases (i.e. international cases have their infectiousness reduced earlier) and border closure only reduces the daily numbers of international cases. of the scenarios we considered, an earlier start to al4 by 5 days resulted in the greatest reduction in numbers of cases and deaths, with approximately 500 fewer cases in total and 10 fewer deaths. however, in reality, the rapid escalation of the covid-19 situation in mid-march may have made an earlier start to al4 impractical and would have allowed less time to prepare for ongoing provision of essential services under al4.
introducing border restrictions requiring 14-day self-isolation for international arrivals earlier than 15 march would have been unlikely to have much impact on the trajectory of new zealand's march-april outbreak, unless such measures were started prior to the first case on 26 february and used methods that were particularly effective (notably full miq). the 563 international cases arriving between 15 march and 9 april were already required to self-isolate; had border restrictions been in place prior to the arrival of new zealand's first case, this would have required self-isolation for, at most, an additional 56 international cases (22 cases who arrived prior to 15 march self-isolated voluntarily immediately on their arrival). in mid-march, there was a lower global prevalence of covid-19 and between 2 and 12 cases arrived at the border each day in the week prior to 15 march. with a higher global prevalence and correspondingly higher numbers of international cases arriving per day, earlier implementation of border restrictions may have had a greater impact than our model predicted for this outbreak. self-isolation is less stringent than miq and relies heavily on public compliance. without additional safety nets, such as official monitoring and support for people who are self-isolating, there is a greater risk of the virus spreading into the community than from miq facilities. for example, risk of non-compliance may be higher for individuals who are concerned about loss of income (bodas & peleg, 2020). without alert level restrictions in place to require strong community-wide social distancing, any infected individuals who do not self-isolate effectively are more likely to spark an outbreak. self-isolation restrictions for international arrivals can therefore reduce the frequency of cases leaking into the community but are unlikely to be sufficient to prevent an outbreak entirely, unless additional measures are also put in place.
delaying border closure by 5 days could have led to a slightly larger outbreak, but not as large as if al4 had been delayed by 5 days. the full effect on local transmission potential of the additional international cases expected under a delayed border closure was partially dampened because international cases arriving after 9 april were still placed in miq and assumed not to contribute to community transmission. if the timing of this miq policy had also been delayed, a larger outbreak may have occurred, but we did not model such a scenario here. if the start of al4 had been delayed by 20 days, our results suggest new zealand could have experienced over 11,500 reported cases and 200 deaths, reducing the chance of elimination to only 7%. as with other severe viral diseases, the infection fatality risk for covid-19 is greater for māori and pacific peoples (close to 50% higher for māori than for non-māori; wilson et al., 2012; verrall et al., 2010). therefore, in scenarios resulting in significantly higher numbers of covid-19-related deaths (e.g. scenario 2c), māori and pacific communities would likely have been disproportionately affected; however, a population-structured model would be required to assess this consequence in detail. delaying al4 would also have increased the chance of a longer lockdown period being required to reduce daily new case numbers to low levels. with a 20 day delay to al4, new zealand could still have been experiencing close to 35 new reported cases per day at the end of al4. while, in reality, a 33-day period in al4 was sufficient to reduce daily new cases to below 10 and the government announced an easing to al3, the higher case numbers predicted for a delayed start to al4 may have motivated an extension to the lockdown to allow more time for cases to drop below a safe threshold.
in terms of the key measures we considered, the counterfactual scenario with no al4/3 restrictions (scenario 6) had disastrous outcomes, including close to 2 million reported cases and tens of thousands of deaths. our model uses a value of reff = 0.35 during al4, which was estimated by binny et al. (2020b) by fitting the model to data, and is consistent with a later estimate of reff from reconstructions of the epidemiological tree. this is a relatively low value of reff compared to other countries that implemented interventions roughly equivalent to al4 (flaxman et al., 2020; binny et al., 2020a). a combination of highly effective social distancing in al4, fast contact tracing, effective case isolation, and the fact that the outbreak occurred at the end of the southern hemisphere summer likely contributed to this low reff. for scenarios where the load on contact tracing exceeded system capacity (e.g. scenario 2c, with a maximum of 500 daily new cases), this would likely have resulted in longer delays to isolation of cases and a higher reff. we did not attempt to model this potential feedback effect, and so our results for scenarios where contact tracing system capacity is exceeded may underestimate the size of the outbreak. we also did not attempt to directly model the burden of covid-19 on the healthcare system (e.g. numbers of cases requiring hospitalisation or intensive care), or the effects of an overwhelmed healthcare system. once numbers of daily new cases requiring hospitalisation or icu admission exceed new zealand's healthcare system capacity, this could result in increased fatality rates and considerably more deaths. these effects would have been most pronounced under the scenario with no al4/3 restrictions. it is important to note that, while we report average values for outbreak dynamics, each individual realisation of the stochastic model can deviate (sometimes widely) from the average behaviour.
when case numbers are small, as they were in new zealand, the predicted dynamics are particularly sensitive to fine-scale variations. while reff < 1 means that an outbreak will eventually die out, on average, it is still possible for a small number of cases to spark an outbreak in a particular stochastic realisation if interventions are relaxed too soon. conversely, when case numbers are small, an outbreak can still die out by chance even when reff > 1. it is therefore important to account for this stochasticity when weighing the effectiveness and risks of different intervention strategies, for example by considering the probability of elimination. on 18 june, five weeks after al3 restrictions were relaxed, the probability that community transmission of covid-19 had been eliminated in model simulations was estimated to be 66% in scenario 0. this estimate is calculated by finding the proportion of stochastic realisations that resulted in elimination. in reality, as the outbreak died out and more days with zero new cases were observed, this provided additional information about which trajectory new zealand was most likely experiencing. each additional consecutive day with no new reported cases reduced the likelihood of being on an upward trajectory. making use of this information meant that the actual probability of elimination on 18 june was estimated to be 95%, higher than in scenario 0. however, this estimate required up-to-date information about recent case numbers. the results reported in this paper compare average outcomes under different scenarios, which is appropriate for evaluating the effect of alternative actions and guiding future decision making.
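the proportion-of-realisations calculation, and the fact that a small outbreak can die out by chance even when reff > 1, can be sketched with a minimal poisson branching process; this is a stand-in for the full stochastic model, and all parameter values here are illustrative:

```python
import math
import random

def p_elimination(n_seed, reff, n_sims=1000, max_gens=40, cap=2000, seed=1):
    """Fraction of stochastic realisations in which transmission dies out.
    Each active case infects Poisson(reff) others per generation; a
    realisation counts as eliminated if no active cases remain before
    the cumulative case cap is reached."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method; adequate for small lam
        limit = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= limit:
                return k - 1

    eliminated = 0
    for _ in range(n_sims):
        active, total = n_seed, n_seed
        for _ in range(max_gens):
            if active == 0 or total > cap:
                break
            active = sum(poisson(reff) for _ in range(active))
            total += active
        if active == 0:
            eliminated += 1
    return eliminated / n_sims

# subcritical transmission: elimination is (eventually) almost certain
print(p_elimination(5, 0.7))
# slightly supercritical: a handful of seed cases can still fizzle out
print(p_elimination(5, 1.1))
```

with reff = 1.1 and five seed cases, a substantial fraction of realisations still go extinct by chance, which is exactly why the paper reports p(elim) rather than a single deterministic trajectory.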
in general, for the other scenarios we explored, bringing in earlier interventions had very little impact on probability of elimination, while delaying border closure or al4 reduced the chance of elimination. our results are important for reflecting on the effectiveness of intervention timing in new zealand's covid-19 response, relative to alternative scenarios, to help guide future response strategies. early intervention was critical to the successful control of new zealand's march-april outbreak. for modelling future disease outbreaks, epidemiological parameters should be updated to reflect changes in national pandemic preparedness (e.g. improved policy and response plans) and behavioural changes influencing the dynamics of future outbreaks. for instance, the degree of compliance with alert level restrictions in future may differ dramatically from the march-april outbreak, resulting in different values of reff. further work is needed to explore the social dynamics affecting transmission and the effectiveness of interventions, for instance whether wearing masks in public spaces becomes more common, or whether more people will choose to work from home or avoid travel if a suspected new outbreak is reported. the key measures of outbreak dynamics assessed here should be considered alongside other measures of economic, social and health impacts (e.g. job losses, consumer spending, impacts for mental health, rates of domestic violence or disrupted education). particular attention needs to be given to identifying vulnerable groups who may experience inequitable impacts so that future policies can be tailored to support these groups. at the end of al3, health benefits (e.g. number of cases and deaths avoided) differed between scenarios. 
for cost-benefit analyses, age-dependent morbidity and mortality rates of covid-19 (kang & jung, 2020) allow numbers of cases and deaths to be quantified in terms of disability-adjusted life years (dalys) avoided (or quality-adjusted life years gained), which can (with obvious issues) be converted to monetary units to facilitate comparison with economic costs. because the duration spent under al4/al3 was fixed at 7 weeks for scenarios 0-5, the short-term economic costs of the different scenarios would have been similar, so we did not convert health benefits into dalys here. after the end of al3, benefits and costs would differ between scenarios depending on the value of reff for al1-2 and on whether or not elimination was achieved. increased levels of activity and contact rates under al1-2 mean that reff is very likely to have been greater than one. in scenarios with lower probabilities of elimination, it is more likely that new zealand would continue to experience new cases while under al1-2 which, with reff > 1, would likely lead to another outbreak and require a second lockdown (with its associated costs). conversely, scenarios with higher probabilities of elimination mean there is a greater chance of the outbreak dying out entirely and less risk of a second lockdown being required. future work could consider the costs and benefits of alternative scenarios where the duration of time spent in al4 and al3 is dictated by the need to achieve a certain threshold probability of elimination.
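the daly accounting mentioned above can be sketched as follows; the life-years-lost and disability-weight values are placeholder assumptions for illustration, not estimates used in the paper:

```python
def dalys(cases, deaths, yll_per_death=15.0,
          disability_weight=0.13, illness_years=0.04):
    """Crude disability-adjusted life years: years of life lost to
    deaths plus years lived with disability during illness.
    All parameter values here are illustrative assumptions."""
    yll = deaths * yll_per_death           # years of life lost
    yld = cases * disability_weight * illness_years  # years lived with disability
    return yll + yld

# comparing two scenario-like totals (numbers from the text, weights assumed)
print(dalys(cases=1_502, deaths=21))    # scenario-0-like totals
print(dalys(cases=11_534, deaths=200))  # 20-day-delay-like totals
```

because deaths dominate the daly total under any plausible weights, the ranking of scenarios by dalys avoided tracks the ranking by deaths, which is consistent with the paper's decision not to convert benefits when lockdown durations (and hence short-term costs) were held fixed.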
references
new zealand's elimination strategy for the covid-19 pandemic and what is required to make it work
successful elimination of covid-19 transmission in new zealand
effect of alert level 4 on effective reproduction number: review of international covid-19 cases
effective reproduction number for covid-19 in aotearoa new zealand. medrxiv preprint
probability of elimination for covid-19 in aotearoa new zealand
self-isolation compliance in the covid-19 era influenced by compensation: findings from a recent survey in israel
the effectiveness of eight nonpharmaceutical interventions against covid-19 in 41 countries
cab-20-sub-0270 review of covid-19 alert level 2
cccsl: complexity science hub covid-19 version 2.0
estimating the effects of non-pharmaceutical interventions on covid-19 in europe
the effect of large-scale anti-contagion policies on the coronavirus (covid-19) pandemic. medrxiv preprint
a structured model for covid-19 spread: modelling age and healthcare inequities
successful contact tracing systems for covid-19 rely on effective quarantine and isolation
model-free estimation of covid-19 transmission dynamics from a complete outbreak
age-related morbidity and mortality among patients with covid-19
magnitude, demographics and dynamics of the effect of the first wave of the covid-19 pandemic on all-cause mortality in 21 industrialized countries
a stochastic model for covid-19 spread and the effects of alert level 4 in aotearoa new zealand. medrxiv preprint
effective reproduction number and likelihood of cases outside auckland
estimated inequities in covid-19 infection fatality rates by ethnicity for aotearoa new zealand. medrxiv preprint
potential lessons from the taiwan and new zealand health responses to the covid-19 pandemic. the lancet regional health - western pacific
rapid audit of contact tracing for covid-19 in new zealand. ministry of health, new zealand.
the authors acknowledge the support of statsnz, esr, and the ministry of health in supplying data in support of this work. we are grateful to samik datta, nigel french, markus luczak-roesch, melissa mcleod, anja mizdrak, fraser morgan and matt parry for comments on an earlier version of this manuscript. this work was funded by the ministry of business, innovation and employment and te pūnaha matatini, new zealand's centre of research excellence in complex systems.
table s1: key measures from alternative scenarios of early or delayed implementation of policy interventions: the maximum number of daily new cases, the date on which the peak occurs, the number of daily new cases at the end of the alert level 4 period, the cumulative number of cases 7 weeks after the start of alert level 4, and the total number of deaths 7 weeks after the start of alert level 4. for each measure, except p(elim), the mean value from 5000 simulations is reported alongside the interval range, in parentheses, in which 90% of simulation results are contained.
• stay at home except for essential personal movement.
• safe recreational activity allowed in local area.
• travel is severely limited.
• all gatherings cancelled and public venues closed.
• businesses closed, except essential services and lifeline utilities.
• educational facilities closed.
• rationing of supplies and requisitioning of facilities possible.
• reprioritisation of healthcare services.
• stay at home except for essential personal movement, including going to work, school or for local recreation.
• work from home if possible.
• low risk local recreation activities allowed.
• inter-regional travel highly limited (e.g. allowed for essential workers, with limited exemptions for others).
• public venues closed (e.g.
libraries, museums, cinemas, food courts, gyms, pools, playgrounds, markets).
• gatherings of up to 10 people allowed but only for wedding services, funerals and tangihanga, with physical distancing and public health measures maintained.
• two-metre physical distancing outside home; one metre in controlled environments, e.g. schools and workplaces.
• people must stay within their immediate household bubble, but can expand this to reconnect with close family/whānau, or bring in caregivers, or support isolated people. this extended bubble should remain exclusive.
• businesses can open premises, but cannot physically interact with customers.
• schools (years 1 to 10) and early childhood education centres can safely open but with limited capacity. children should learn from home if possible.
• healthcare services use virtual, non-contact consultations where possible.
• people at high risk of severe illness (older people or those with pre-existing medical conditions) are encouraged to stay at home and take extra precautions when leaving home. they may choose to work.
key: cord-121777-3zrnz9nc authors: qian, xuelin; fu, huazhu; shi, weiya; chen, tao; fu, yanwei; shan, fei; xue, xiangyang title: m3lung-sys: a deep learning system for multi-class lung pneumonia screening from ct imaging date: 2020-10-07 journal: nan doi: nan sha: doc_id: 121777 cord_uid: 3zrnz9nc
to counter the outbreak of covid-19, the accurate diagnosis of suspected cases plays a crucial role in timely quarantine, medical treatment, and preventing the spread of the pandemic. considering the limited training cases and resources (e.g., time and budget), we propose a multi-task multi-slice deep learning system (m3lung-sys) for multi-class lung pneumonia screening from ct imaging, which consists of only two 2d cnn networks, i.e., slice- and patient-level classification networks.
the former aims to learn feature representations from abundant ct slices instead of limited ct volumes, and, for overall pneumonia screening, the latter recovers the temporal information by feature refinement and aggregation between different slices. in addition to distinguishing covid-19 from healthy, h1n1, and cap cases, our m3lung-sys is also able to locate the areas of relevant lesions, without any pixel-level annotation. to further demonstrate the effectiveness of our model, we conduct extensive experiments on a chest ct imaging dataset with a total of 734 patients (251 healthy people, 245 covid-19 patients, 105 h1n1 patients, and 133 cap patients). the quantitative results across a range of metrics indicate the superiority of our proposed model on both slice- and patient-level classification tasks. more importantly, the generated lesion location maps make our system interpretable and more valuable to clinicians. coronavirus disease 2019 (covid-19), caused by a novel coronavirus (sars-cov-2, previously known as 2019-ncov), is highly contagious and has become increasingly prevalent worldwide. the disease may lead to acute respiratory distress or multiple organ failure in severe cases [1], [2]. as of june 28th, 2020, 495,760 of 9,843,073 confirmed cases across countries have led to death, according to who statistics. thus, how to accurately and efficiently diagnose covid-19 is of vital importance not only for the timely treatment of patients, but also for the distribution and management of hospital resources during the outbreak. the standard diagnostic method being used is real-time polymerase chain reaction (rt-pcr), which detects viral nucleotides from specimens obtained by oropharyngeal swab, nasopharyngeal swab, bronchoalveolar lavage, or tracheal aspirate [3]. early reports of rt-pcr sensitivity vary considerably, ranging from 42% to 71%, and an initially negative rt-pcr result may convert into covid-19 after up to four days [4].
recent studies have shown that typical computed tomography (ct) findings of covid-19 include bilateral pulmonary parenchymal ground-glass and consolidative pulmonary opacities, with a peripheral lung distribution [5], [6]. in contrast to rt-pcr, chest ct scans have demonstrated about 56∼98% sensitivity in detecting covid-19 at initial manifestation and can be helpful in rectifying false negatives obtained from rt-pcr during early stages of disease development [7], [8]. however, ct scans also share several similar visual manifestations between covid-19 and other types of pneumonia, thus making it difficult and time-consuming for doctors to differentiate among a mass of cases, resulting in about 25∼53% specificity [4], [7]. among them, cap (community-acquired pneumonia) and influenza pneumonia are the most common types of pneumonia, as shown in figure 1; therefore, it is essential to differentiate covid-19 pneumonia from these. recently, liu et al. compared the chest ct characteristics of covid-19 pneumonia with influenza pneumonia, and found that covid-19 pneumonia was more likely to have a peripheral distribution, with the absence of nodules and tree-in-bud signs [9]. lobar or segmental consolidation with or without cavitation is common in cap [10]. although it is easy to identify these typical lesions, the ct features of covid-19, h1n1 and cap pneumonia are very diverse. in the past few decades, artificial intelligence using deep learning (dl) technology has achieved remarkable progress in various computer vision tasks [11]–[15]. recently, the superiority of dl has made it widely favored in medical image analysis. specifically, several studies focus on classifying different diseases, such as autism spectrum disorder [16], [17] or alzheimer's disease in the brain [18]–[20]; breast cancers [21]–[23]; diabetic retinopathy and glaucoma in the eye [24]–[26]; and lung cancer [27], [28] or pneumonia [29], [30] in the chest.
some efforts have also been made to partition images from different modalities (e.g., ct, x-ray, mri) into meaningful segments [31]–[33], including pathology, organs or other biological structures. existing studies [30], [34], [35] have demonstrated the promising performance of applying deep learning technology to covid-19 diagnosis. however, as initial studies, several limitations have emerged from these works. first of all, [36]–[39] utilized pixel-wise annotations for segmentation, which require taxing manual labeling. this is unrealistic in practice, especially in the event of an infectious disease pandemic. second, performing diagnosis or risk assessment on only slice-level ct images [34], [40]–[45] is of limited value to clinicians. since a volumetric ct exam normally includes hundreds of slices, it is still inconvenient for clinicians to go through the predicted result of each slice one by one. although 3d convolutional neural networks (3d cnns) are one option for tackling these limitations, their high hardware requirements, computational costs (e.g., gpus) and training time make them inflexible for applications [43], [46], [47]. to this end, we propose a multi-task multi-slice deep learning system (m3lung-sys) for multi-class lung pneumonia screening, which can jointly diagnose and locate covid-19 from chest ct images. using only category-labeled information, our system can successfully distinguish covid-19 from h1n1, cap and healthy cases, and automatically locate relevant lesions on ct images (e.g., ggo) for better interpretability, which is more important for assisting clinicians in practice. to facilitate the above objective, two networks using a 2d cnn are devised in our system. the first one is a slice-level classification network, which acts like a radiologist, diagnosing from coarse (normal or abnormal) to fine (disease categories) for every single ct slice.
as the name suggests, it can ignore the temporal information among ct volumes and focus on the spatial information among pixels in each slice. meanwhile, the learned spatial features can be further leveraged to locate the abnormalities without any annotation. to recover the temporal information and provide more value to clinicians, we introduce a novel patient-level classification network, using specifically designed refinement and aggregation modules, for diagnosis from ct volumes. taking advantage of the learned spatial features, the patient-level classification network can be trained easily and efficiently. in summary, the contributions of this paper are four-fold: 1) we propose an m3lung-sys for multi-class lung pneumonia screening from ct images. specifically, it can distinguish covid-19 from healthy, h1n1 and cap cases on either a single ct slice or ct volumes of patients. 2) in addition to predicting the probability of pneumonia assessment, our m3lung-sys is able to simultaneously output the lesion localization maps for each ct slice, which is valuable to clinicians for diagnosis, allowing them to understand why our system gives a particular prediction, rather than simply being fed a statistic. 3) compared with 3d cnn based approaches [47], [48], our proposed system can achieve competitive performance at a lower cost and without requiring large-scale training data. 4) extensive experiments are conducted on a multi-class pneumonia dataset with 251 healthy people, 245 covid-19 patients, 105 h1n1 patients and 133 cap patients. we achieve a high accuracy of 95.21% for correctly screening the multi-class pneumonia testing cases. the quantitative and qualitative results demonstrate that our system has great potential to be applied in clinical application. the ethics committee of shanghai public health clinical center, fudan university approved the protocol of this study and waived the requirement for patient-informed consent (yj-2020-s035-01).
a search through the medical records in our hospital information system was conducted, and the final dataset consists of 245 patients with covid-19 pneumonia, 105 patients with h1n1 pneumonia, 133 patients with cap and 251 healthy subjects without pneumonia. of the 734 enrolled people, 415 (56.5%) were men, and the mean age was 41.8±15.9 years (range, 2∼96 years). the patient demographic statistics are summarized in table i. the available ct scans were directly downloaded from the hospital picture archiving and communication system (pacs) and non-chest cts were excluded. consequently, 734 three-dimensional (3d) volumetric chest ct exams were acquired for our algorithm study. all the covid-19 cases (mean age, 51.5±15.9 years; range, 16∼83 years) and h1n1 cases (mean age, 28.5±14.6 years; range, 4∼78 years) were acquired from january 20 to february 24, 2020 and from may 24, 2009 to january 18, 2010, respectively. all patients were diagnosed according to the diagnostic criteria of the national health commission of china and confirmed by rt-pcr detection of viral nucleic acids. patients with normal ct imaging were excluded. the cap patients (mean age, 48.5±17.4 years; range, 8∼96 years) and the healthy, non-pneumonia subjects (mean age, 32.4±11.8 years; range, 2∼73 years) were randomly selected between january 3, 2019 and january 30, 2020. all the cap cases were confirmed positive by bacterial culture, and the healthy, non-pneumonia subjects undergoing physical examination had normal ct imaging. to improve the algorithm framework and fairly demonstrate its performance, we do not use any ct volumes from re-examination, that is, only one 3d volumetric ct exam per patient is enrolled in our dataset. as shown in figure 2, all eligible patients were then randomized into a training set and a testing set using random computer-generated numbers.
unlike other studies [45], [47], which employ a small number of cases (10%∼15%) for testing, we utilize around 40% of each category to evaluate the effectiveness and practicability of our system. the annotation was performed at the patient and slice levels. first of all, each ct volume was automatically labeled with a one-hot category vector based on the ct reports and clinical diagnosis (i.e., 0: healthy; 1: covid-19; 2: h1n1; 3: cap). considering that each volumetric exam contains 512 × 512 images with a varying number of slices from 24∼495, for training, five experts subsequently annotated each ct slice following four principles: (1) the quality of annotation is supervised by a senior clinician; (2) if a slice is determined to have any lesion, it is labeled with the corresponding ct volume's category; (3) except for healthy cases, all slices from other cases considered normal are discarded; (4) all slices from healthy people are annotated as healthy. note that we evaluate our model on the whole ct volume (i.e., realistic, arbitrary-length data); the discarded slices are only removed for training. eventually, the number of slices annotated for the four categories is listed in table i. figure 3 shows the schematic of our proposed multi-task multi-slice deep learning system (m³lung-sys), which consists of two components, the image preprocessing and the classification networks. specifically, the image preprocessing receives raw ct exams and prepares them for model training or inference (in section iii-a). the classification network is divided into two subnets (stages), i.e., the slice-level and patient-level classification networks, for joint covid-19 screening and localization. concretely, the slice-level classification network is trained with multiple ct slice images and predicts the corresponding slice-level categories (in section iii-b), i.e., healthy, covid-19, h1n1 or cap.
besides, the patient-level classification network has only four layers and takes as input a volume of ct slice features extracted by the former network, instead of images, to output the patient-level labels (in section iii-c). both classification networks are trained separately due to their different tasks, but can be used concurrently in an end-to-end manner for efficiency. more importantly, for cases classified as positive (i.e., covid-19, h1n1 or cap), our system can locate suspected areas of abnormality without any pixel-level annotations (in section iii-b). the pixel value of a ct image reflects the absorption rate of different human tissues (e.g., bone, lung, kidney) to x-rays, which is measured in hounsfield units (hu) [49]. if we directly apply raw images for classification, this will inevitably introduce noise or irrelevant information, such as the characteristics of the equipment, making the performance of the model inaccurate and unreliable. consequently, according to priors from radiologists, we introduce two effective yet straightforward approaches for preprocessing. 1) lung crop: given chest ct images, the lungs are one of the most important organs observed by radiologists to check whether there are abnormalities. considering the extreme cost and time consumption of manual labeling, instead of training an extra deep learning network for lung segmentation [47], [48], [50], we propose a hand-crafted algorithm to automatically segment the image into 'lungs' and 'other', and then crop the area of the lungs using the minimum bounding rectangle with a given margin. as illustrated in figure 4, the details of the algorithm for lung segmentation and cropping are as follows: • step 1: load the raw ct scan image (figure 4(a)). • step 2: set a threshold to separate the lung area from other tissues, such as bone and fat (figure 4(b)). in this paper, we set the hu threshold as t_hu = −300.
• step 3: to alleviate the effect of the 'tray', which the patient lies on during ct scanning, we apply a morphological opening operation [51] (figure 4(c)). specifically, we set the kernel size as 1 × 8. • step 4: remove the background (e.g., tray, bone and fat) based on 8-connected component labeling [52] (figure 4(d)). • step 5: apply the morphological opening operation again to eliminate the noise caused by step 3 (figure 4(e)). • step 6: compute the minimum bounding rectangle with a margin of 10 pixels for lung cropping and then resize the cropped image (figure 4(f)). 2) multi-value window-leveling: in order to simulate the window-leveling process a radiologist performs when looking at ct scans, we further apply multi-value window-leveling to all images. more concretely, the value of the window center is assigned randomly from −700 to −500, and the window width is assigned a constant of 1200. this preprocessing provides at least two benefits: (1) generating many more ct samples for training, i.e., data augmentation; (2) during inference, the assessment based on multi-value window-leveled ct images will be more accurate and reliable. after the above-mentioned image preprocessing, slices from ct volumes are first fed into the slice-level classification network. considering the outstanding performance achieved by residual networks (resnets) [53] on the 1000-category image classification task, we utilize resnet-50 [53] as our backbone and initialize it with imagenet [54] pretrained weights. this network consists of four blocks (a.k.a. resblock1∼4) with a total of 50 layers, including convolutional layers and fully connected layers. each block has a similar structure, but a different number of layers. the skip connections and identity mappings in the blocks make it feasible to stack deeper layers to learn stronger representations.
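the lung segmentation/cropping steps (steps 1∼6) and the window-leveling described above can be sketched as follows. this is a minimal numpy/scipy sketch, not the authors' implementation: the rescaling of windowed hu values to [0, 1], the 3 × 3 kernel in step 5, and the removal of border-touching components in step 4 are our assumptions.

```python
import numpy as np
from scipy import ndimage


def window_level(hu, center, width=1200):
    """Clip HU values to the window [center - width/2, center + width/2].

    Rescaling the clipped values to [0, 1] is an assumption; the paper only
    specifies the window center range and the width of 1200.
    """
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)


def crop_lungs(hu, t_hu=-300, margin=10):
    """Segment the lungs by HU thresholding, clean the mask with morphological
    opening and connected-component filtering, then crop the minimum bounding
    rectangle with a fixed margin (steps 2-6 in the text)."""
    # step 2: voxels darker than the threshold are candidate lung/air regions
    mask = hu < t_hu
    # step 3: opening with a 1x8 kernel suppresses the thin scanner tray
    mask = ndimage.binary_opening(mask, structure=np.ones((1, 8), bool))
    # step 4: 8-connected component labeling; discard components touching the
    # image border (outside air) -- an assumed realization of "remove background"
    labels, _ = ndimage.label(mask, structure=np.ones((3, 3), int))
    border = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])
    for lb in border:
        mask[labels == lb] = False
    # step 5: a second opening (3x3 kernel assumed) removes residual speckle
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3), bool))
    # step 6: minimum bounding rectangle plus a 10-pixel margin
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return hu  # nothing segmented; fall back to the full slice
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, hu.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, hu.shape[1])
    return hu[y0:y1, x0:x1]
```

resizing the crop back to the network input size (omitted here) would follow step 6.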
for the purpose of pneumonia classification and alleviating the limitations discussed in section i, we improve the network from three aspects, i.e., multi-task learning for radiologist-like diagnosis, weakly-supervised learning for slice-level lesion localization (attention) and coordinate maps for learning location information, as shown in figure 5. 1) multi-task learning: usually, given a ct slice, a radiologist will gradually check for abnormalities and make a decision according to them. to act like an experienced radiologist, we introduce a multi-task learning scheme [55] by dividing the network into two stages. specifically, image features obtained from the first three resblocks are fed into an extra classifier to determine whether they have any lesion characteristics. then, the features are further passed through resblock4 to determine the fine-grained category, i.e., healthy, covid-19, h1n1 or cap. 2) weakly-supervised learning for lesion localization: instead of using pixel-wise annotations or bounding box labels for learning to locate infection areas, we devise a weakly-supervised learning approach, that is, we employ only the category labels. specifically, the weights of the extra classifier described in sec. iii-b.1 have a dimension of 2 × d, where d is the dimension of the feature and '2' denotes the number of classes (i.e., 'with lesion' and 'without lesion'). these learned weights can be regarded as two prototypical features of the corresponding two classes. [figure 5 caption: the details of our proposed slice-level classification network. we improve the network from three aspects: (1) multi-task learning to diagnose like a radiologist; (2) weakly-supervised learning for slice-level lesion localization (attention); (3) coordinate maps for location information. the symbol '©' indicates a concatenation operation.] similar to the class activation
map [56], we first select one prototypical feature according to the predicted class, and then calculate the distance between it and each point of the image feature extracted from the first three resblocks. intuitively, a closer distance between a point and the prototypical feature of 'with lesion' indicates that the area this point maps to in the input ct slice has a higher probability of being an infection region, e.g., ggo. as one output of our m³lung-sys, the generated location maps complement the final predicted diagnosis and provide interpretability for our network, making the assistance to clinicians more comprehensive and flexible. more visualization samples are demonstrated in figures 10, 11 and 12. furthermore, we regard the lesion location map as an attention map and take full advantage of it to help the slice-level differential diagnosis, as shown in figure 5. 3) coordinate maps: from the literature [57], it is known that infections of covid-19 have several spatial characteristics. for example, they frequently distribute bilaterally in the lungs, predominantly in the peripheral lower zones. nevertheless, convolutional neural networks primarily extract texture features. to explicitly capture the spatial information, and inspired by [58], we integrate our slice-level classification network with coordinate maps (h × w × 3) containing three channels, where h and w are the height and width of the image feature extracted from the first three resblocks, to facilitate the distinction among covid-19, h1n1 and cap. the first two channels of the coordinate maps are filled with the coordinates x ∈ [0, w) and y ∈ [0, h) respectively, and we further normalize them to fall in the range [−1, 1]. the last channel encodes the distance d from the point (x, y) to the center (0, 0), i.e., d = √(x² + y²).
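the coordinate maps just defined, together with the prototype-based localization just described, can be sketched in a few lines of numpy. the dot-product similarity in `lesion_location_map` is an assumption: the text only states that a 'distance' to the prototypical feature is computed, and the dot product is the choice made by the class activation map approach it cites.

```python
import numpy as np


def coordinate_maps(h, w):
    """Build the three-channel coordinate map described in the text:
    x in [0, W) and y in [0, H) normalized to [-1, 1], plus the distance
    d = sqrt(x^2 + y^2) to the center (0, 0)."""
    xn = 2 * np.arange(w, dtype=np.float32) / max(w - 1, 1) - 1
    yn = 2 * np.arange(h, dtype=np.float32) / max(h - 1, 1) - 1
    xg, yg = np.meshgrid(xn, yn)               # each of shape (h, w)
    d = np.sqrt(xg ** 2 + yg ** 2)             # distance to the center
    return np.stack([xg, yg, d], axis=-1)      # (h, w, 3)


def lesion_location_map(features, prototype):
    """CAM-style localization map: similarity between the 'with lesion'
    classifier weight (prototype, shape (d,)) and every spatial position of
    the feature map (shape (h, w, d)). Dot product assumed as the metric."""
    return features @ prototype                # (h, w) heat map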
specifically, these three additional channels are fed into resblock4 together with the image feature and attention map to learn representations with spatial information. as mentioned in section i, performing diagnosis or risk assessment on only slice-level ct images is of limited value to clinicians. although several studies [47], [48] take advantage of temporal information with 3d cnns for patient-level differential diagnosis, they require thousands of patient-level samples for deep model training, which makes the cost particularly high. to overcome these limitations, we further propose a patient-level classification network. it takes a volume of ct slice-level features as input rather than 3d images, and comprises only four layers, allowing it to be trained with lower hardware, time and data costs. details are described below. note that we concatenate the features from resblock3 and resblock4 in section iii-b as the input. 1) feature refinement and aggregation head: inspired by [12], [59], we introduce a three-layer head to conduct feature refinement and aggregation, so that the image features from different slices can be correlated with each other and aggregated into one final feature for patient-level classification. the key intuition is to utilize the attention mechanism to exploit the correlation between different ct slices, and to accomplish the refinement and aggregation based on the explored correlation. as shown in figure 6, the head includes a feature refinement module with two layers and a feature aggregation module with one layer, whose structures are similar. formally, for the feature refinement module, given a volume of ct image features with feature dimension d, we first utilize three parallel fc layers to map the input to three different feature spaces for dimension reduction (d′ < d) and self-attention [12].
then, we calculate the distance, as attention, between each pixel in different slices using features from the first two spaces, and refine the features from the last space based on the attention. finally, another fc layer is employed to expand the feature dimension back to d, so that the skip-connection operation [53] can be applied. similar to [59], we also apply a multi-head mechanism to strengthen the refinement ability. without loss of generality, we define the input volume feature as f ∈ r^(n×d), where n is the number of slices; the overall formulation of our refinement module can then be expressed in the standard self-attention form as f′ = f + f_θ4(h · f_θ3(f)) with h = softmax(f_θ1(f) f_θ2(f)ᵀ), where h indicates the correlation between each pixel in different slices, f_θi indicates the i-th fc layer with parameter θ_i, and we omit the reshape operation for simplicity. with regard to the feature aggregation module, its structure, as well as its equations, is similar to the refinement module, except that we remove the multi-head mechanism and the last fc layer, and replace the first fc layer with a learnable parameter k ∈ r^(1×d), so that the total of n ct image features can be aggregated into one. details can be found in figure 6(c). 2) multi-scale learning: if the number of slices with lesions in the early stage is relatively small (i.e., < n), key information may be lost when aggregating features from n × d to 1 × d. therefore, we introduce a multi-scale learning mechanism to aggregate features from different scales. as illustrated in figure 6(a), given a set of scales s = [s_1, s_2, . . . , s_k], for each scale s_j, we first divide the input feature f ∈ r^(n×d) evenly into s_j parts; then, a shared feature refinement and aggregation head is applied to each part. in the end, we concatenate the set of aggregated features from all parts of all scales, and feed it into one fc layer to reduce the dimension from (s_1 + s_2 + · · · + s_k) · d to d as the final patient-level feature for classification.
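a minimal, framework-agnostic sketch of the refinement and aggregation modules follows. this is a toy illustration under stated assumptions: a single attention head, bias-free fc layers, a scaled softmax, and arbitrary widths; the paper's multi-head mechanism and exact layer shapes are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)


def linear(d_in, d_out):
    """A toy fully connected layer (random weights, no bias) for illustration."""
    w = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    return lambda x: x @ w


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def make_refinement(d, d_red):
    """Refinement module: three parallel fc layers map F (n, d) into
    query/key/value spaces of reduced width d_red < d; the attention between
    slices refines the value features; a last fc layer expands back to d so a
    skip connection can be added (single head for brevity)."""
    q, k, v = linear(d, d_red), linear(d, d_red), linear(d, d_red)
    expand = linear(d_red, d)

    def refine(f):
        att = softmax(q(f) @ k(f).T / np.sqrt(d_red))  # (n, n) slice correlations
        return f + expand(att @ v(f))                   # skip connection
    return refine


def make_aggregation(d):
    """Aggregation module: a learnable key in R^{1 x d} replaces the first fc
    layer, collapsing the n slice features into a single patient feature."""
    key = rng.standard_normal((1, d))
    k, v = linear(d, d), linear(d, d)

    def aggregate(f):
        att = softmax(key @ k(f).T / np.sqrt(d))        # (1, n) weights over slices
        return att @ v(f)                               # (1, d) patient feature
    return aggregate
```

multi-scale learning would simply split the n slice features into s_j chunks, run a shared head on each chunk, and concatenate the results before a final fc layer.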
we implement our framework with pytorch [60]. all ct slices are resized to 512×512. we set the hyper-parameters d and h as 512 and 12 respectively, and use four scales in the feature refinement and aggregation head, i.e., s = [1, 2, 3, 4]. for training, the slice-level/patient-level classification network is trained with two/one nvidia 1080ti gpus for a total of 110/90 epochs; the initial learning rate is 0.01/0.001 and is gradually reduced by a factor of 0.1 every 40/30 epochs. both classification networks are trained separately with the standard cross-entropy loss function. random flipping is adopted for data augmentation. during inference, our system is an end-to-end framework, since the input of the patient-level classification network is the output of the slice-level one, so that it can be applied efficiently. we additionally set the window center as [−700, −600, −500] for multi-value window-leveling and average the final predicted features/scores for assessment. for the statistical analysis, we apply a comprehensive set of metrics to thoroughly evaluate the performance of the model, following standard protocol. concretely, 'sensitivity', known as the true positive rate (tpr), indicates the percentage of positive patients who are correctly discriminated. 'specificity', referred to as the true negative rate (tnr), represents the percentage of negative persons who are correctly classified. 'accuracy' is the percentage of correctly classified subjects, i.e., true positives (tp) and true negatives (tn). 'false positive/negative error' (fpe/fne) measures the percentage of negative/positive persons who are misclassified as positive/negative. 'false disease prediction error' (fdpe) calculates the percentage of positive persons whose disease types (i.e., covid-19, h1n1 or cap) are predicted incorrectly. receiver operating characteristic (roc) curves and the area under the curve (auc) are used to show the performance of the classifiers.
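the error metrics defined above can be computed directly from patient-level labels. a sketch, assuming healthy is encoded as class 0 and the three diseases as classes 1∼3 (as in the paper's label scheme):

```python
import numpy as np


def screening_metrics(y_true, y_pred, healthy=0):
    """Accuracy, FPE, FNE and FDPE as defined in the text.

    'Positive' means any disease label; FDPE counts positive persons who are
    predicted as diseased but assigned the wrong disease type.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true != healthy                     # diseased persons
    neg = ~pos                                  # healthy persons
    accuracy = (y_true == y_pred).mean()
    fpe = (y_pred[neg] != healthy).mean()       # healthy called diseased
    fne = (y_pred[pos] == healthy).mean()       # diseased called healthy
    wrong_type = (y_pred[pos] != healthy) & (y_pred[pos] != y_true[pos])
    fdpe = wrong_type.mean()                    # diseased, but wrong disease
    return {"accuracy": accuracy, "fpe": fpe, "fne": fne, "fdpe": fdpe}
```

per-class sensitivity and specificity follow the usual one-vs-rest definitions and are omitted for brevity.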
we also report p-values comparing our model with other competitors to demonstrate the significance level. the main purpose of our system is to assist the diagnosis of covid-19 at a patient level rather than a slice level [40]–[44], which is more significant and practical in real-world applications. therefore, we first evaluate our system on the patient-level testing set with 102 healthy, 96 covid-19, 41 h1n1 and 53 cap cases. the competitors include one 2d cnn based method, covnet [30], and three 3d cnn based models, med3d-50 [61], med3d-18 [61] and decovnet [62]. the results are shown in table ii and figure 7. for overall performance, our system achieves 95.21% accuracy, with only 2.83% false positive error and 4.15% false negative error. although it may be difficult for clinicians to differentiate covid-19 from other kinds of viral pneumonia or cap according to ct features, our system, as expected, gets confused only on a small number of cases, i.e., 1.57% false disease prediction error, which beats the second-best model by a margin of 3.05%. similarly, for h1n1, it obtains approximately 99.6% in both sensitivity and specificity, which is a promising performance. moreover, our system significantly improves the sensitivity for covid-19 from 95% to 99% compared with med3d-18 and decovnet. however, we observe that the sensitivity and specificity for healthy cases are inferior to med3d-18 and decovnet by approximately 2∼4 points; it seems our model is slightly oversensitive to noise. on the other hand, med3d-50 unexpectedly achieves much worse performance on most metrics, in sharp contrast to med3d-18. our explanation is that it may be difficult to train a 3d cnn with so many parameters on a limited dataset, which is consistent with our motivation for using a 2d cnn based network. in addition to performance, we also compare the computation cost between our system and the other competitors.
as shown in table iii, our m³lung-sys takes full advantage of the training data (ct slices and volumes) and has the lowest computation cost, including training time and gpu requirements. combining the results in tables ii and iii, our method achieves better performance with fewer computing resources, which is more practical for assisting diagnosis. 2) slice-level performance: another advantage of our proposed m³lung-sys is that we can flexibly switch between ct slices and volumes as input, i.e., slice-level or patient-level diagnosis. naturally, we further evaluate our model on slices, using a total of 48,818 ct slices from the four categories (i.e., healthy, covid-19, h1n1 and cap) for testing. as shown in figure 8, our model achieves 98.40%, 98.99%, 100.00% and 94.58% auc for the four categories, respectively. this strongly demonstrates the superiority of our proposed m³lung-sys for slice-level diagnosis. d. ablation study 1) improvements in patient-level classification network: it is worth mentioning that our proposed slice-level classification network is strong and the extracted features are very discriminative. even without parameters, simple mathematical operations can obtain competitive results on patient-level diagnosis. meanwhile, the proposed multi-scale mechanism and the refinement and aggregation head are able to further boost performance. to verify this, as shown in table iv, we conduct experiments to demonstrate the improvements with different variants of the patient-level classification network. more specifically, 'non-parametric assessment' denotes a simple variant without parameters for differential diagnosis (we refer readers to appendix i for details). 'max pooling' indicates that the input features are directly aggregated by a max pooling operation. 'single-scale + a. head' refers to a variant without the multi-scale mechanism (i.e., s = [3]) and without the feature refinement module. 'multi-scale + a.
head' is similar to the previous model but applies the multi-scale strategy (i.e., s = [1, 2, 3, 4]). from the results in table iv and figure 9, we highlight the following observations: (1) using only the non-parametric assessment method, we can achieve a competitive accuracy of 94.18%, which suggests that strong feature representations are acquired by our slice-level classification network. however, a big performance gap between 'false positive error' and 'false negative error' also reflects its inferior robustness, since a higher value of the hyper-parameter t may result in more healthy cases being misdiagnosed due to noise. (2) from the results in the second row to the last, the performance on all metrics improves gradually as more specifically designed components are added, which clearly demonstrates the benefits of our proposed feature refinement and aggregation head and the multi-scale mechanism. (3) we notice that the false positive error gets worse when applying the 'single-scale + a. head' method, and decreases dramatically when the multi-scale mechanism is involved. we argue that this does not suggest the inferiority of our proposed 'a. head' (since the overall accuracy is improved by 1%), but rather reflects the importance and rationality of the multi-scale mechanism. 2) improvements in slice-level classification network: to explicitly demonstrate the advantages of our improvements to the slice-level classification network, we compare it with several competitors on patient-level diagnosis. without loss of generality, we choose the non-parametric assessment method with t = 0.99 as the patient-level classification network. concretely, since the backbone of our slice-level classification network is resnet-50, which is widely adopted by other works [30], [42], we directly train a vanilla resnet-50 for four-way classification as a baseline.
based on this, we conduct further experiments by gradually adding different improvements, including multi-task learning, lesion location (attention) maps and coordinate maps. all results are listed in table v. compared with the baseline, our model achieves significant improvement on all four metrics. for example, the overall accuracy is improved from 89.04% to 94.18% and the false positive error is effectively reduced by 12 points. although we obtain a few more failure cases on false disease prediction when utilizing the multi-task mechanism, the number of both false positive and false negative samples is dramatically reduced. the two-task approach, acting like a radiologist, is expected to better distinguish between healthy people and patients. furthermore, if we introduce the coordinate maps, fewer positive samples are misclassified as the wrong type of disease and some negative cases with noise are correctly diagnosed as positive, resulting in a decrease in both the false positive and false disease prediction errors. these results clearly suggest that the components in the slice-level classification network play important roles in extracting discriminative features, e.g., attention maps for awareness of small lesions, and coordinate maps for capturing location diversity among different types of pneumonia. as one of its contributions, our m³lung-sys can implement lesion localization using only category labels, i.e., weakly-supervised learning. to qualitatively evaluate this, we randomly select several ct slices of three categories from the testing set, and show the visualizations of the lesion location maps in figure 10. for each group, the left image is the raw ct slice after lung cropping, and the right image depicts the detected area of abnormality. note that a warmer color indicates that the corresponding region has a higher probability of being infected.
several observations can be made from figure 10: 1) first of all, the quality of the location maps is competitive. all highlighted areas are concentrated on the left or right lung region, and most abnormal manifestations, such as ground-glass opacities (ggo), are completely captured by our model, which is trained without any pixel-level or bounding box labels. in addition, no irrelevant area, such as the vertebrae, skin or other tissues, is mistaken for a lesion. our system can even precisely detect small lesions with relatively high response, as shown in the top-right image of figure 10(a). above all, our system can achieve visual localization of abnormal areas with good interpretability, which is crucial for assisting clinicians in diagnosis and improving the efficiency of medical systems. 2) second, we found that the location map results are consistent with the experience and conclusions of radiologists. several studies have found that covid-19 typically presents ggo with or without consolidation in a predominantly peripheral distribution [63]–[65], which has already been used as guidance for covid-19 diagnosis endorsed by the society of thoracic radiology, the american college of radiology, and rsna [66]. in contrast, h1n1 pneumonia most commonly presents a predominantly peribronchovascular distribution [67], [68]. lobar or segmental consolidation and cavitation suggest a bacterial etiology [10]. therefore, the visualizations of the location maps can reflect the characteristics of lesion distributions to some extent, which may be a valuable indicator for clinicians in analyzing or differentiating these three diseases. to fully demonstrate the practicability and effectiveness of our system, we further simulate its real-world application and present the outputs, i.e., the diagnostic assessment of diseases and the localization of lesions. concretely, we randomly select two covid-19 patients, and feed their ct exams into our system.
the full outputs are illustrated in figures 11 and 12. due to page limitations, we show a ct sequence by sampling every fifth slice. the lesion location maps, with the predicted slice-level diagnosis on the upper right, are attached at the bottom of the corresponding raw ct slices. at the end of the sequence, the probability of each category predicted by the patient-level classification network is provided (i.e., 0: healthy; 1: covid-19; 2: h1n1; 3: cap). as can be seen, our system can accurately locate the lesion areas in each slice, and these areas also have good continuity in sequence, which are very important characteristics for assisting the clinician in diagnosing covid-19. in this paper, we proposed a multi-task multi-slice deep learning system (m³lung-sys) to assist the work of clinicians by simultaneously screening multi-class lung pneumonia, i.e., healthy, covid-19, h1n1 and cap, and locating lesions from both slice-level and patient-level ct exams. different from previous studies, which incur high hardware, time and data costs to train 3d cnns, our system divides this procedure into two stages: slice-level classification and patient-level classification. we first utilize the slice-level classification network to classify each ct slice. the introduced multi-task learning mechanism makes our model diagnose like a radiologist, first checking whether each ct slice contains any abnormality, and then determining what kind of disease it is. both attention maps and coordinate maps are further applied to provide awareness of small lesions and to capture location diversity among different types of pneumonia. with the improvements provided by these components, our system achieves a remarkable performance of 94.18% accuracy, 10.66% false positive error and 0.0% false negative error.
then, to recover the temporal information in ct sequence slices and enable ct volume screening, we propose a patient-level classification network, taking as input a volume of ct slice features extracted from the slice-level classification network. this network achieves feature interaction and aggregation between different slices for patient-level diagnosis. consequently, it further improves our system by dramatically reducing the cases of false positive and false disease prediction and improving the accuracy by 1.1%. above all, from a clinical perspective, our system can perform differential diagnosis with not only a single ct slice, but also a ct volume. the combined outputs of risk assessment (predicted diagnosis) and lesion location maps make it more flexible and valuable to clinicians. for example, they can easily estimate the percentage of infected lung area or quickly check the lesions at any time before making a decision. as illustrated in figures 11 and 12, with these lesion location maps, clinicians are able to understand why the deep learning model gives such a prediction, rather than just facing a statistic. even with this outstanding performance, three limitations remain in our system that need to be addressed. first of all, it is very easy for clinicians to distinguish covid-19 from healthy cases; however, from table ii, we find that our system may still misclassify some healthy people. we examined the failure cases and found that the main challenge lies in the pulsatile artifacts in pulmonary ct imaging. second, our framework, which contains slice-level and patient-level classification networks, is not yet end-to-end trainable. although this only adds negligible training and testing time, we expect that an end-to-end training manner would be more conducive to the learning and combination of spatial and temporal information.
third, our proposed localization maps accurately show the location of abnormal regions, which is valuable to clinicians in assisting diagnosis. however, they still lack the ability to automatically visualize the unique lesion distributions of each disease. in the future, we will attempt to tackle the first and third limitations by improving the attention mechanism to enhance the feature representations; developing the coordinate-map technique further may also be a good option. frankly speaking, the third obstacle is a very challenging and ambitious objective, but we will continuously pursue research along this line. as for the second limitation, we are going to refine our method into a unified framework; through such a training mechanism, spatial and temporal information can complement each other. in conclusion, we proposed a novel multi-task multi-slice deep learning system (m³lung-sys) for multi-class lung pneumonia screening from chest ct images. different from previous 3d cnn approaches, which incur a substantial training cost, our system utilizes two 2d cnn networks, i.e., slice-level and patient-level classification networks, to handle discriminative feature learning in the spatial and temporal domains, respectively. with these special designs, our system can not only be trained at much lower cost, including time, data and gpu resources, but can also perform differential diagnosis with either a single ct slice or a ct volume. more importantly, without any pixel-level annotation for training, our system is able to simultaneously output the lesion localization for each ct slice, which is valuable to clinicians for diagnosis, allowing them to understand why our system gives a particular prediction, rather than just being faced with a statistic.
according to the remarkable experimental results on 292 testing cases with multi-class lung pneumonia (102 healthy people, 96 covid-19 patients, 41 h1n1 patients and 53 cap patients), our system has great potential for clinical application. given a volume of a ct exam with $n$ slices in sequence, we feed them into our slice-level classification network to obtain two kinds of probabilities for each slice, $P_{\text{lesion}} = \{(p^0_{l,i}, p^1_{l,i})\}_{i=1}^{n}$ and $P_{\text{multi-class}} = \{(p^0_{m,i}, p^1_{m,i}, p^2_{m,i}, p^3_{m,i})\}_{i=1}^{n}$, where $P_{\text{lesion}}$ gives the probability that a ct slice contains any lesion and $P_{\text{multi-class}}$ denotes the predicted probabilities of the multi-class pneumonia assessment (i.e., 0: healthy; 1: covid-19; 2: h1n1; 3: cap). then, we derive the final probabilities $p^k_i$ of the four classes for each slice from $P_{\text{lesion}}$ and $P_{\text{multi-class}}$. intuitively, if all or most of the slices are predicted as healthy, the patient has a very high chance of being healthy; otherwise, he will be diagnosed with either covid-19, h1n1 or cap, according to the ct imaging manifestation. to simulate this process, our proposed non-parametric holistic assessment on the patient level counts, for each class $k$, the number of slices $N_k = \sum_{i=1}^{n} \delta(\max_j p^j_i - p^k_i)$, where $\delta(x) = 1$ if $x = 0$ and $\delta(x) = 0$ otherwise. $t$ is a hyper-parameter that controls the degree of 'most of', i.e., the required proportion of healthy (normal) slices. normally, the chest ct slices of healthy people should be nearly or completely normal; therefore, without loss of generality, in this paper we set it to a reasonable and acceptable value, i.e., $t = 0.99$.
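the patient-level decision rule just described (count the per-class slice labels $N_k$ and declare a patient healthy only if the fraction of healthy slices reaches $t$) can be sketched as follows. this is an illustrative reading of the rule in the text, not the authors' released implementation, and the helper name is hypothetical.

```python
def patient_level_assessment(p_slice, t=0.99):
    """Non-parametric patient-level aggregation (illustrative sketch).

    p_slice: list of per-slice 4-class probabilities
             [healthy, COVID-19, H1N1, CAP].
    t:       minimum fraction of healthy slices for a 'healthy' verdict.
    """
    # per-slice hard labels (argmax over the four classes)
    labels = [max(range(4), key=lambda k: p[k]) for p in p_slice]
    counts = [labels.count(k) for k in range(4)]   # N_k in the text
    if counts[0] / len(labels) >= t:               # 'most of' slices normal
        return 0                                   # healthy
    # otherwise report the dominant abnormal class
    return max(range(1, 4), key=lambda k: counts[k])

# example: 2 of 100 slices flagged as COVID-19
probs = [[0.9, 0.05, 0.03, 0.02]] * 98 + [[0.1, 0.8, 0.05, 0.05]] * 2
print(patient_level_assessment(probs))  # 1 (COVID-19): healthy share 0.98 < t
```

with $t = 0.99$ even two abnormal slices out of 100 are enough to override the 'healthy' verdict, which matches the intuition that healthy chest ct should be nearly or completely normal.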
the authors would like to thank all the clinicians, patients and researchers who gave valuable time, effort and support to this project, especially in data collection and annotation. additionally, we appreciate the contributions to this paper by wenxuan wang (help with paper revision), junlin hou (suggestions on 3d baselines) and longquan jiang (assistance with data processing). key: cord-252343-a85wz2hs authors: skoda, eva-maria; teufel, martin; stang, andreas; jöckel, karl-heinz; junne, florian; weismüller, benjamin; hetkamp, madeleine; musche, venja; kohler, hannah; dörrie, nora; schweda, adam; bäuerle, alexander title: psychological burden of healthcare professionals in germany during the acute phase of the covid-19 pandemic: differences and similarities in the international context date: 2020-08-07 journal: j public health (oxf) doi: 10.1093/pubmed/fdaa124 sha: doc_id: 252343 cord_uid: a85wz2hs background: healthcare professionals (hps) are the key figures keeping the healthcare system running during the covid-19 pandemic and are thus one of its most vulnerable groups. to this point, the extent of this psychological burden, especially in europe and germany, remains unclear. this is the first study investigating german hps after the covid-19 outbreak. methods: we performed an online-based cross-sectional study after the covid-19 outbreak in germany (10–31 march 2020).
in total, 2224 hps (physicians n = 492, nursing staff n = 1511, paramedics n = 221) and 10 639 non-healthcare professionals (nhps) were assessed for generalized anxiety (generalized anxiety disorder-7), depression (patient health questionnaire-2), current health status (eq-5d-3l), covid-19-related fear and subjective level of information regarding covid-19. results: hps showed less generalized anxiety, depression and covid-19-related fear and a higher health status and subjective level of information regarding covid-19 than the nhps. within the hp groups, nursing staff were the most psychologically burdened. subjective levels of information regarding covid-19 correlated negatively with generalized anxiety levels across all groups. among hps, nursing staff showed the highest and paramedics the lowest generalized anxiety levels. conclusions: in the context of covid-19, german hps seem to be less psychologically burdened than nhps, and also less burdened compared with existing international data. the covid-19 pandemic reached germany in late february 2020. it brought not only objective medical challenges for healthcare professionals (hps) but also reports and findings from other, more affected countries. due to exponentially increasing case numbers and large numbers of patients requiring intensive care, those more affected countries faced unexpected challenges. countries such as china, italy, spain, brazil and the usa were and are currently reaching the limits of their healthcare systems in the context of this pandemic: something that was previously unimaginable in industrialized countries. 1 such a development seems to have been avoided in germany but is not completely ruled out for the future. in the face of a renewed european and a further worldwide escalation, there is no shortage of uncertainty and concern among hps.
it is already known from countries other than germany that hps are under elevated psychological stress during the covid-19 pandemic and show increased levels on various psychometric measures, including anxiety and depression. 2-5 existing evidence, e.g. from china, already shows the extent of the psychological burden on hps; front-line healthcare workers were identified as bearing a particularly heavy psychological burden. 2,6 however, these studies were conducted during the extreme stress phase of the covid-19 epidemic in china. only limited data from other studies suggest that a heightened psychological burden for hps may exist elsewhere, e.g. in the uk. 7 there are, as yet, no comparable data from a time when the health system is still mainly coping normally alongside already population-wide uncertainty, particularly in europe. the german situation to this point is 2-fold: continuing and past restrictions on public life, contact restrictions, empty supermarket shelves and daily updated, increasing case numbers are still coupled with a hospital system that is and was largely still able to cope normally. this is combined with mortality rates that are, for the moment, low by international comparison. 8, 9 although the german population already shows itself burdened in terms of generalized anxiety, depression and distress, which is in line with evidence from other countries, 10,11 customized low-threshold interventions, offline as well as online, are needed and already implemented. [12] [13] [14] the aim of this study was to close this research gap and provide initial findings on the psychological burden of german hps after the covid-19 outbreak. it was hypothesized that the group of hps in germany would mirror the existing, population-wide elevated psychological burden 15 to an even greater extent by being on the 'front line', as has already been observed in previous studies in other countries.
2,3 a nationwide, online-supported cross-sectional survey was conducted. participants were recruited via online channels and official channels, e.g. the websites of clinics. the survey period was 10–31 march 2020. it was during this period that the first increased numbers of covid-19 cases in germany, increasingly restrictive government regulations, the closure of european borders and the restriction of individual freedoms occurred. in total, 12 863 people completed the questionnaire, of whom we identified 2224 people in the medical sector as hps and 10 639 as non-healthcare professionals (nhps). hps came from three different groups: physicians, nursing staff and paramedics. the sample description can be seen in table 1. all participants gave their written consent to participate in the survey and to the evaluation of the collected data. the study was conducted in accordance with the ethical guidelines of the declaration of helsinki and was approved by the local ethics committee of the faculty of medicine. general socio-demographic variables were collected. validated psychometric instruments were used to assess psychological burden: the generalized anxiety disorder-7 (gad-7) to measure generalized anxiety symptoms over the course of the last 2 weeks (7 items, 4-point likert scale from 0 = never to 3 = nearly every day), 16 the patient health questionnaire-2 (phq-2) to screen for depression symptoms over the course of the last 4 weeks (2 items, 4-point likert scale from 0 = never to 3 = nearly every day) 17 and the visual analogue scale of the euroqol eq-5d-3l to assess current health status (ranging from 0 [worst imaginable health status] to 100 [best imaginable health status]).
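the sum-scoring of these item-level likert responses is straightforward; a minimal sketch follows, with hypothetical item responses (the helper name and example data are not from the study).

```python
def sum_score(item_responses, n_items, max_per_item=3):
    """Sum score for a Likert-type questionnaire, each item coded
    0 (never) to 3 (nearly every day); the GAD-7 has 7 items
    (range 0-21), the PHQ-2 has 2 items (range 0-6)."""
    assert len(item_responses) == n_items, "wrong number of items"
    assert all(0 <= r <= max_per_item for r in item_responses), "out-of-range response"
    return sum(item_responses)

# hypothetical respondent
print(sum_score([2, 1, 2, 1, 2, 1, 2], n_items=7))  # GAD-7 -> 11
print(sum_score([1, 2], n_items=2))                 # PHQ-2 -> 3
```

the range checks guard against miscoded responses before scores enter the group comparisons described below.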
18 additionally, based on scientific and media reports, multiple items and item scales were formed in expert consensus with regard to 'covid-19-related fear' (one item, 7-point likert scale from 1 = very low to 7 = extremely high) and 'the subjective level of information regarding covid-19' (3 items: i feel informed about covid-19; i feel informed about measures to avoid an infection with covid-19; i understand the health authorities' advice regarding covid-19; 7-point likert scale from 1 = complete disagreement to 7 = complete agreement). scale reliability was tested using cronbach's α for internal consistency; 'the subjective level of information regarding covid-19' showed high internal consistency (cronbach's α = 0.801). the descriptive and inferential statistics were performed with r 3.6.1 (r core team, 2019). sum scores for the gad-7 and phq-2 and mean scores for all other scales were calculated. to assess the hypotheses, the 95% confidence intervals of the association measures are reported for each difference between the groups, after having assessed the global mean difference on the respective scale; hence, the assumptions were assessed based on their precision. [19] [20] [21] generally, test statistics and p values are not reported, given that at this sample size even the slightest deviation from equivalence results in extremely low p values. when the confidence interval (ci) of the effect size covers 0, we assume there is no effect. when this is not the case, we use the guidelines by sawilowsky 22 to evaluate the importance of the effect: a cohen's d of ∼0.2 is considered a small, ∼0.5 a medium-sized and ∼0.8 a large effect. due to the large sample size and the intuitive and common interpretation of the effect sizes, parametric methods were also used despite violation of the normality assumption.
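the effect-size reasoning above can be illustrated with a small sketch: cohen's d computed with a pooled standard deviation (the exact variance pooling used alongside welch's t-test in the study is an assumption here) and the sawilowsky/cohen benchmarks for labeling its magnitude.

```python
import math

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation (sketch; pooled-SD
    formulation assumed, not necessarily the study's exact variant)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / sp

def effect_label(d):
    """Rough benchmarks as used in the text: ~0.2 small, ~0.5 medium, ~0.8 large."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# toy groups with a two-point mean difference
x = [1, 2, 3, 4, 5]
y = [3, 4, 5, 6, 7]
d = cohens_d(x, y)
print(round(d, 3), effect_label(d))  # -1.265 large
```

reporting d with its ci, rather than a p value, is what lets the authors judge effects by precision at this sample size.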
23 for mean comparisons, welch's t-test with cohen's d as the association measure was used; for multiple mean comparisons, between-subject analysis of variance with the association measure η² and subsequent post hoc t-tests with tukey error correction. a complete summary of all post hoc group comparisons after calculation of the variance analyses and post hoc tests can be found in the supplementary materials. to clarify the association of the subjective level of information regarding covid-19 with other variables, spearman correlations between the variables were performed. to subsequently test the interdependence of the variables, a robust linear m-estimator regression was performed (rlm from the r package mass, 2002). all spearman correlations, including confidence intervals, between the measures are provided in the supplementary material. following the results of the correlation analyses, prevalence ratios for the number of participants with moderate generalized anxiety in relation to the subjective level of information regarding covid-19 were explored. levels of generalized anxiety were dichotomized using the gad-7 sum score of ≥10 24 into low levels of generalized anxiety (<10) and moderate to high levels of generalized anxiety (≥10). this was compared with a pre-covid-19 standard population, in which 5.9% of the population scored ≥10. 25 the subjective level of information regarding covid-19 was split by the median into high (≥median) and low (
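the dichotomization and prevalence-ratio computation described above can be sketched as follows; the data and helper name are hypothetical, not taken from the study.

```python
def prevalence_ratio(scores_exposed, scores_reference, cutoff=10):
    """Prevalence ratio of moderate-to-high generalized anxiety
    (GAD-7 sum score >= cutoff) in an exposed vs. a reference group.
    Illustrative sketch with hypothetical data, not the study's code."""
    p_exp = sum(s >= cutoff for s in scores_exposed) / len(scores_exposed)
    p_ref = sum(s >= cutoff for s in scores_reference) / len(scores_reference)
    return p_exp / p_ref

# hypothetical GAD-7 sum scores: low- vs. high-information group
low_info  = [12, 11, 10, 14, 8, 10, 7, 13]   # 6/8 score >= 10
high_info = [5, 9, 10, 4, 6, 8, 11, 7]       # 2/8 score >= 10
print(prevalence_ratio(low_info, high_info))  # 3.0
```

a ratio above 1 would indicate that moderate-to-high anxiety is more prevalent among the less-informed group, matching the negative correlation reported between subjective level of information and generalized anxiety.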